import another program as module Question: I searched for my problem on Stack Overflow and didn't find a solution, so I'm asking here. I'm learning Python with the book "O'Reilly: Introducing Python", and in chapter 5, in the module section, the author says you can save a program and use it in another program as a module when the two programs are saved in one directory. This is the first program, used as the module, report.py:

    def get_description(): # see the docstring below?
    """Return random weather, just like the pros"""
    from random import choice
    possibilities = ['rain', 'snow', 'sleet', 'fog', 'sun', 'who knows']
    return choice(possibilities)

and the main program is this:

    import report
    description = report.get_description()
    print("Today's weather:", description)

It's a simple program, I know. When I try to import it, it fails with this error:

    Traceback (most recent call last):
      File "H:\python\Lib\weather.py", line 1, in <module>
        import report
      File "H:\python\Lib\report.py", line 2
        """Return random weather, just like the pros"""
        ^
    IndentationError: expected an indented block

I tried changing the directory and copying the files to the Lib folder or Scripts, and this is my sys.path:

    H:\python\Lib
    C:\Windows\System32
    H:\python\Lib\idlelib
    H:\python\python35.zip
    H:\python\DLLs
    H:\python\lib
    H:\python
    H:\python\lib\site-packages

Answer: Just as the error says, you must indent the docstring (and the rest of the function body):

    def get_description(): # see the docstring below?
        """Return random weather, just like the pros"""
        from random import choice
        possibilities = ['rain', 'snow', 'sleet', 'fog', 'sun', 'who knows']
        return choice(possibilities)
Pyspark: how to stream data from a given API URL Question: I was given an API URL and a method getUserPost() which returns the data needed for my data processing function. I am able to get the data by using Client from suds.client as follows:

    from suds.client import Client
    from suds.xsd.doctor import ImportDoctor, Import

    url = 'url'
    imp = Import('http://schemas.xmlsoap.org/soap/encoding/')
    imp.filter.add('filter')
    d = ImportDoctor(imp)
    client = Client(url, doctor=d)
    tempResult = client.service.getUserPosts(user_ids = '', date_from='2016-07-01 03:19:57',
                                             date_to='2016-08-01 03:19:57', limit=100, offset=0)

Now, each tempResult will contain 100 records. I want to stream the data from the given API URL into an RDD for parallelized processing. However, after reading the pySpark.Streaming [documentation](https://spark.apache.org/docs/latest/api/python/pyspark.streaming.html) I can't find a streaming method for a customized data source. Could anyone give me an idea of how to do so? Thank you. Answer: After a while digging, I found out how to solve the problem. I employed [Kafka Streaming](http://kafka.apache.org/documentation.html#quickstart). Basically you need to create a producer from the given API, specifying a topic and port for communication, and then a consumer that listens on that topic and port to start streaming the data. Note that the producer and the consumer must run as different threads in order to achieve real-time streaming.
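A minimal sketch of that setup, assuming a hypothetical local Kafka broker on localhost:9092 (with ZooKeeper on localhost:2181) and a made-up topic name 'api_posts'; the producer thread pushes each record fetched from the API, and a Spark Streaming consumer picks them up:

    # producer side (runs in its own thread/process), using kafka-python
    from kafka import KafkaProducer

    producer = KafkaProducer(bootstrap_servers='localhost:9092')
    for record in tempResult:  # records fetched via suds, as in the question
        producer.send('api_posts', str(record).encode('utf-8'))
    producer.flush()

    # consumer side: a Spark Streaming job reading from the same topic
    from pyspark.streaming import StreamingContext
    from pyspark.streaming.kafka import KafkaUtils

    ssc = StreamingContext(sc, 5)  # sc is an existing SparkContext; 5-second batches
    stream = KafkaUtils.createStream(ssc, 'localhost:2181', 'my-group', {'api_posts': 1})
    stream.map(lambda kv: kv[1]).pprint()  # messages arrive as (key, value) pairs
    ssc.start()
    ssc.awaitTermination()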
How to use split from regex in python and keep your split word? Question: Is there a way to use the split function without losing the word or character that you are splitting on? For example:

    import re
    x = '''\
    1. abcde.
    2. fgh 2.5 ijk.
    3. lmnop
    '''
    print(x)
    listByNum = re.split(r'\d\.\D', x)
    print(listByNum)

[![the output](http://i.stack.imgur.com/5es14.png)](http://i.stack.imgur.com/5es14.png) I want to keep the digit in the list. Another example:

    import re
    x = '''\
    I love stackoverflow.
    I love food.\nblah blah blah.
    '''
    print(x)
    listBySentences = re.split(r'\.', x)
    print(listBySentences)

[![output for example 2](http://i.stack.imgur.com/jgjnJ.png)](http://i.stack.imgur.com/jgjnJ.png) Answer: You can use parentheses around the expression in question; as documented for re.split, captured groups are kept in the result list:

    import re
    x = '''\
    1. abcde.
    2. fgh 2.5 ijk.
    3. lmnop
    '''
    print(x)
    listByNum = re.split(r'(\d\.\D)', x)
    print(listByNum)
    # ['', '1.\n', 'abcde.\n', '2.\n', 'fgh 2.5 ijk.\n', '3.\n', 'lmnop\n    ']

* * * To even _clean_ your data afterwards, you can use a list comprehension, like so:

    listByNum = [num.strip() for num in re.split(r'(\d\.\D)', x) if num]
    # ['1.', 'abcde.', '2.', 'fgh 2.5 ijk.', '3.', 'lmnop']

* * * To keep the digits within the split elements, you can use the newer [**regex**](https://pypi.python.org/pypi/regex) module, which supports splitting on empty strings:

    import regex as re
    x = same string as above
    listByNum = [num.strip() for num in re.split(r'(?V1)(?=\d\.\D)', x) if num]
    # ['1.\nabcde.', '2.\nfgh 2.5 ijk.', '3.\nlmnop']
Can someone explain to me this Turing machine code? Question: I am a newbie to Python, so I don't really understand this. It's some kind of Turing machine that should write binary numbers, but I can't figure out what's going on after these rules:

    from collections import defaultdict
    import operator

    # Binary counter
    # (Current state, Current symbol) : (New State, New Symbol, Move)
    rules = {
        (0, 1): (0, 1, 1),
        (0, 0): (0, 0, 1),
        (0, None): (1, None, -1),
        (1, 0): (0, 1, 1),
        (1, 1): (1, 0, -1),
        (1, None): (0, 1, 1),
    }

    # from here I don't really understand what's going on
    def tick(state=0, tape=defaultdict(lambda: None), position=0):
        state, tape[position], move = rules[(state, tape[position])]
        return state, tape, position + move

    system = ()
    for i in range(255):
        system = tick(*system)
        if(system[2] == 0):
            print(map(operator.itemgetter(1), sorted(system[1].items())))

Answer: It is a state machine. At each tick a new state is computed based on the old state and the contents of the tape at 'tape position' in this line:

    state, tape[position], move = rules[(state, tape[position])]

This statement is a destructuring assignment. The right-hand side of the assignment will give you an entry of rules, which is a tuple of three elements. These three elements will be assigned to state, tape[position] and move respectively. Another thing that might puzzle you is the line:

    system = tick(*system)

especially the *. In this line the (processor clock) tick function is called with the contents of the tuple 'system' unpacked into separate parameters. I hope this is clear enough, but the fact that you're interested in a Turing machine tells me that you've got a knack for computer programming... ;)
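To illustrate the two constructs the answer mentions, here is a minimal standalone sketch (the values are made up for the example):

    # destructuring (tuple) assignment: the 3-tuple on the right
    # is unpacked into the three targets on the left
    state, symbol, move = (1, 0, -1)

    # argument unpacking with *: the tuple's elements become
    # separate positional arguments of the call
    def tick(state, tape, position):
        return state, tape, position + 1

    system = (0, {}, 0)
    system = tick(*system)   # equivalent to tick(0, {}, 0)
    print(system)            # -> (0, {}, 1)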
Tkinter image does not show Question: I've been trying to set up a GUI using Python and the Tkinter package, but I am having a problem where the image does not show. Here is my code:

    import Tkinter as tk
    from PIL import Image, ImageTk

    class Application(tk.Frame):
        def __init__(self, master=None):
            tk.Frame.__init__(self, master)
            self.grid()
            self.createWidgets()

        def createWidgets(self):
            self.image = Image.open("my_image.png")
            self.photo = ImageTk.PhotoImage(self.image)
            self.label = tk.Label(self, image=self.photo)
            self.label.image = self.photo # keep a reference!
            self.label.grid(row=0, column=1)

    app = Application()
    app.master.title("Sample application")
    app.mainloop()

I've included the keep-a-reference line suggested by others; however, it does not seem to be working. I'm using OS X 10.10.4 and Python 2.7.12 :: Anaconda custom (x86_64). Thanks! Answer: I tested this on Miniconda 2 on Linux and your sample worked just fine. What do you mean by Python 2.7.12 :: Anaconda custom? Is it a self-built Python?
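If it still fails on your machine, a quick way to isolate whether the PIL side or the Tk side is the problem is a stripped-down script like this (a minimal sketch, assuming my_image.png sits next to the script):

    import Tkinter as tk
    from PIL import Image, ImageTk

    root = tk.Tk()
    img = Image.open("my_image.png")
    print img.size, img.mode            # if this prints, PIL loaded the file fine
    photo = ImageTk.PhotoImage(img)     # if this raises, the Tk/PIL bridge is at fault
    tk.Label(root, image=photo).pack()
    root.mainloop()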
Python: Convert a text file multi-level JSON Question: I am writing a script in `python` to go recursively through each file, create a JSON object from the file that looks like this: target_id length eff_length est_counts tpm ENST00000619216.1 68 33.8839 2.83333 4.64528 ENST00000473358.1 712 428.88 0 0 ENST00000469289.1 535 306.32 0 0 ENST00000607096.1 138 69.943 0 0 ENST00000417324.1 1187 844.464 0 0 ENST00000461467.1 590 342.551 3.44007 0.557892 ENST00000335137.3 918 588.421 0 0 ENST00000466430.5 2748 2405.46 75.1098 1.73463 ENST00000495576.1 1319 976.464 11.1999 0.637186 This is my script: import glob import os import json # define datasets # Dataset name datasets = ['pnoc'] # open file in append mode f = open('mydict','a') # define a new object data={} # traverse through folders of datasets for d in datasets: samples = glob.glob(d + "/data" + "/*.tsv") for s in samples: # get the SampleName without extension and path fname = os.path.splitext(os.path.basename(s))[0] # split the basename to get sample name and norm method sname, keyword, norm = fname.partition('.') # determing Normalization method based on filename if norm == "abundance": norm = "kallisto" elif norm == "rsem_genes.results": norm = "rsem_genes" else: norm = "rsem_isoforms" # read each file with open(s) as samp: next(samp) for line in samp: sp = line.split('\t') data.setdefault(sname,[]).append({"ID": sp[0],"Expression": sp[4]}) json.dump(data, f) f.close() I want a JSON object on the following lines: # 20000 Sample names, 3 Normalization methods and 60000 IDs in each file. DatasetName1 { SampleName1 { Type { Normalization1 { { ID1: value, Expression: value }, { ID2: value, Expression: value }, ... { ID60000: value, Expression: value } }, Normalization2 { { ID1: value, Expression: value }, { ID2: value, Expression: value }, ... { ID60000: value, Expression: value } }, Normalization3 { { ID1: value, Expression: value }, { ID2: value, Expression: value }, ... { ID60000: value, Expression: value } } } }, SampleName2 { Type { Normalization1 { { ID1: value, Expression: value }, { ID2: value, Expression: value }, ... { ID60000: value, Expression: value } }, Normalization2 { { ID1: value, Expression: value }, { ID2: value, Expression: value }, ... { ID60000: value, Expression: value } }, Normalization3 { { ID1: value, Expression: value }, { ID2: value, Expression: value }, ... { ID60000: value, Expression: value } } } }, ... SampleName20000{ Type { Normalization1 { { ID1: value, Expression: value }, { ID2: value, Expression: value }, ... { ID60000: value, Expression: value } }, Normalization2 { { ID1: value, Expression: value }, { ID2: value, Expression: value }, ... { ID60000: value, Expression: value } }, Normalization3 { { ID1: value, Expression: value }, { ID2: value, Expression: value }, ... { ID60000: value, Expression: value } } } } } So my question is - When converting a text file to JSON, how do I set the levels in my JSON output? Thanks! Answer: First, instead of setting the default value over and over, you should make use of [`defaultdict`](https://docs.python.org/2/library/collections.html#collections.defaultdict). Secondly, I think your proposed structure is off, and you should be using some arrays within (JSON-like structure): { DatasetName1: { SampleName1: { Type: { Normalization1: [ { ID1: value, Expression: value }, { ID2: value, Expression: value }, ... { ID60000: value, Expression: value } ], Normalization2: [ { ID1: value, Expression: value }, { ID2: value, Expression: value }, ... 
{ ID60000: value, Expression: value } ], Normalization3: [ { ID1: value, Expression: value }, { ID2: value, Expression: value }, ... { ID60000: value, Expression: value } ] } }, SampleName2: { Type: { Normalization1: [ { ID1: value, Expression: value }, { ID2: value, Expression: value }, ... { ID60000: value, Expression: value } ], Normalization2: [ { ID1: value, Expression: value }, { ID2: value, Expression: value }, ... { ID60000: value, Expression: value } ], Normalization3: [ { ID1: value, Expression: value }, { ID2: value, Expression: value }, ... { ID60000: value, Expression: value } ] } }, ... SampleName20000: { Type: { Normalization1: [ { ID1: value, Expression: value }, { ID2: value, Expression: value }, ... { ID60000: value, Expression: value } ], Normalization2: [ { ID1: value, Expression: value }, { ID2: value, Expression: value }, ... { ID60000: value, Expression: value } ], Normalization3: [ { ID1: value, Expression: value }, { ID2: value, Expression: value }, ... { ID60000: value, Expression: value } ] } } }, { DatasetName2: { ... }, ... } So your resulting code (untested) should look like this (long as your norm method logic is correct): from glob import glob from os import path from json import dump from collections import defaultdict # define datasets, and result dict datasets, results = ['pnoc'], defaultdict(dict) # open file in append mode with open('mydict','a') as f: # traverse through folders of datasets for d in datasets: for s in glob(d + "/data" + "/*.tsv"): sample = {"Type": defaultdict(list)} # get the basename without extension and path fname = path.splitext(path.basename(s))[0] # split the basename to get sample name and norm method sname, keyword, norm = fname.partition('.') # determing norm method based on filename if norm == "abundance": norm = "kallisto" elif norm == "rsem_genes.results": norm = "rsem_genes" else: norm = "rsem_isoforms" # read each file with open(s) as samp: next(samp) # Skip first line of file # Loop through each line and extract the ID and TPM for (id, _, __, ___, tpm) in (line.split('\t') for line in samp): # Add this line to the list for respective normalization method sample['Type'][norm].append({"ID": id, "Expression": float(tpm)}) # Add sample to dataset results[d][sname] = sample dump(results, f) This will save the result in a JSON format.
Repetitive search and replace on constantly changing file. Python Question: I'm trying to have a script run every 3 seconds to read a text file, clean it up, and write it out as another text file. The original text file is always changing: it comes from a music playout program and holds the current song title. I collected bits of Python code around the internet and adapted them to my needs. Right now I have a script where each command and the scheduling work perfectly. But when the original file changes again, the script gives an error. Any idea how this could be fixed?

    Traceback (most recent call last):
      File "radio_script.py", line 29, in <module>
        executeSomething()
      File "radio_script.py", line 10, in executeSomething
        for line in intext:
    IOError: [Errno 9] Bad file descriptor

I'm running the Python script on Windows. When the script runs, if a line contains any of the "delete_lin" words the whole line is deleted, while each of the line-replace entries replaces those words with nothing, just as they are supposed to. Here is my script:

    # -*- coding: utf-8 -*-
    delete_lin = ['VP', 'VH', 'VT', 'VB', 'VS', 'BG']

    import time
    import os

    def executeSomething():
        with open('current.txt', 'r') as intext, open('currentedit.txt', 'w') as outfile:
            for line in intext:
                if not any(delete_lin in line for delete_lin in delete_lin):
                    line = line.replace('(email)', "")
                    line = line.replace('_AO VIVO', "")
                    line = line.replace('Ao Vivo', "")
                    line = line.replace('AO VIVO', "")
                    line = line.replace('(04)', "")
                    line = line.replace('2016', "")
                    line = line.replace('2015', "")
                    line = line.replace('2014', "")
                    line = line.replace('2013', "")
                    outfile.write(line)
            outfile.flush()
            intext.flush()
            print 'Pause'
            time.sleep(3)

    while True:
        executeSomething()

Answer: When the original file is modified, the file reference `intext` is no longer a valid file. While the path is the same, the actual file has been replaced. Thus, you should let the `with` block close `intext` at the end of each pass and reopen it on the next. To make sure the file reference is actually recreated, make sure to only `sleep` outside the `with` block.
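A minimal sketch of the suggested restructuring (the cleaning logic from the question is elided); the `with` block closes both files on every pass, and the sleep happens only after that, so the playout program can safely replace current.txt between passes:

    import time

    def executeSomething():
        with open('current.txt', 'r') as intext, open('currentedit.txt', 'w') as outfile:
            for line in intext:
                # ... cleaning logic from the question ...
                outfile.write(line)
        # both files are closed here

    while True:
        executeSomething()
        print 'Pause'
        time.sleep(3)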
Python - Concatenate Python query Question: I'm using the python-google-places script but I'm facing a problem when integrating the code into a function called with two parameters: "MyLocation" and "MyType". "MyLocation" works well but "MyType" doesn't work. Code that works (without a variable for "MyType") # Configuration de l'encodage (a mettre dans tous les scripts) # encoding: utf-8 #################################### ### CLASS #################################### def ExtractGoogleHotspots(Location,MyType): from googleplaces import GooglePlaces, types, lang YOUR_API_KEY = 'XXXXXXX' google_places = GooglePlaces(YOUR_API_KEY) # You may prefer to use the text_search API, instead. query_result = google_places.nearby_search( location=Location, #Location could be a name or coordinates in english format : 4.04827,9.70428 keyword='', radius=1000,#Radius in meters (max = 50 000) types=[types.TYPE_PHARMACY] #Specify the Type ) for place in query_result.places: # Returned places from a query are place summaries. Name = place.name Name = Name.encode('utf-8')#Encodage en UTF-8 obligatoire pour éviter les erreurs : "UnicodeEncodeError: 'ascii' codec can't encode character..." Latitude = place.geo_location['lat'] Longitude = place.geo_location['lng'] # The following method has to make a further API call. place.get_details() Details = place.details Address = place.formatted_address Address = Address.encode('utf-8') Phone_International = place.international_phone_number Website = place.website Result = str(MyType) + ";" + Name + ";" + Address + ";" + str(Phone_International) + ";" + str(Latitude) + ";" + str(Longitude) + ";" + str(Website) print Result #################################### ### Script #################################### Location = "4.04827,9.70428" MyType = "DOES_NOT_WORK" ExtractGoogleHotspots(Location, MyType) Code that doesn't work: # Configuration de l'encodage (a mettre dans tous les scripts) # encoding: utf-8 #################################### ### CLASS #################################### def ExtractGoogleHotspots(Location,Type): from googleplaces import GooglePlaces, types, lang YOUR_API_KEY = 'XXXXXXX' google_places = GooglePlaces(YOUR_API_KEY) MyType = "[types.TYPE_"+Type+"]" # You may prefer to use the text_search API, instead. query_result = google_places.nearby_search( location=Location, #Location could be a name or coordinates in english format : 4.04827,9.70428 keyword='', radius=1000,#Radius in meters (max = 50 000) types=MyType #Specify the Type ) for place in query_result.places: # Returned places from a query are place summaries. Name = place.name Name = Name.encode('utf-8')#Encodage en UTF-8 obligatoire pour éviter les erreurs : "UnicodeEncodeError: 'ascii' codec can't encode character..." Latitude = place.geo_location['lat'] Longitude = place.geo_location['lng'] # The following method has to make a further API call. place.get_details() Details = place.details Address = place.formatted_address Address = Address.encode('utf-8') Phone_International = place.international_phone_number Website = place.website Result = str(MyType) + ";" + Name + ";" + Address + ";" + str(Phone_International) + ";" + str(Latitude) + ";" + str(Longitude) + ";" + str(Website) print Result #################################### ### Script #################################### Location = "4.04827,9.70428" MyType = "PHARMACY" ExtractGoogleHotspots(Location, MyType) How to resolve the problem to have a variable to define the Types part? 
Answer: To build the attribute name by concatenating `TYPE_` with the type name and then look it up on the `types` module, you can use `getattr`:

    MyTypes = [getattr(types, 'TYPE_' + Type.upper())]

This gives you a list containing the actual type object (e.g. `types.TYPE_PHARMACY`) instead of the string `"[types.TYPE_PHARMACY]"` that your second version builds, and you can pass it as `types=MyTypes` to `nearby_search`, just like in the working example.
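For context, a sketch of how that fix slots into the function from the question (only the relevant lines are shown; everything else stays as in the original):

    def ExtractGoogleHotspots(Location, Type):
        from googleplaces import GooglePlaces, types, lang
        YOUR_API_KEY = 'XXXXXXX'
        google_places = GooglePlaces(YOUR_API_KEY)
        MyTypes = [getattr(types, 'TYPE_' + Type.upper())]  # the fix
        query_result = google_places.nearby_search(
            location=Location,
            keyword='',
            radius=1000,
            types=MyTypes)
        # ... the rest of the function is unchanged ...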
TypeError: argument of type 'file' is not iterable 2.7 python Question: I'm trying to create and write an existing dictionary to a csv file but I keep getting this error which I can't figure out: Traceback (most recent call last): File "/Users/Danny/Desktop/Inventaris.py", line 52, in <module> Toevoeging_methode_plus(toevoegproduct, aantaltoevoeging) File "/Users/Danny/Desktop/Inventaris.py", line 9, in Toevoeging_methode_plus writecolumbs() File "/Users/Danny/Desktop/Inventaris.py", line 21, in writecolumbs with open("variabelen.csv", "wb") in csvfile: TypeError: argument of type 'file' is not iterable >>> And this is the code that gives me the error: import csv import os def Toevoeging_methode_plus(key_to_find, definition): if key_to_find in a: current = a[key_to_find] newval = int(current) + aantaltoevoeging a[key_to_find] = int(current) + aantaltoevoeging writecolumbs() def Toevoeging_methode_minus(key_to_find, definition): if key_to_find in a: current = a[key_to_find] newval = int(current) - aantaltoevoeging a[key_to_find] = int(current) - aantaltoevoeging writecolumbs() def writecolumbs(): os.remove("variabelen.csv") with open("variabelen.csv", "wb") in csvfile: spamwriter = csv.writer(csvfile, delimiter=' ', quotechar='|', quoting=csv.QUOTE_MINIMAL) spamwriter.writerow("Product", "Aantal") spamwriter.writerows(a) a = {} with open("variabelen.csv") as csvfile: reader = csv.DictReader(csvfile) for row in reader: a[row['Product']] = row['Aantal'] print a print(""" 1. Toevoegen 2. Aftrek 3. Check """) askmenu = int(raw_input("Menu Nummer? ")) if askmenu is 1: toevoegproduct = raw_input("Productnummer > ") aantaltoevoeging = int(raw_input("Hoeveel > ")) Toevoeging_methode_plus(toevoegproduct, aantaltoevoeging) print a elif askmenu is 2: toevoegproduct = raw_input("Productnummer > ") aantaltoevoeging = int(raw_input("Hoeveel > ")) Toevoeging_methode_minus(toevoegproduct, aantaltoevoeging) writecolumbs(toevoegproduct, newval) print a elif askmenu is 3: checknummer = raw_input("Productnummer > ") if checknummer in a: print a[checknummer] else: print "oops" The csv file contains this information currently: Product Aantal 233 60 2234 1 Answer: Write `as csvfile` not `in csvfile` on line 21. The `in` keyword in python is followed by an iterable, that's why you're getting that error message.
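Concretely, the opening line of `writecolumbs` should read as follows (only the keyword changes):

    with open("variabelen.csv", "wb") as csvfile:  # 'as' binds the file object; 'in' tests membership
        spamwriter = csv.writer(csvfile, delimiter=' ', quotechar='|', quoting=csv.QUOTE_MINIMAL)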
Why am I getting this error from having nonvalid unicode in source (But only when importing matplotlib) Question: It took me a while to pin down the cause of this one, but I am still at a loss as to how it is possible. I recently switched to Python 3, and I got this huge error when trying to import matplotlib:

    Traceback (most recent call last):
      File "C:/Users/y2kbugger/Desktop/test.py", line 6, in <module>
      File "C:\Anaconda2\envs\mypackage\lib\site-packages\matplotlib\__init__.py", line 124, in <module>
        from matplotlib.rcsetup import (defaultParams,
      File "C:\Anaconda2\envs\mypackage\lib\site-packages\matplotlib\rcsetup.py", line 30, in <module>
        from matplotlib.fontconfig_pattern import parse_fontconfig_pattern
      File "C:\Anaconda2\envs\mypackage\lib\site-packages\matplotlib\fontconfig_pattern.py", line 25, in <module>
        from pyparsing import Literal, ZeroOrMore, \
      File "C:\Anaconda2\envs\mypackage\lib\site-packages\pyparsing.py", line 3539, in <module>
        _escapedPunc = Word( _bslash, r"\[]-*.$+^?()~ ", exact=2 ).setParseAction(lambda s,l,t:t[0][1])
      File "C:\Anaconda2\envs\mypackage\lib\site-packages\pyparsing.py", line 966, in setParseAction
        self.parseAction = list(map(_trim_arity, list(fns)))
      File "C:\Anaconda2\envs\mypackage\lib\site-packages\pyparsing.py", line 813, in _trim_arity
        this_line = extract_stack()[-1]
      File "C:\Anaconda2\envs\mypackage\lib\site-packages\pyparsing.py", line 797, in extract_stack
        frame_summary = traceback.extract_stack()[offset]
      File "C:\Anaconda2\envs\mypackage\lib\traceback.py", line 207, in extract_stack
        stack = StackSummary.extract(walk_stack(f), limit=limit)
      File "C:\Anaconda2\envs\mypackage\lib\traceback.py", line 358, in extract
        f.line
      File "C:\Anaconda2\envs\mypackage\lib\traceback.py", line 282, in line
        self._line = linecache.getline(self.filename, self.lineno).strip()
      File "C:\Anaconda2\envs\mypackage\lib\linecache.py", line 16, in getline
        lines = getlines(filename, module_globals)
      File "C:\Anaconda2\envs\mypackage\lib\linecache.py", line 47, in getlines
        return updatecache(filename, module_globals)
      File "C:\Anaconda2\envs\mypackage\lib\linecache.py", line 137, in updatecache
        lines = fp.readlines()
      File "C:\Anaconda2\envs\mypackage\lib\codecs.py", line 321, in decode
        (result, consumed) = self._buffer_decode(data, self.errors, final)
    UnicodeDecodeError: 'utf-8' codec can't decode byte 0x96 in position 308: invalid start byte

Commenting out the `import matplotlib as mpl` line causes the error not to occur. This led me astray trying different combinations of matplotlib, numpy, etc. The part that confuses me is that if I delete the comments (that I pasted from the web) the error is actually fixed. My editor is vim. I guess utf-8 was not the encoding that vim used to write the file. Minimal error-producing example:

    # -*- coding: utf-8 -*-
    import matplotlib as mpl
    # Bad character pasted into vim from chrome: –

To fix it, just remove the "EN DASH" (or the entire line 3) and matplotlib imports correctly. So why does the invalid(?)
unicode in the comments cause an error only when trying to `import matplotlib`(and before it even reaches the comment in question) python==3.5.2 colorama==0.3.7 comtypes==1.1.2 cycler==0.10.0 matplotlib==1.5.1 numpy==1.11.1 pandas==0.18.1 py==1.4.31 pyparsing==2.1.4 pytest==2.9.2 python-dateutil==2.5.3 pytz==2016.6.1 pywin32==220 scikit-learn==0.17.1 scipy==0.18.0 six==1.10.0 Answer: The problem is in [pyparsing](http://pythonhosted.org/pyparsing/): > The pyparsing module is an alternative approach to creating and executing > simple grammars, vs. the traditional lex/yacc approach, or the use of > regular expressions. With pyparsing, you don't need to learn a new syntax > for defining grammars or matching expressions - the parsing module provides > a library of classes that you use to construct the grammar directly in > Python. In order to "construct the grammar directly in Python", pyparsing needs to read the source file (in this case a matplotlib source file) where the grammar is defined. In what would usually just be a bit of harmless extra work, pyparsing is reading not just the matplotlib source file but everything in the stack at the point the grammar is defined, all the way down to the source file where you have your `import matplotlib`. When it reaches your source file it chokes, because your file indeed is not in UTF-8; 0x96 is the Windows-1252 (and/or Latin-1) encoding for the en dash. This issue (reading too much of the stack) has [already been fixed](https://sourceforge.net/p/pyparsing/code/415/) by [the author of pyparsing](http://stackoverflow.com/users/165216/paul- mcguire) so the fix should be in the next [release of pyparsing](https://pypi.python.org/pypi/pyparsing) (probably 2.1.8). By the way, matplotlib is defining a pyparsing grammar in order to be able to read fontconfig files, which are a way of configuring fonts used mainly on Linux. So on Windows pyparsing is probably not even required to use matplotlib!
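Until a fixed pyparsing release is available, one workaround is to make the file's bytes match its declared coding, for example by re-encoding it once (a minimal sketch, assuming the script was actually saved as Windows-1252, which is what the 0x96 byte suggests):

    # re-save a cp1252-encoded script as the UTF-8 it claims to be
    with open('test.py', 'rb') as f:
        raw = f.read()
    with open('test.py', 'wb') as f:
        f.write(raw.decode('cp1252').encode('utf-8'))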
Rename/Backup old directory in python Question: I have a script that creates a new directory regularly. I would like to check if it already exists and, if so, move the existing folder to a backup. My first iteration was:

    if os.path.isdir(destination_path):
        os.rename(destination_path, destination_path + '_old')

However, if there is already one backed up it will obviously crash. What I would like to do is find the number of directories that match destination_path and append that number like a version number:

    if os.path.isdir(destination_path):
        n = get_num_folders_like(destination_path)
        os.rename(destination_path, destination_path + str(n))

I am just not sure how to write such a hypothetical function. I think fnmatch might work, but I can't get the syntax right. Answer: If you need to move the old directory aside, renumbering can easily be done by listing all directories with the same base name, then picking the next number by extracting the numeric maximum from the matching names and adding one. Listing the directories can be done by using the [`glob` module](https://docs.python.org/2/library/glob.html#glob.glob); it combines listing files with the `fnmatch` module to filter:

    import glob

    if os.path.isdir(destination_path):
        # match all paths starting with the destination name, plus at least
        # an underscore and one digit.
        backups = glob.glob(destination_path + '_[0-9]*')

        def extract_number(path):
            try:
                # assume everything after `_` is a number
                return int(path.rpartition('_')[-1])
            except ValueError:
                # not everything was a number, skip this directory
                return None

        backup_numbers = (extract_number(b) for b in backups)
        try:
            next_backup = max(filter(None, backup_numbers)) + 1
        except ValueError:
            # no backup directories yet
            next_backup = 1

        os.rename(destination_path, destination_path + '_{:d}'.format(next_backup))

I'm assuming you are not worried about race conditions here.
Program to transform a string in hexadecimal? Question:

    #!/usr/bin/python3
    # -*- coding: utf-8 -*-
    import os
    import sys

    try:
        string=sys.argv[1]
        cmd = "echo -n "+string+" | xxd -ps | sed 's/[[:xdigit:]]\{2\}/\\\\x&/g'"
        os.system(cmd)
    except IndexError:
        print("\nInforme a string!\n")

I found this code on the Internet. I tried hard to understand what it does. Could someone explain?

    string=sys.argv[1]
    cmd = "echo -n "+string+" | xxd -ps | sed 's/[[:xdigit:]]\{2\}/\\\\x&/g'"

The two lines above are like magic to me. Answer: For the line:

    cmd = "echo -n "+string+" | xxd -ps | sed 's/[[:xdigit:]]\{2\}/\\\\x&/g'"

* `echo` sends the text to the standard output,
* `|` pipes that output to `xxd`, which translates the bytes to hexadecimal (think hex editors); the `-ps` flag, according to the `xxd` man page:

> -p | -ps | -postscript | -plain
> output in postscript continuous hexdump style. Also known as plain hexdump style.

* `sed` is the stream editor command - there are literally books on this. Basically here, the piped hexadecimal output from the `xxd -ps` command has this replacement regex performed on it, broken down below:

    sed 's/              # Start find
    [[:xdigit:]]\{2\}    # Match exactly two hexadecimal characters
                         # ([[:xdigit:]] is the POSIX-compliant representation
                         # of a hexadecimal character)
    /                    # End find, start replace
    \\\\x&               # Lots of escaping backslashes - becomes \x&; the ampersand
                         # stands for the entire previous match (the 2 hexadecimal
                         # characters), e.g. '\x8e'
    /g'                  # End replace, and g means all matches are changed

> \xxx Produces or matches a character whose hexadecimal ascii value is xx. ([source](https://www.gnu.org/software/sed/manual/html_node/Escapes.html))

* In a nutshell, the script takes an input, translates it to hexadecimal, and the `sed` command then turns each pair of hexadecimal characters from the `xxd` output into a `\xNN` escape.
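For comparison, the same transformation can be done in Python directly, without shelling out (a minimal sketch; it assumes the goal is the `\xNN` escape form that the sed pipeline produces):

    import sys

    string = sys.argv[1]
    # encode to bytes, then render each byte as a \xNN escape
    print(''.join('\\x{:02x}'.format(b) for b in string.encode('utf-8')))
    # e.g. 'abc' -> \x61\x62\x63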
How to read every log line to match a regex pattern using spark? Question: The following program throws an error:

    from pyparsing import Regex, re
    from pyspark import SparkContext

    sc = SparkContext("local","hospital")

    LOG_PATTERN ='(?P<Case_ID>[^ ;]+);(?P<Event_ID>[^ ;]+);(?P<Date_Time>[^ ;]+);(?P<Activity>[^;]+);(?P<Resource>[^ ;]+);(?P<Costs>[^ ;]+)'

    logLine=sc.textFile("C:\TestLogs\Hospital.log").cache()
    #logLine='1;35654423;30-12-2010:11.02;register request;Pete;50'

    for line in logLine.readlines():
        match = re.search(LOG_PATTERN,logLine)
        Case_ID = match.group(1)
        Event_ID = match.group(2)
        Date_Time = match.group(3)
        Activity = match.group(4)
        Resource = match.group(5)
        Costs = match.group(6)
        print Case_ID
        print Event_ID
        print Date_Time
        print Activity
        print Resource
        print Costs

Error:

> Traceback (most recent call last): File "C:/Spark/spark-1.6.1-bin-hadoop2.4/bin/hospital2.py", line 7, in <module> for line in logLine.readlines(): AttributeError: 'RDD' object has no attribute 'readlines'

If I add the `open` function to read the file, then I get the following error:

> Traceback (most recent call last): File "C:/Spark/spark-1.6.1-bin-hadoop2.4/bin/hospital2.py", line 7, in <module> f = open(logLine,"r") TypeError: coercing to Unicode: need string or buffer, RDD found

I can't seem to figure out how to read the file line by line and extract the words that match the pattern. Also, if I pass only a single log line, `logLine='1;35654423;30-12-2010:11.02;register request;Pete;50'`, it works. I'm new to Spark and know only the basics of Python. Please help. Answer: You are mixing things up. The line

    logLine=sc.textFile("C:\TestLogs\Hospital.log")

creates an RDD, and RDDs do not have a readlines() method. See the RDD API here: <http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD> You can use collect() to retrieve the content of the RDD line by line. readlines() is part of the standard Python file API, but you do not usually need it when working with files in Spark. You simply load the file with textFile() and then process it with the RDD API, see the link above.
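A sketch of the idiomatic Spark version of the question's loop, applying the regex to every line with map() instead of iterating the RDD like a file (pattern and field order exactly as in the question):

    import re
    from pyspark import SparkContext

    sc = SparkContext("local", "hospital")

    LOG_PATTERN = '(?P<Case_ID>[^ ;]+);(?P<Event_ID>[^ ;]+);(?P<Date_Time>[^ ;]+);(?P<Activity>[^;]+);(?P<Resource>[^ ;]+);(?P<Costs>[^ ;]+)'

    def parse_line(line):
        match = re.search(LOG_PATTERN, line)
        return match.groups() if match else None

    records = sc.textFile("C:\TestLogs\Hospital.log") \
                .map(parse_line) \
                .filter(lambda rec: rec is not None)

    # collect() pulls the parsed records back to the driver
    for case_id, event_id, date_time, activity, resource, costs in records.collect():
        print case_id, event_id, date_time, activity, resource, costs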
Possible Bug when using diff on datetime objects in a pandas groupby object with multiindex Question: Consider a dataframe with a column with labels that are used to create groups and two rows of the same dates: import datetime as dt import pandas as pd dd = [['A','A','A','A','B','B']\ ,[dt.date(1981,3,6),dt.date(1986,5,1),dt.date(1983,11,8)\ ,dt.date(1982,6,11),dt.date(1977,2,26),dt.date(1991,9,4)]] dd = map(list,zip(*dd)) DF = pd.DataFrame(dd,columns=['Label','Date']) DF['Date2'] = DF['Date'].copy() print DF print type(DF.Date[0]) print type(DF.Date2[0]) This yields: Label Date Date2 0 A 1981-03-06 1981-03-06 1 A 1986-05-01 1986-05-01 2 A 1983-11-08 1983-11-08 3 A 1982-06-11 1982-06-11 4 B 1977-02-26 1977-02-26 5 B 1991-09-04 1991-09-04 <type 'datetime.date'> <type 'datetime.date'> Now I can do these: print DF.groupby(['Label']).diff() print "======================================" print DF.groupby(['Label']).apply(lambda s: s[u'Date'].diff()) print "======================================" print DF.groupby(['Label']).apply(lambda s: s[u'Date2'].diff()) Leading to this output: Date Date2 0 NaN NaN 1 1882 days, 0:00:00 1882 days, 0:00:00 2 -905 days, 0:00:00 -905 days, 0:00:00 3 -515 days, 0:00:00 -515 days, 0:00:00 4 NaN NaN 5 5303 days, 0:00:00 5303 days, 0:00:00 ====================================== Label A 0 NaT 1 1882 days 2 -905 days 3 -515 days B 4 NaT 5 5303 days Name: Date, dtype: timedelta64[ns] ====================================== Label A 0 NaT 1 1882 days 2 -905 days 3 -515 days B 4 NaT 5 5303 days Name: Date2, dtype: timedelta64[ns] However when I'm doing these: print DF.groupby(['Label','Date']).diff() print "======================================" print DF.groupby(['Label','Date']).apply(lambda s: s[u'Date2'].diff()) print "======================================" print DF.groupby(['Label','Date'])[u'Date2'].transform(pd.Series.diff) Then the output is broken: Date2 0 NaN 1 NaN 2 NaN 3 NaN 4 NaN 5 NaN ====================================== Label Date A 1981-03-06 0 NaN 1982-06-11 3 NaN 1983-11-08 2 NaN 1986-05-01 1 NaN B 1977-02-26 4 NaN 1991-09-04 5 NaN Name: Date2, dtype: object ====================================== 0 NaN 1 NaN 2 NaN 3 NaN 4 NaN 5 NaN Name: Date2, dtype: object As you can see for some reason the Date2 column is no longer a timedelta64 dtype but just an object type. This happens with every method I have tried and also when switching the two date columns, so this must be something to do with the multiindex groupby. I can't tell if this is expected or unexpected behaviour, that is if it's a bug or not. EDIT: Pandas 0.18.1 on Python 2.7.12 EDIT2: Deleted, my mistake. 
Answer: I see two problems - first you need dtypes `datetimes` and then your sample data where output is `NaT` (len of each group was `1`, so `difference` is `NaT`): import datetime as dt import pandas as pd dd = [['A','A','A','A','B','B']\ ,[dt.date(1981,3,6),dt.date(1986,5,1),dt.date(1983,11,8)\ ,dt.date(1982,6,11),dt.date(1977,2,26),dt.date(1991,9,4)]] dd = list(zip(*dd)) DF = pd.DataFrame(dd,columns=['Label','Date']) DF['Date2'] = DF['Date'].copy() print (DF) Label Date Date2 0 A 1981-03-06 1981-03-06 1 A 1986-05-01 1986-05-01 2 A 1983-11-08 1983-11-08 3 A 1982-06-11 1982-06-11 4 B 1977-02-26 1977-02-26 5 B 1991-09-04 1991-09-04 print (DF.dtypes) Label object Date object Date2 object dtype: object DF['Date'] = pd.to_datetime(DF['Date']) DF['Date2'] = pd.to_datetime(DF['Date2']) print (DF.dtypes) Label object Date datetime64[ns] Date2 datetime64[ns] dtype: object print (DF.groupby(['Label','Date'])['Date2'].diff()) 0 NaT 1 NaT 2 NaT 3 NaT 4 NaT 5 NaT Name: Date2, dtype: timedelta64[ns] I changed data in `Date2`: import datetime as dt import pandas as pd dd = [['A','A','A','A','B','B']\ ,[dt.date(1981,3,6),dt.date(1981,3,6),dt.date(1983,11,8)\ ,dt.date(1983,11,8),dt.date(1977,2,26),dt.date(1991,9,4)]\ ,[dt.date(1981,3,6),dt.date(1986,5,1),dt.date(1983,11,8)\ ,dt.date(1982,6,11),dt.date(1977,2,26),dt.date(1991,9,4)]] dd = list(zip(*dd)) DF = pd.DataFrame(dd,columns=['Label','Date', 'Date2']) DF['Date'] = pd.to_datetime(DF['Date']) DF['Date2'] = pd.to_datetime(DF['Date2']) print (DF) Label Date Date2 0 A 1981-03-06 1981-03-06 1 A 1981-03-06 1986-05-01 2 A 1983-11-08 1983-11-08 3 A 1983-11-08 1982-06-11 4 B 1977-02-26 1977-02-26 5 B 1991-09-04 1991-09-04 print (DF.dtypes) Label object Date datetime64[ns] Date2 datetime64[ns] dtype: object print (DF.groupby(['Label','Date'])['Date2'].diff()) 0 NaT 1 1882 days 2 NaT 3 -515 days 4 NaT 5 NaT Name: Date2, dtype: timedelta64[ns] print (DF.groupby(['Label','Date']).diff()) Date2 0 NaT 1 1882 days 2 NaT 3 -515 days 4 NaT 5 NaT Label Date print (DF.groupby(['Label','Date']).apply(lambda s: s[u'Date2'].diff())) A 1981-03-06 0 NaT 1 1882 days 1983-11-08 2 NaT 3 -515 days B 1977-02-26 4 NaT 1991-09-04 5 NaT Name: Date2, dtype: timedelta64[ns] print (DF.groupby(['Label','Date'])[u'Date2'].transform(pd.Series.diff)) 0 NaT 1 1975-02-26 2 NaT 3 1968-08-04 4 NaT 5 NaT Name: Date2, dtype: datetime64[ns] If remove converting [`to_datetime`](http://pandas.pydata.org/pandas- docs/stable/generated/pandas.to_datetime.html), output is `NaN` and with groups with numbers `NaT`: import datetime as dt import pandas as pd dd = [['A','A','A','A','B','B']\ ,[dt.date(1981,3,6),dt.date(1981,3,6),dt.date(1983,11,8)\ ,dt.date(1983,11,8),dt.date(1977,2,26),dt.date(1991,9,4)]\ ,[dt.date(1981,3,6),dt.date(1986,5,1),dt.date(1983,11,8)\ ,dt.date(1982,6,11),dt.date(1977,2,26),dt.date(1991,9,4)]] dd = list(zip(*dd)) DF = pd.DataFrame(dd,columns=['Label','Date', 'Date2']) print (DF) Label Date Date2 0 A 1981-03-06 1981-03-06 1 A 1981-03-06 1986-05-01 2 A 1983-11-08 1983-11-08 3 A 1983-11-08 1982-06-11 4 B 1977-02-26 1977-02-26 5 B 1991-09-04 1991-09-04 print (DF.dtypes) Label object Date object Date2 object dtype: object print (DF.groupby(['Label','Date'])['Date2'].diff()) 0 NaT 1 1882 days, 0:00:00 2 NaT 3 -515 days, 0:00:00 4 NaN 5 NaN Name: Date2, dtype: object print (DF.groupby(['Label','Date']).diff()) Date2 0 NaN 1 1882 days, 0:00:00 2 NaN 3 -515 days, 0:00:00 4 NaN 5 NaN print (DF.groupby(['Label','Date']).apply(lambda s: s[u'Date2'].diff())) Label Date A 1981-03-06 0 NaT 1 
1882 days, 0:00:00 1983-11-08 2 NaT 3 -515 days, 0:00:00 B 1977-02-26 4 NaN 1991-09-04 5 NaN Name: Date2, dtype: object print (DF.groupby(['Label','Date'])[u'Date2'].transform(pd.Series.diff)) 0 None 1 162604800000000000 2 None 3 -44496000000000000 4 NaN 5 NaN Name: Date2, dtype: object EDIT: If length of group is `1` and it means has one row, then [`diff`](http://pandas.pydata.org/pandas- docs/stable/generated/pandas.Series.diff.html) return `NaT`: import pandas as pd import numpy as np import io import datetime as dt import pandas as pd dd = [['A','A','A','A','B','B']\ ,[dt.date(1981,3,6),dt.date(1981,3,6),dt.date(1983,11,8)\ ,dt.date(1983,11,8),dt.date(1977,2,26),dt.date(1991,9,4)]\ ,[dt.date(1981,3,6),dt.date(1986,5,1),dt.date(1983,11,8)\ ,dt.date(1982,6,11),dt.date(1977,2,26),dt.date(1991,9,4)]] dd = list(zip(*dd)) DF = pd.DataFrame(dd,columns=['Label','Date', 'Date2']) DF['Date'] = pd.to_datetime(DF['Date']) DF['Date2'] = pd.to_datetime(DF['Date2']) print (DF) Label Date Date2 0 A 1981-03-06 1981-03-06 1 A 1981-03-06 1986-05-01 2 A 1983-11-08 1983-11-08 3 A 1983-11-08 1982-06-11 4 B 1977-02-26 1977-02-26 5 B 1991-09-04 1991-09-04 for i, g in DF.groupby(['Label','Date']): print (g) print ('diff: ') print (g[['Date', 'Date2']].diff()) print ('------------') 0 A 1981-03-06 1981-03-06 1 A 1981-03-06 1986-05-01 diff: Date Date2 0 NaT NaT 1 0 days 1882 days ------------ Label Date Date2 2 A 1983-11-08 1983-11-08 3 A 1983-11-08 1982-06-11 diff: Date Date2 2 NaT NaT 3 0 days -515 days ------------ Label Date Date2 4 B 1977-02-26 1977-02-26 diff: Date Date2 4 NaT NaT ------------ Label Date Date2 5 B 1991-09-04 1991-09-04 diff: Date Date2 5 NaT NaT ------------ print ('*************************') for i, g in DF.groupby(['Label','Date2']): print (g) print ('diff2: ') print (g[['Date', 'Date2']].diff()) print ('------------') Label Date Date2 0 A 1981-03-06 1981-03-06 diff2: Date Date2 0 NaT NaT ------------ Label Date Date2 3 A 1983-11-08 1982-06-11 diff2: Date Date2 3 NaT NaT ------------ Label Date Date2 2 A 1983-11-08 1983-11-08 diff2: Date Date2 2 NaT NaT ------------ Label Date Date2 1 A 1981-03-06 1986-05-01 diff2: Date Date2 1 NaT NaT ------------ Label Date Date2 4 B 1977-02-26 1977-02-26 diff2: Date Date2 4 NaT NaT ------------ Label Date Date2 5 B 1991-09-04 1991-09-04 diff2: Date Date2 5 NaT NaT ------------
Feature Selection Question: I tried to do recursive feature selection in scikit-learn with the following code:

    from sklearn import datasets, svm
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.feature_selection import RFE
    import numpy as np

    input_file_iris = "/home/anuradha/Project/NSL_KDD_master/Modified/iris.csv"
    dataset = np.loadtxt(input_file_iris, delimiter=",")
    X = dataset[:,0:4]
    y = dataset[:,4]

    estimator= svm.OneClassSVM(nu=0.1, kernel="rbf", gamma=0.1)
    selector = RFE(estimator,3, step=1)
    selector = selector.fit(X,y)

But it gives the following error:

    Traceback (most recent call last):
      File "/home/anuradha/PycharmProjects/LearnPython/Scikit-learn/univariate.py", line 30, in <module>
      File "/usr/local/lib/python2.7/dist-packages/sklearn/feature_selection/rfe.py", line 131, in fit
        return self._fit(X, y)
      File "/usr/local/lib/python2.7/dist-packages/sklearn/feature_selection/rfe.py", line 182, in _fit
        raise RuntimeError('The classifier does not expose '
    RuntimeError: The classifier does not expose "coef_" or "feature_importances_" attributes

Please can someone help me to solve this or guide me to another solution? Answer: Change your kernel to **linear** and your code will work: RFE needs the `coef_` attribute, which only a linear-kernel SVM exposes. Besides, `svm.OneClassSVM` is used for unsupervised outlier detection. Are you sure that you want to use it as the estimator? Perhaps you want `svm.SVC()` instead. See the following link for the documentation: <http://scikit-learn.org/stable/modules/generated/sklearn.svm.OneClassSVM.html> Lastly, the iris data set is already available in sklearn. You have already imported `sklearn.datasets`, so you can simply load iris as:

    iris = datasets.load_iris()
    X = iris.data
    y = iris.target
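Putting both suggestions together, a minimal sketch of a working supervised version (using the built-in iris data and a linear-kernel SVC, which exposes the `coef_` attribute RFE needs):

    from sklearn import datasets
    from sklearn.svm import SVC
    from sklearn.feature_selection import RFE

    iris = datasets.load_iris()
    estimator = SVC(kernel="linear")

    # keep the 3 best features, dropping one per iteration
    selector = RFE(estimator, 3, step=1).fit(iris.data, iris.target)
    print(selector.support_)   # boolean mask of the selected features
    print(selector.ranking_)   # rank 1 == selected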
Rounding up a column Question: I am new to pandas python and I am having difficulties trying to round up all the values in the column. For example, Example 88.9 88.1 90.2 45.1 I tried using my current code below, but it gave me: > AttributeError: 'str' object has no attribute 'rint' df.Example = df.Example.round() Answer: You can use [`numpy.ceil`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ceil.html#numpy.ceil): In [80]: import numpy as np In [81]: np.ceil(df.Example) Out[81]: 0 89.0 1 89.0 2 91.0 3 46.0 Name: Example, dtype: float64 depending on what you like, you could also change the type: In [82]: np.ceil(df.Example).astype(int) Out[82]: 0 89 1 89 2 91 3 46 Name: Example, dtype: int64 * * * **Edit** Your error message indicates you're trying just to round (not necessarily up), but are having a type problem. You can solve it like so: In [84]: df.Example.astype(float).round() Out[84]: 0 89.0 1 88.0 2 90.0 3 45.0 Name: Example, dtype: float64 Here, too, you can cast at the end to an integer type: In [85]: df.Example.astype(float).round().astype(int) Out[85]: 0 89 1 88 2 90 3 45 Name: Example, dtype: int64
Web scraping with lxml and requests Question: I have a [web page](http://www.booking.com/searchresults.et.html?label=gen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaEKIAQGYAQu4AQbIAQzYAQHoAQH4AQuoAgM&sid=1bc09296ee139ec3cb0edce87d7fb20a&dcid=1&class_interval=1&dest_id=67&dest_type=country&dtdisc=0&group_adults=2&group_children=0&hlrd=0&hyb_red=0&inac=0&label_click=undef&nha_red=0&no_rooms=1&postcard=0&redirected_from_city=0&redirected_from_landmark=0&redirected_from_region=0&review_score_group=empty&room1=A%2CA&sb_price_type=total&score_min=0&src=index&ss=Eesti&ss_all=0&ss_raw=Eesti&ssb=empty&sshis=0&traveller=other&nflt=ht_id%3D204%3B&lsf=ht_id%7C204%7C221&unchecked_filter=hoteltype) with hotels, where i want to get all the hotel names. I made a code following instructions from [this page](http://docs.python- guide.org/en/latest/scenarios/scrape/), but no success. My code is here: from lxml import html import requests page = requests.get('web page url') tree = html.fromstring(page.content) hotel_name = tree.xpath('//span[@title="sr-hotel__name"]/text()') print(hotel_name) All I get is an empty list. Any help? Answer: You need to add a user-agent: from lxml import html import requests headers= {"User-Agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36"} page = requests.get('http://www.booking.com/searchresults.et.html?label=gen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaEKIAQGYAQu4AQbIAQzYAQHoAQH4AQuoAgM&sid=1bc09296ee139ec3cb0edce87d7fb20a&dcid=1&class_interval=1&dest_id=67&dest_type=country&dtdisc=0&group_adults=2&group_children=0&hlrd=0&hyb_red=0&inac=0&label_click=undef&nha_red=0&no_rooms=1&postcard=0&redirected_from_city=0&redirected_from_landmark=0&redirected_from_region=0&review_score_group=empty&room1=A%2CA&sb_price_type=total&score_min=0&src=index&ss=Eesti&ss_all=0&ss_raw=Eesti&ssb=empty&sshis=0&traveller=other&nflt=ht_id%3D204%3B&lsf=ht_id%7C204%7C221&unchecked_filter=hoteltype' , headers=headers) tree = html.fromstring(page.content) print(page.text) hotel_name = tree.xpath('//span[@class="sr-hotel__name"]/text()') print(hotel_name) Which will give you: ['\nHotel Telegraaf\n', '\nRadisson Blu Hotel Olümpia\n', '\nRadisson Blu Sky Hotel\n', '\nPark Inn by Radisson Central Tallinn\n', '\nPark Inn by Radisson Meriton Conference & Spa Hotel Tallinn\n', '\nMerchants House Hotel\n', '\nSwissotel Tallinn\n', '\nMy City Hotel\n', '\nNordic Hotel Forum\n', '\nHotel Palace by TallinnHotels\n', '\nHotel Ülemiste\n', '\nTallink City Hotel\n', '\nHotel London by Tartuhotels\n', '\nJohan Design & SPA Hotel\n', '\nThe von Stackelberg Hotel Tallinn\n'] But you should read their [TOS](http://www.booking.com/content/terms.et.html?label=gen173nr-1DCAEoggJCAlhYSDNiBW5vcmVmaGmIAQGYAQu4AQrIAQzYAQPoAQGoAgM;sid=52cedf70cb08d87dee23f69f173a8650;dcid=12): _Our services are only for personal and non-commercial use. Consequently, you will not be allowed on our web site available through the content, information, software, products or services of a commercial or compete for the purpose of reselling link (deep link) to use, copy, monitor (eg, spiders, scrape), display, download download or reproduce._
using Plotly with pycharm Question: I am trying to use **Plotly** with **PyCharm**. When I run the code (which I took from the Plotly getting started [page](https://plot.ly/python/getting-started/#initialization-for-offline-plotting)) in the terminal it is OK, but when I run it from PyCharm I get the error: "ImportError: No module named 'plotly.graph_objs'; 'plotly' is not a package" [Code on pycharm](http://i.stack.imgur.com/Fo5Pb.png) Any idea where the problem could be? Since the plotly module works in the terminal and not in the editor, could this problem be related to what is raised in this question? [Python dynamic objects not defined anywhere? (Plotly lib)](http://stackoverflow.com/questions/38021683/python-dynamic-objects-not-defined-anywhere-plotly-lib) Thanks. Answer: Your script is named `plotly.py`. On the first line, where you import plotly, Python loads your own script and not the plotly package. Change the name of your script to `plot.py` or anything other than plotly and it should work. If the error persists, also delete any stale `plotly.pyc` left next to the old script, since the compiled file would keep shadowing the real package.
Process request thread error with Flask Application? Question: This might be a long shot, but here's the error that I'm getting:

    File "/home/MY NAME/anaconda/lib/python2.7/SocketServer.py", line 596, in process_request_thread
        self.finish_request(request, client_address)
    File "/home/MY NAME/anaconda/lib/python2.7/SocketServer.py", line 331, in finish_request
        self.RequestHandlerClass(request, client_address, self)
    File "/home/MY NAME/anaconda/lib/python2.7/SocketServer.py", line 654, in __init__
        self.finish()
    File "/home/MY NAME/anaconda/lib/python2.7/SocketServer.py", line 713, in finish
        self.wfile.close()
    File "/home/MY NAME/anaconda/lib/python2.7/socket.py", line 283, in close
        self.flush()
    File "/home/MY NAME/anaconda/lib/python2.7/socket.py", line 307, in flush
        self._sock.sendall(view[write_offset:write_offset+buffer_size])
    error: [Errno 32] Broken pipe

I have built a `Flask` application that takes addresses as input and performs some string formatting, manipulation, etc., then sends them to `Bing Maps` to geocode (through the `geopy` external module). I'm using this application to clean very large data sets. The application works for inputs of roughly up to ~1,500 addresses (inputted one per line). By that I mean that it will process each address and send it to `Bing Maps` to be geocoded and then returned. After around 1,500 addresses, the application becomes unresponsive. If this happens while I'm at work, my proxy tells me that there is a `tcp error`. If I'm on a non-work computer, the page just doesn't load. If I restart the application, it functions perfectly fine. Because of this, I'm forced to run my program with batches of about 1,000 addresses (just to be safe, because I'm not sure yet of the exact number at which the program crashes). Does anyone have any idea what might be causing it? I was thinking something along the lines of hitting my Bing API key limit for the day (which is 30,000 requests), but that can't be it, as I rarely use more than 15,000 requests per day. My second thought was that maybe it's because I'm still using the standard Flask development server to run my application. Would switching to `gunicorn` or `uWSGI` solve this? My third thought was that maybe it was getting overloaded with the number of requests. I tried making the program sleep for 15 seconds or so after the first 1,000 addresses, but that didn't solve anything. If anyone needs further clarification, please let me know. Here is my code for the backend of the Flask application. I'm getting the input from this function:

    @app.route("/clean", methods=['POST'])
    def dothing():
        addresses = request.form['addresses']
        return cleanAddress(addresses)

Here is the `cleanAddress` function. It's a bit cluttered right now, with all of the if statements to check for specific typos in the address, but I plan on moving a lot of this code into other functions in another file and just passing the address through those functions to clean it up a bit.
def cleanAddress(addresses): counter = 0 # nested helper function to fix addresses such as '30 w 60th' def check_st(address): if 'broadway' in address: return address has_th_st_nd_rd = re.compile(r'(?P<number>[\d]{1,4}(th|st|nd|rd)\s)(?P<following>.*)') has_number = has_th_st_nd_rd.search(address) if has_number is not None: if re.match(r'(street|st|floor)', has_number.group('following')): return address else: new_address = re.sub('(?P<number>[\d]{1,4}(st|nd|rd|th)\s)', r'\g<number>street ', address, 1) return new_address else: return address addresses = addresses.split('\n') cleaned = [] success = 0 fail = 0 cleaned.append('<body bgcolor="#FACC2E"><center><img src="http://goglobal.dhl-usa.com/common/img/dhl-express-logo.png" alt="Smiley face" height="100" width="350"><br><p>') cleaned.append('<br><h3>Note: Everything before the first comma is the Old Address. Everything after the first comma is the New Address</h13>') cleaned.append('<p><h3>To format the output in Excel, split the columns using "," as the delimiter. </p></h3>') cleaned.append('<p><h2><font color="red">Old Address </font> <font color="black">New Address </font></p></h2>') for address in addresses: dirty = address.strip() if ',' in address: dirty = dirty.replace(',', '') cleaned.append('<font color="red">' + dirty + ', ' + '</font>') address = address.lower() address = re.sub('[^A-Za-z0-9#]+', ' ', address).lstrip() pattern = r"\d+.* +(\d+ .*(" + "|".join(patterns) + "))" address = re.sub(pattern, "\\1", address) address = check_st(address) if 'one ' in address: address = address.replace('one', '1') if 'two' in address: address = address.replace('two', '2') if 'three' in address: address = address.replace('three', '3') if 'four' in address: address = address.replace('four', '4') if 'five' in address: address = address.replace('five', '5') if 'eight' in address: address = address.replace('eight', '8') if 'nine' in address: address = address.replace('nine', '9') if 'fith' in address: address = address.replace('fith', 'fifth') if 'aveneu' in address: address = address.replace('aveneu', 'avenue') if 'united states of america' in address: address = address.replace('united states of america', '') if 'ave americas' in address: address = address.replace('ave americas', 'avenue of the americas') if 'americas avenue' in address: address = address.replace('americas avenue', 'avenue of the americas') if 'avenue of americas' in address: address = address.replace('avenue of americas', 'avenue of the americas') if 'avenue of america ' in address: address = address.replace('avenue of america ', 'avenue of the americas ') if 'ave of the americ' in address: address = address.replace('ave of the americ', 'avenue of the americas') if 'avenue america' in address: address = address.replace('avenue america', 'avenue of the americas') if 'americaz' in address: address = address.replace('americaz', 'americas') if 'ave of america' in address: address = address.replace('ave of america', 'avenue of the americas') if 'amrica' in address: address = address.replace('amrica', 'americas') if 'americans' in address: address = address.replace('americans', 'americas') if 'walk street' in address: address = address.replace('walk street', 'wall street') if 'northend' in address: address = address.replace('northend', 'north end') if 'inth' in address: address = address.replace('inth', 'ninth') if 'aprk' in address: address = address.replace('aprk', 'park') if 'eleven' in address: address = address.replace('eleven', '11') if ' av ' in address: address = 
            address.replace(' av ', ' avenue')
            # ordered list of cleanups; order matters because some patterns overlap
            replacements = [
                ('avnue', 'avenue'),
                ('ofthe americas', 'of the americas'),
                ('aj the', 'of the'),
                ('fifht', 'fifth'),
                ('w46', 'w 46'),
                ('w42', 'w 42'),
                ('95st', '95th st'),
                ('e61 st', 'e 61st'),
                ('driver information', ''),
                ('e87', 'e 87'),
                ('thrd avenus', 'third avenue'),
                ('3r ', '3rd '),
                ('st ates', ''),
                ('east52nd', 'east 52nd'),
                ('authority to leave', ''),
                ('sreet', 'street'),
                ('w47', 'w 47'),
                ('signature required', ''),
                ('direct', ''),
                ('streetapr', 'street'),
                ('steet', 'street'),
                ('w39', 'w 39'),
                ('ave of new york', 'avenue of the americas'),
                ('avenue of new york', 'avenue of the americas'),
                ('brodway', 'broadway'),
                ('w 31 ', 'w 31th '),
                ('w 34 ', 'w 34th '),
                ('w38', 'w 38'),
                ('broadeay', 'broadway'),
                ('w37', 'w 37'),
                ('35street', '35th street'),
                ('eighth avenue', '8th avenue'),
                ('west 33', 'west 33rd'),
                ('34t ', '34th '),
                ('street ave', 'ave'),
                ('avenue of york', 'avenue of the americas'),
                ('avenue aj new york', 'avenue of the americas'),
                ('avenue ofthe new york', 'avenue of the americas'),
                ('e4', 'e 4'),
                ('avenue of nueva york', 'avenue of the americas'),
                ('west end new york', 'west end avenue'),
            ]
            for bad, good in replacements:
                address = address.replace(bad, good)
            #print address
            address = address.split(' ')
            for pattern in patterns:
                try:
                    if address[0].isdigit():
                        continue
                    else:
                        location = address.index(pattern) + 1
                        number_location = address[location]
                        #print address[location]
                        #if 'th' in address[location + 1] or 'floor' in address[location + 1] or '#' in address[location]:
                        #    continue
                except (ValueError, IndexError):
                    continue
                if number_location.isdigit() and len(number_location) <= 4:
                    address = [number_location] + address[:location] + address[location+1:]
                    break
            address = ' '.join(address)
            if '#' in address:
                address = address.replace('#', '')
            #print (address)
            i = 0
            for char in address:
                if char.isdigit():
                    address = address[i:]
                    break
                i += 1
            #print (address)
            if 'plz' in address:
                address = address.replace('plz', 'plaza ', 1)
            # more ordered cleanups; e.g. 'hstreet' must run before 'hst'
            suffix_fixes = [
                ('hstreet', 'h street'),
                ('dstreet', 'd street'),
                ('hst', 'h st'),
                ('dst', 'd st'),
                ('have', 'h ave'),
                ('dave', 'd ave'),
                ('havenue', 'h avenue'),
                ('davenue', 'd avenue'),
            ]
            for bad, good in suffix_fixes:
                address = address.replace(bad, good)
            #print address
            regex = r'(.*)(' + '|'.join(patterns) + r')(.*)'
            address = re.sub(regex, r'\1\2', address).lstrip() + " nyc"
            print (address)
            if 'americasas st' in address:
                address = address.replace('americasas st', 'americas')
            try:
                clean = geolocator.geocode(address)
                x = clean.address
                address, city, zipcode, country = x.split(",")
                address = address.lower()
                ordinals = [
                    ('first', '1st'), ('second', '2nd'), ('third', '3rd'),
                    ('fourth', '4th'), ('fifth', '5th'),
                ]
                for word, num in ordinals:
                    address = address.replace(word, num)
                if ' sixth a' in address:
                    address = address.replace('ave', '')
                    address = address.replace('avenue', '')
                    address = address.replace(' sixth', ' avenue of the americas')
                if ' 6th a' in address:
                    address = address.replace('ave', '')
                    address = address.replace('avenue', '')
                    address = address.replace(' 6th', ' avenue of the americas')
                more_ordinals = [
                    ('seventh', '7th'), ('fashion', '7th'), ('eighth', '8th'),
                    ('ninth', '9th'), ('tenth', '10th'), ('eleventh', '11th'),
                ]
                for word, num in more_ordinals:
                    address = address.replace(word, num)
                zipcode = zipcode[3:]
                to_write = str(address) + ", " + str(zipcode.lstrip()) + ", " + str(clean.latitude) + ", " + str(clean.longitude)
                to_find = str(address)
                #print to_write
                # returns 'can not be cleaned' if street address has no numbers
                if any(i.isdigit() for i in str(address)):
                    with open('/home/MY NAME/Address_Database.txt', 'a+') as database:
                        if to_find not in database.read():
                            database.write(dirty + '|' + to_write + '\n')
                    if 'ncy rd' in address:
                        cleaned.append('<font color="red"> Can not be cleaned </font> <br>')
                        fail += 1
                    elif 'nye rd' in address:
                        cleaned.append('<font color="red"> Can not be cleaned </font> <br>')
                        fail += 1
                    elif 'nye c' in address:
                        cleaned.append('<font color="red"> Can not be cleaned </font> <br>')
                        fail += 1
                    else:
                        cleaned.append(to_write + '<br>')
                        success += 1
                else:
                    cleaned.append('<font color="red"> Can not be cleaned </font> <br>')
                    fail += 1
            except AttributeError:
                cleaned.append('<font color="red"> Can not be cleaned </font> <br>')
                fail += 1
            except ValueError:
                cleaned.append('<font color="red"> Can not be cleaned </font> <br>')
                fail += 1
            except GeocoderTimedOut as e:
                cleaned.append('<font color="red"> Can not be cleaned </font> <br>')
                fail += 1
        total = success + fail
        percent = float(success) / float(total) * 100
        percent = round(percent, 2)
        print percent
        cleaned.append('<br>Accuracy: ' + str(percent) + ' %')
        cleaned.append('</p></center></body>')
        return "\n".join(cleaned)

**UPDATE:** I have switched to running the application under gunicorn, and this solves the issue when I access the application from my home network; however, I still receive the TCP error through my work proxy. There is no error message in my console; the browser just displays the TCP error. I can tell the tool is still working in the background, because a print statement in the loop shows each address still being geocoded. Could this be my work network objecting to a page that stays loading for a long period, and then displaying its proxy error page?

Answer: It sounds like the process is running out of file handles (the default limit is 1024 for regular users). You can check the limit with `grep 'open files' /proc/<webapp pid>/limits` and count the currently open handles with `ls -1 /proc/<pid>/fd | wc -l`.

I think your code is not sending a correct response, which leaves connections open until the process eventually runs out of open file handles (an open socket is a file on POSIX systems).

You can confirm what state the connections are in with `netstat -an | grep <webapp port>` when you see the issue. It should list 1k+ IPs and ports along with their states. My guess is they are in `TIME_WAIT`, which indicates the client is not closing the connection correctly, leaving it to the kernel to garbage-collect them later.

Try:

    from flask import request, make_response

    @app.route("/clean", methods=['POST'])
    def dothing():
        addresses = request.form['addresses']
        resp = make_response(cleanAddress(addresses), 200)
        return resp
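If the work proxy is killing connections that stay silent too long, another option, just a sketch, is to stream partial output so bytes keep flowing while the geocoding runs. It assumes the cleaning loop can be reshaped into a generator, with a hypothetical per-address helper `clean_one_address` standing in for the logic above:

    from flask import Response, request

    @app.route("/clean", methods=['POST'])
    def dothing():
        addresses = request.form['addresses']

        def generate():
            for line in addresses.splitlines():
                # clean_one_address is a stand-in for the per-address logic above
                yield clean_one_address(line) + '<br>\n'

        # streaming the generator sends each cleaned line as soon as it is ready
        return Response(generate(), mimetype='text/html')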
how do i install pip on python 3.5.2?
Question: I downloaded Python 3.5.2 and I am planning on making a keylogger, so I need to install pyhook and pywin, but I don't know how. Everybody recommends installing them with pip, but I don't seem to have that module. I open up IDLE and `import pip`, but I get an error message saying the module isn't installed, even though people say pip comes with versions 3.4+. Where and how do I install this pip module? I am on Windows 10, 64-bit, Python 3.5. Any help is appreciated; I am new, by the way, so go easy on me.
Answer: You need to make sure the `pip` executable is in your `%PATH%` variable. For me, the `pip` executable is located in the `Scripts` directory of my Python installation, which turned out to be `C:\Python34\Scripts`. Find the equivalent location for your installation and then add it to your PATH variable. [Useful SO answer](http://stackoverflow.com/questions/25328818/python-2-7-cannot-pip-on-windows-bash-pip-command-not-found).
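Alternatively, you can invoke pip through the interpreter itself, which sidesteps PATH entirely; since Python 3.4 the bundled `ensurepip` module can bootstrap pip if it is genuinely missing. A quick console check (the package name is a placeholder):

    C:\> python -m pip --version
    C:\> python -m ensurepip --upgrade
    C:\> python -m pip install SomePackage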
Unable to run scrypt with Python 3.5 Question: After applying the recommended changes to the setup files (listed [here](https://bitbucket.org/mhallin/py-scrypt/issues/23/scrypt-for- python-35)), I successfully installed Scrypt on Python 3.5. But I can't figure out how to actually get it to run or to pass its own tests. It complains that "_scrypt" doesn't exist but, per the scrypt directory, it does. My attempt: C:\Users\cmcka\Desktop\mhallin-py-scrypt-119842611360>python setup.py build running build running build_py creating build creating build\lib.win-amd64-3.5 copying scrypt.py -> build\lib.win-amd64-3.5 running build_ext building '_scrypt' extension creating build\temp.win-amd64-3.5 creating build\temp.win-amd64-3.5\Release creating build\temp.win-amd64-3.5\Release\src creating build\temp.win-amd64-3.5\Release\scrypt-1.1.6 creating build\temp.win-amd64-3.5\Release\scrypt-1.1.6\lib creating build\temp.win-amd64-3.5\Release\scrypt-1.1.6\lib\crypto creating build\temp.win-amd64-3.5\Release\scrypt-1.1.6\lib\scryptenc creating build\temp.win-amd64-3.5\Release\scrypt-1.1.6\lib\util creating build\temp.win-amd64-3.5\Release\scrypt-windows-stubs C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHAVE_CONFIG_H -Dinline=__inline -Iscrypt-1.1.6 -Iscrypt-1.1.6/lib -Iscrypt-1.1.6/lib/scryptenc -Iscrypt-1.1.6/lib/crypto -Iscrypt-1.1.6/lib/util -Ic:\OpenSSL-Win64\include -Iscrypt-windows-stubs/include -IC:\Python35\include -IC:\Python35\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" /Tcsrc/scrypt.c /Fobuild\temp.win-amd64-3.5\Release\src/scrypt.obj scrypt.c C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHAVE_CONFIG_H -Dinline=__inline -Iscrypt-1.1.6 -Iscrypt-1.1.6/lib -Iscrypt-1.1.6/lib/scryptenc -Iscrypt-1.1.6/lib/crypto -Iscrypt-1.1.6/lib/util -Ic:\OpenSSL-Win64\include -Iscrypt-windows-stubs/include -IC:\Python35\include -IC:\Python35\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" /Tcscrypt-1.1.6/lib/crypto/crypto_aesctr.c /Fobuild\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/crypto/crypto_aesctr.obj crypto_aesctr.c C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHAVE_CONFIG_H -Dinline=__inline -Iscrypt-1.1.6 -Iscrypt-1.1.6/lib -Iscrypt-1.1.6/lib/scryptenc -Iscrypt-1.1.6/lib/crypto -Iscrypt-1.1.6/lib/util -Ic:\OpenSSL-Win64\include -Iscrypt-windows-stubs/include -IC:\Python35\include -IC:\Python35\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows 
Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" /Tcscrypt-1.1.6/lib/crypto/crypto_scrypt-nosse.c /Fobuild\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/crypto/crypto_scrypt-nosse.obj crypto_scrypt-nosse.c C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHAVE_CONFIG_H -Dinline=__inline -Iscrypt-1.1.6 -Iscrypt-1.1.6/lib -Iscrypt-1.1.6/lib/scryptenc -Iscrypt-1.1.6/lib/crypto -Iscrypt-1.1.6/lib/util -Ic:\OpenSSL-Win64\include -Iscrypt-windows-stubs/include -IC:\Python35\include -IC:\Python35\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" /Tcscrypt-1.1.6/lib/crypto/sha256.c /Fobuild\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/crypto/sha256.obj sha256.c C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHAVE_CONFIG_H -Dinline=__inline -Iscrypt-1.1.6 -Iscrypt-1.1.6/lib -Iscrypt-1.1.6/lib/scryptenc -Iscrypt-1.1.6/lib/crypto -Iscrypt-1.1.6/lib/util -Ic:\OpenSSL-Win64\include -Iscrypt-windows-stubs/include -IC:\Python35\include -IC:\Python35\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" /Tcscrypt-1.1.6/lib/scryptenc/scryptenc.c /Fobuild\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/scryptenc/scryptenc.obj scryptenc.c scrypt-1.1.6/lib/scryptenc/scryptenc.c(111): warning C4244: '=': conversion from 'std::size_t' to 'double', possible loss of data scrypt-1.1.6/lib/scryptenc/scryptenc.c(172): warning C4101: 'fd': unreferenced local variable scrypt-1.1.6/lib/scryptenc/scryptenc.c(173): warning C4101: 'lenread': unreferenced local variable C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHAVE_CONFIG_H -Dinline=__inline -Iscrypt-1.1.6 -Iscrypt-1.1.6/lib -Iscrypt-1.1.6/lib/scryptenc -Iscrypt-1.1.6/lib/crypto -Iscrypt-1.1.6/lib/util -Ic:\OpenSSL-Win64\include -Iscrypt-windows-stubs/include -IC:\Python35\include -IC:\Python35\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" /Tcscrypt-1.1.6/lib/scryptenc/scryptenc_cpuperf.c /Fobuild\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/scryptenc/scryptenc_cpuperf.obj scryptenc_cpuperf.c C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHAVE_CONFIG_H -Dinline=__inline -Iscrypt-1.1.6 -Iscrypt-1.1.6/lib 
-Iscrypt-1.1.6/lib/scryptenc -Iscrypt-1.1.6/lib/crypto -Iscrypt-1.1.6/lib/util -Ic:\OpenSSL-Win64\include -Iscrypt-windows-stubs/include -IC:\Python35\include -IC:\Python35\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" /Tcscrypt-1.1.6/lib/util/memlimit.c /Fobuild\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/util/memlimit.obj memlimit.c scrypt-1.1.6/lib/util/memlimit.c(331): warning C4244: '=': conversion from 'double' to 'std::size_t', possible loss of data C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHAVE_CONFIG_H -Dinline=__inline -Iscrypt-1.1.6 -Iscrypt-1.1.6/lib -Iscrypt-1.1.6/lib/scryptenc -Iscrypt-1.1.6/lib/crypto -Iscrypt-1.1.6/lib/util -Ic:\OpenSSL-Win64\include -Iscrypt-windows-stubs/include -IC:\Python35\include -IC:\Python35\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" /Tcscrypt-1.1.6/lib/util/warn.c /Fobuild\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/util/warn.obj warn.c C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHAVE_CONFIG_H -Dinline=__inline -Iscrypt-1.1.6 -Iscrypt-1.1.6/lib -Iscrypt-1.1.6/lib/scryptenc -Iscrypt-1.1.6/lib/crypto -Iscrypt-1.1.6/lib/util -Ic:\OpenSSL-Win64\include -Iscrypt-windows-stubs/include -IC:\Python35\include -IC:\Python35\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" /Tcscrypt-windows-stubs/gettimeofday.c /Fobuild\temp.win-amd64-3.5\Release\scrypt-windows-stubs/gettimeofday.obj gettimeofday.c C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\OpenSSL-Win64\lib /LIBPATH:C:\Python35\libs /LIBPATH:C:\Python35\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\um\x64" libeay32.lib advapi32.lib /EXPORT:PyInit__scrypt build\temp.win-amd64-3.5\Release\src/scrypt.obj build\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/crypto/crypto_aesctr.obj build\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/crypto/crypto_scrypt-nosse.obj build\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/crypto/sha256.obj 
build\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/scryptenc/scryptenc.obj build\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/scryptenc/scryptenc_cpuperf.obj build\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/util/memlimit.obj build\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/util/warn.obj build\temp.win-amd64-3.5\Release\scrypt-windows-stubs/gettimeofday.obj /OUT:build\lib.win-amd64-3.5\_scrypt.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\src\_scrypt.cp35-win_amd64.lib scrypt.obj : warning LNK4197: export 'PyInit__scrypt' specified multiple times; using first specification Creating library build\temp.win-amd64-3.5\Release\src\_scrypt.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\src\_scrypt.cp35-win_amd64.exp Generating code Finished generating code C:\Users\cmcka\Desktop\mhallin-py-scrypt-119842611360>python setup.py install running install running bdist_egg running egg_info creating scrypt.egg-info writing scrypt.egg-info\PKG-INFO writing top-level names to scrypt.egg-info\top_level.txt writing dependency_links to scrypt.egg-info\dependency_links.txt writing manifest file 'scrypt.egg-info\SOURCES.txt' reading manifest file 'scrypt.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' writing manifest file 'scrypt.egg-info\SOURCES.txt' installing library code to build\bdist.win-amd64\egg running install_lib running build_py running build_ext creating build\bdist.win-amd64 creating build\bdist.win-amd64\egg copying build\lib.win-amd64-3.5\scrypt.py -> build\bdist.win-amd64\egg copying build\lib.win-amd64-3.5\_scrypt.cp35-win_amd64.pyd -> build\bdist.win-amd64\egg byte-compiling build\bdist.win-amd64\egg\scrypt.py to scrypt.cpython-35.pyc creating stub loader for _scrypt.cp35-win_amd64.pyd byte-compiling build\bdist.win-amd64\egg\_scrypt.py to _scrypt.cpython-35.pyc creating build\bdist.win-amd64\egg\EGG-INFO copying scrypt.egg-info\PKG-INFO -> build\bdist.win-amd64\egg\EGG-INFO copying scrypt.egg-info\SOURCES.txt -> build\bdist.win-amd64\egg\EGG-INFO copying scrypt.egg-info\dependency_links.txt -> build\bdist.win-amd64\egg\EGG-INFO copying scrypt.egg-info\top_level.txt -> build\bdist.win-amd64\egg\EGG-INFO writing build\bdist.win-amd64\egg\EGG-INFO\native_libs.txt zip_safe flag not set; analyzing archive contents... 
__pycache__._scrypt.cpython-35: module references __file__ creating dist creating 'dist\scrypt-0.7.1-py3.5-win-amd64.egg' and adding 'build\bdist.win-amd64\egg' to it removing 'build\bdist.win-amd64\egg' (and everything under it) Processing scrypt-0.7.1-py3.5-win-amd64.egg creating c:\python35\lib\site-packages\scrypt-0.7.1-py3.5-win-amd64.egg Extracting scrypt-0.7.1-py3.5-win-amd64.egg to c:\python35\lib\site-packages Adding scrypt 0.7.1 to easy-install.pth file Installed c:\python35\lib\site-packages\scrypt-0.7.1-py3.5-win-amd64.egg Processing dependencies for scrypt==0.7.1 Finished processing dependencies for scrypt==0.7.1 C:\Users\cmcka\Desktop\mhallin-py-scrypt-119842611360>python setup.py test running test running egg_info writing dependency_links to scrypt.egg-info\dependency_links.txt writing top-level names to scrypt.egg-info\top_level.txt writing scrypt.egg-info\PKG-INFO reading manifest file 'scrypt.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' writing manifest file 'scrypt.egg-info\SOURCES.txt' running build_ext copying build\lib.win-amd64-3.5\_scrypt.cp35-win_amd64.pyd -> error: [WinError 126] The specified module could not be found C:\Users\cmcka\Desktop\mhallin-py-scrypt-119842611360>python Python 3.5.1 (v3.5.1:37a07cee5969, Dec 6 2015, 01:54:25) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import scrypt Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\cmcka\Desktop\mhallin-py-scrypt-119842611360\scrypt.py", line 11, in <module> _scrypt = cdll.LoadLibrary(imp.find_module('_scrypt')[1]) File "C:\Python35\lib\ctypes\__init__.py", line 425, in LoadLibrary return self._dlltype(name) File "C:\Python35\lib\ctypes\__init__.py", line 347, in __init__ self._handle = _dlopen(self._name, mode) OSError: [WinError 126] The specified module could not be found >>> ^DC:\Python35\Lib\site-packages\scrypt-0.7.1-py3.5-win-amd64.egg ^C C:\Users\cmcka\Desktop\mhallin-py-scrypt-119842611360>cd C:\Python35\Lib\site-packages\scrypt-0.7.1-py3.5-win-amd64.egg C:\Python35\Lib\site-packages\scrypt-0.7.1-py3.5-win-amd64.egg>dir Volume in drive C is Windows Volume Serial Number is CA5B-5BBB Directory of C:\Python35\Lib\site-packages\scrypt-0.7.1-py3.5-win-amd64.egg 08/11/2016 02:36 PM <DIR> . 08/11/2016 02:36 PM <DIR> .. 08/11/2016 02:36 PM <DIR> EGG-INFO 08/11/2016 02:36 PM 7,768 scrypt.py 08/11/2016 02:36 PM 31,232 _scrypt.cp35-win_amd64.pyd 08/11/2016 02:36 PM 306 _scrypt.py 08/11/2016 02:36 PM <DIR> __pycache__ 3 File(s) 39,306 bytes 4 Dir(s) 120,141,914,112 bytes free C:\Python35\Lib\site-packages\scrypt-0.7.1-py3.5-win-amd64.egg> Any suggestions would be greatly appreciated. Edit: The error occurs regardless of where the 'python' command is run. Answer: After using Process Monitor to track the activity of Scrypt on my working desktop and my non-working laptop, I discovered that it wanted OpenSSL's libeay32.dll. In the beginning, I had chosen to install its DLLs in the /bin directory but, unfortunately, Scrypt doesn't check there. After reinstalling OpenSSL and choosing to install those in the Windows directory, it ran just fine. Scrypt is in need of a serious update.
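As a quick sanity check for this class of problem, you can ask Python whether Windows can locate the OpenSSL DLL at all before importing scrypt; if this prints `None`, the DLL is not on the search path:

    import ctypes.util

    # Returns a path/name if libeay32.dll is on the DLL search path, else None
    print(ctypes.util.find_library('libeay32'))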
Graphviz executables not found Question: I'm familiar with the various threads that already exist regarding this problem. I'm on a Windows 7 machine. I'm just trying to run the example code to draw a decision tree: from sklearn.datasets import load_iris from sklearn import tree clf = tree.DecisionTreeClassifier() iris = load_iris() clf = clf.fit(iris.data, iris.target) from sklearn.externals.six import StringIO import pydotplus dot_data = StringIO() tree.export_graphviz(clf, out_file=dot_data) graph = pydotplus.graph_from_dot_data(dot_data.getvalue()) graph.write_pdf("iris.pdf") I installed graphviz and added it as a PATH variable. I installed pydot (now pydotplus) after installing the python's graphviz library. I still get the error: InvocationException: GraphViz's executables not found Answer: [It looks like the installer isn't setting the PATH variable for you](http://www.graphviz.org/Download_windows.php), you'll need to add the installation folder of Graphviz to your PATH manually.
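If you would rather not edit the system PATH, you can also append Graphviz's `bin` directory at runtime before pydotplus runs. The path below is an assumption (the typical default for Graphviz 2.38 on Windows); adjust it to wherever you actually installed Graphviz:

    import os

    os.environ["PATH"] += os.pathsep + r"C:\Program Files (x86)\Graphviz2.38\bin"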
Simple /etc/shadow Cracker Question: I'm trying to get this shadow file cracker working but I keep getting a TypeError: integer required. I'm sure its the way I'm using the bytearray function. I've tried creating a new object with bytearray for the "word" and the "salt" however to no avail. So then I tried passing the bytearray constructor to the pbkdf2 function and still nothing. I will post the code: #!/usr/bin/python # -*- coding: utf-8 -*- import hashlib, binascii import os,sys import crypt import codecs from datetime import datetime,timedelta import argparse today = datetime.today() # Takes in user and the encrypted passwords and does a simple # Brute Force Attack useing the '==' operator. SHA* is defined by # a number b/w $, the char's b/w the next $ marker would be the # rounds, then the salt, and after that the hashed password. # object.split("some symbol or char")[#], where # is the # location/index within the list def testPass(cryptPass,user): digest = hashlib.sha512 dicFile = open ('Dictionary.txt','r') ctype = cryptPass.split("$")[1] if ctype == '6': print "[+] Hash type SHA-512 detected ..." print "[+] Be patien ..." rounds = cryptPass.split("$")[2].strip('rounds=') salt = cryptPass.split("$")[3] print "[DEBUG]: " + rounds print "[DEBUG]: " + salt # insalt = "$" + ctype + "$" + salt + "$" << COMMENTED THIS OUT for word in dicFile.readlines(): word = word.strip('\n') print "[DEBUG]: " + word cryptWord = hashlib.pbkdf2_hmac(digest().name,bytearray(word, 'utf-8'),bytearray(salt, 'utf-8'), rounds) if (cryptWord == cryptPass): time = time = str(datetime.today() - today) print "[+] Found password for the user: " + user + " ====> " + word + " Time: "+time+"\n" return else: print "Nothing found, bye!!" exit # argparse is used in main to parse arguments pass by the user. # Path to shadow file is required as a argument. def main(): parse = argparse.ArgumentParser(description='A simple brute force /etc/shadow .') parse.add_argument('-f', action='store', dest='path', help='Path to shadow file, example: \'/etc/shadow\'') argus=parse.parse_args() if argus.path == None: parse.print_help() exit else: passFile = open (argus.path,'r', 1) # ADDING A 1 INDICATES A BUFFER OF A for line in passFile.readlines(): # SINGLE LINE '1<=INDICATES line = line.replace("\n","").split(":") # EXACT BUFFER SIZE if not line[1] in [ 'x', '*','!' ]: user = line[0] cryptPass = line[1] testPass(cryptPass,user) if __name__=="__main__": main() OUTPUT: [+] Hash type SHA-512 detected ... [+] Be patien ... [DEBUG]: 65536 [DEBUG]: A9UiC2ng [DEBUG]: hellocat Traceback (most recent call last): File "ShadowFileCracker.py", line 63, in <module> main() File "ShadowFileCracker.py", line 60, in main testPass(cryptPass,user) File "ShadowFileCracker.py", line 34, in testPass cryptWord = hashlib.pbkdf2_hmac(digest().name,bytearray(word, 'utf-8'),bytearray(salt, 'utf-8'), rounds) TypeError: an integer is required Answer: The `rounds` variable needs to be an integer, not a string. The correct line should be: rounds = int(cryptPass.split("$")[2].strip('rounds=')) Also, `strip()` might not be the best method for removing the leading "rounds=". It will work, but it strips a set of characters and not a string. A slightly better method would be: rounds = int(cryptPass.split("$")[2].split("=")[1])
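To make the field layout concrete: a `$6$rounds=N$salt$hash` string splits cleanly on `$`, so the whole parse can be done in one pass. A small sketch; the hash value here is made up, the salt matches the question's debug output, and it assumes the `rounds=` field is present as in the question:

    crypt_pass = '$6$rounds=65536$A9UiC2ng$abcdefghij'  # made-up hash for illustration
    _, ctype, rounds_field, salt, hashed = crypt_pass.split('$')
    rounds = int(rounds_field.split('=')[1])
    print ctype, rounds, salt  # 6 65536 A9UiC2ng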
python assign variable, test, if true skip else, if not re-assign variable Question: pulling hair out. (sorry frustrated) ok, so i'm trying to do an if: else in python, only i can't get it to work properly so i'm hoping that someone can help. i'll try and be as specific as possible. what i'm tring to do -> i'm assigning a variable to check if something is true, sometimes it is sometimes it's not. if that variable is true then i want it to skip the else statement and print out the result. (once fully implemented it will return resutlt to script instead of printing) python code: import re os_version = open('/etc/os-release') for line in os_version: OS = re.search(r'\AID_LIKE=\D[A-Za-z]+', line) if OS: print(str(OS.group()).lstrip('ID_LIKE="')) else: OS = re.search(r'\AID=\D[A-Za-z]+', line) print(str(OS.group()).lstrip('ID="')) when i run it like this i get a NoneType object has no group if i have an if OS: statement indented under the else, before the print, then i get both results back, even if i have a 'break' after the first print depending on what platform of linux that i'm running on will depend on what the results should be. example (bash code) #!/bin/bash VAR0=`cat /etc/os-release | grep -w ID_LIKE | cut -f2 -d= |\ tr -d [=\"=] | cut -f1 -d' '` VAR0="${VAR0:=`cat /etc/os-release | grep -w ID | cut -f2 -d= |\ tr -d [=\"=]`}" echo $VAR0 if [ "$VAR0" = 'debian' ]; then echo 'DEBIAN based' elif [ "$VAR0" = 'rhel' ]; then echo 'RHEL based' elif [ "$VAR0" = 'suse' ]; then echo 'SUSE based' else echo 'OTHER based' fi this is the return that comes back from the bash code-> bash -o xtrace os-version.sh ++ cat /etc/os-release ++ grep -w ID_LIKE ++ cut -f2 -d= ++ tr -d '[="=]' ++ cut -f1 '-d ' + VAR0=rhel + VAR0=rhel + echo rhel rhel + '[' rhel = debian ']' + '[' rhel = rhel ']' + echo 'RHEL based' RHEL based from a different os-> bash -o xtrace test_2.sh ++ tr -d '[="=]' ++ cut -f1 '-d ' ++ cut -f2 -d= ++ grep -w ID_LIKE ++ cat /etc/os-release + VAR0= ++ tr -d '[="=]' ++ grep -w ID ++ cut -f2 -d= ++ cat /etc/os-release + VAR0=debian + echo debian debian + '[' debian = debian ']' + echo 'DEBIAN based' DEBIAN based as you can see the VAR0 on the first run is empty and it then goes to the second. that's what i'm trying to do with python. can someone help? i have a feeling that i'm just overlooking something simple after starting at the screen for so long. thanks em edit for joel: my code now and the results=> code: #!/usr/bin/python3 import re os_version = open('/etc/os-release') for line in os_version: OS = re.search(r'\AID_LIKE=\D[A-Za-z]+', line) if OS: print(str(OS.group()).lstrip('ID_LIKE="')) break else: OS = re.search(r'\AID=\D[A-Za-z]+', line) if OS: print(str(OS.group()).lstrip('ID="')) results from a debian system=> python test.os-version_2.py debian results from a debian based system=> python test.os-version_2.py raspbian debian results from centos7=> python test.os-version_2.py centos rhel results from opensuse=> python test.os-version_2.py opensuse suse results from linuxmint(what i'm running now, even though it's ubuntu/debian basied)=> python test.os-version_2.py linuxmint so since some things are responding differently and there are so many different distro's based off of a couple, i'm trying to narrow down the results to the fewest possible. each one has a different way that they do things and to try and get a script that is cross platform working as best as it can, i'm trying to narrow things down. 
If you `cat /etc/os-release` you will see that different OSes display things differently.

edit answer: I don't know why this is yet, but assigning the result to a variable instead of printing directly works for me. Maybe someone can explain that. answer code:

    #!/usr/bin/python3
    import re

    os_version = open('/etc/os-release')
    for line in os_version:
        OS = re.search(r'\AID_LIKE=\D[A-Za-z]+', line)
        if OS is not None:
            base_distro = (str(OS.group()).lstrip('ID_LIKE="'))
        else:
            OS = re.search(r'\AID=\D[A-Za-z]+', line)
            if OS is not None:
                base_distro = (str(OS.group()).lstrip('ID="'))
    print(base_distro)

If you put that into a script and run it, you will see what I'm looking for. Thanks everyone for helping.

Answer: Getting the system platform is easy with Python 2.7; see the [platform import](http://www.pythonforbeginners.com/systems-programming/how-to-use-the-platform-module-in-python):

    import platform
    print platform.dist()[0], type(platform.dist()[0])

which returns, on my platform:

    debian <type 'str'>

To clarify: `OS = re.search(r'\AID_LIKE=\D[A-Za-z]+', line)` returns `None` (a NoneType object) when there is no match, so calling `.group()` on it fails. Note that `platform.dist()` will be deprecated. Below is a method using the new way (just a basic method, might not be suitable to all situations, but it works for what I think you have described; to do your comparisons, just use a `dict`).

Edit - a super simple way using subprocess, if `platform` doesn't work:

    import subprocess

    output = subprocess.check_output(["cat", "/etc/os-release"])
    with open('text.txt', 'w+') as fd:
        fd.write(output)

    with open('text.txt', 'r') as text_output:
        for os_version in text_output:
            try:
                print os_version.split("ID_LIKE=", 1)[1]
            except IndexError:
                continue

Here we just call the actual command, take the output as a string, write it to a file, read the lines back, and split each line at the keyword we want, and bam! Want the OS ID? Then add `print os_version.split("ID=", 1)[1]`.

Again the output is:

    debian
    raspberry pi
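For what it's worth, `/etc/os-release` is just `KEY=value` pairs, so a tiny parser avoids the regex pitfalls altogether. A sketch (Python 3; the function name and fallback value are my own):

    def get_base_distro(path="/etc/os-release"):
        fields = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if "=" in line:
                    key, _, value = line.partition("=")
                    fields[key] = value.strip('"')
        # ID_LIKE names the parent distro(s); fall back to ID for base distros
        base = fields.get("ID_LIKE") or fields.get("ID") or "unknown"
        return base.split()[0]

    print(get_base_distro())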
I created an ndb.model with a duplicated property name Question: I have an NDB Model class in Python on App Engine. I just noticed that my model's class definition repeats a property definition.

    from google.appengine.ext import ndb

    class Account(ndb.Model):
        username = ndb.StringProperty()
        email = ndb.StringProperty()
        started = ndb.DateTimeProperty(auto_now=False)
        #...
        started = ndb.DateTimeProperty(auto_now=False)

The bug has been there for a while and never caused an issue while creating objects or saving and reading the _started_ property. My worry is that if I delete one of the copies, the model won't align with what is stored in the datastore. What is the correct way to resolve this issue? Answer: Just remove one of the `started` lines and you are all set. Only one property (the second definition) was ever in effect and saved to the datastore; you can check this in the Datastore entities tab.
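This is safe because a duplicate name in a Python class body is an ordinary rebinding: the second assignment overwrites the first before the class object even exists, so ndb only ever saw a single `started` property. A quick illustration in plain Python:

    class Demo(object):
        x = 1
        x = 2  # rebinds the same class attribute; only this value survives

    print(Demo.x)          # 2
    print('x' in vars(Demo))  # True, and it appears exactly once in the namespace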
What's a good way to unit test the main function that calls all the simple functions? Question: I have this code with lots of small functions `f_small_1(), f_small_2(), ... f_small_10()`. These are easy to unit test individually. But in the real world, you often have a complex function `f_put_things_together()` that needs to call the smaller ones. What is a good way to unit test `f_put_things_together`?

    func f_put_things_together() {
        a = f_small_1()
        if a {
            f_small_2()
        } else {
            f_small_3()
        }
        f_small_4()
        ...
        f_small_10()
    }

I started to write tests, but I have the impression that I'm doing the work twice, since I have already tested the smaller functions. I could have `f_put_things_together` take objects a1, a2, ..., a10 as arguments and call `a1.f_small_1()`, `a2.f_small_2()`, ... so that I can mock these objects individually, but this doesn't feel right to me: if I didn't have to write unit tests, all these functions would logically belong to the same class, and I don't want unclear code for the sake of testing.

This is somewhat language-agnostic and somewhat not, as languages like Python enable you to replace methods of an object. An answer that is language-agnostic is best; otherwise, I'm currently using Go.

Answer: The general case that you've shown in your example demonstrates the need to test both the simple functions and the aggregation of their results. When testing the aggregating function, you really want to fake the results of the smaller functions it depends on. So, you're on the right track.

However, if you're having trouble writing unit tests for your code, you probably have one of these classes of problems:

* You've somehow violated the SOLID principles ([description here](http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod)). In other words, something is deficient in the micro-architecture of your code.
* You're trying to fake someone else's interface and you're having trouble matching the actual behavior of their implementation with your fake implementation. (This doesn't seem to be the case here).
* The objects that you're testing with require a bunch of data setup that should be simplified, at least within the context of testing (also, doesn't appear to be the case).

**If your tests are painful to write, they're telling you something!** With experience, you'll be able to quickly pick up on the pain point in your implementation that the tests are indicating.

Unfortunately, your example is a bit small and abstract. To be more precise, I don't know what `f_small_1` ... `f_small_10` do. So, with more details, I might make more precise recommendations for some small refactoring with a big payoff for your testing.

I can say, however, that `f_put_things_together` looks a bit big to me. This could be a violation of the Single Responsibility Principle (the 'S' in SOLID). I see 10 function calls at a minimum, along with some branching logic. You'll need to write a separate test for each branch path through your function, and the less branching you have in a particular function, the fewer tests you'll need to write. For more information, take a look at [Cyclomatic Complexity](https://en.wikipedia.org/wiki/Cyclomatic_complexity). In this case, it seems the method has a low CC, so this likely isn't the problem. The ten calls to smaller functions do make me wonder a bit.

It looks like, for simplicity, you've left out capturing the return values of these function calls and the logic for aggregating the results. In that case, yes, you really do want to fake the results of the smaller functions and then write a few tests to check the algorithm you're using to aggregate everything.

Or perhaps the functions are all void and you need to verify that everything happened, and maybe that it happened in the right order. In that case, you're looking at writing more of an interaction-based test. You'll still want to put those smaller function calls behind an interface / class / object that you fake. In this case, the fake should capture the calls and the call order so that your test can make the assertions that are really important.

If some of the smaller functions are related to each other, it might make sense to group them together in a separate class. Then your test for `f_put_things_together` will have fewer dependencies that need to be faked. The new class also needs to be tested, but it's much easier to test two smaller methods than one large one that has too much responsibility.

### Summary

This is actually a very good question, apart from being a bit vague. If you can provide a more detailed example, perhaps I could make more detailed recommendations. The bottom line is this: if your tests are difficult to write, then either you need some help / coaching on writing tests, or something about the design of your implementation is off and your tests are trying to tell you what it is.
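To make the faking concrete, here is a minimal Python sketch of testing just the aggregation and branching; the module name `mymodule` is a hypothetical stand-in for wherever your real functions live:

    import unittest
    from unittest import mock

    import mymodule  # hypothetical module containing the functions under test

    class PutThingsTogetherTest(unittest.TestCase):
        def test_takes_small_2_branch_when_small_1_is_true(self):
            # Replace the small functions so only the wiring is exercised
            with mock.patch.object(mymodule, "f_small_1", return_value=True), \
                 mock.patch.object(mymodule, "f_small_2") as m2, \
                 mock.patch.object(mymodule, "f_small_3") as m3:
                mymodule.f_put_things_together()
                m2.assert_called_once_with()  # branch taken
                m3.assert_not_called()        # branch not taken

The small functions stay covered by their own tests; here they are faked so the test exercises only the logic inside `f_put_things_together`.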
python regular expression - re.findall() in list Question: This is my list: lista=[u'REG_S_3_UMTS_0_0 (RNC)', u'REG_S_3_UMTS_0_1 (RNC)', u'REG_S_3_UMTS_0_2 (RNC)', u'REG_S_2_GSM_NORT_CBSP_bsc_0_0 (BSC)', u'REG_S_2_GSM_NORT_CBSP_bsc_0_1 (BSC)', u'REG_S_2_GSM_NORT_CBSP_bsc_0_2 (BSC)', u'REG_S_3_GSM_ERIC_CBSP_bsc_0_0 (BSC)', u'REG_S_3_GSM_ERIC_CBSP_bsc_0_1 (BSC)', u'REG_S_3_GSM_ERIC_CBSP_bsc_0_2 (BSC)', u'REG_S_3_GSM_HUAP_CBSM_bsc_0_0 (BSC)', u'REG_S_3_GSM_HUAP_CBSM_bsc_0_1 (BSC)', u'REG_S_3_GSM_HUAP_CBSM_bsc_0_2 (BSC)', u'REG_S_3_GSM_HUA_CBSM_bsc_0_0 (BSC)', u'REG_S_3_GSM_HUA_CBSM_bsc_0_1 (BSC)', u'REG_S_3_GSM_HUA_CBSM_bsc_0_2 (BSC)', u'REG_S_3_GSM_IPAC_SABP_bsc_0_0 (BSC)', u'REG_S_3_GSM_IPAC_SABP_bsc_0_1 (BSC)', u'REG_S_3_GSM_IPAC_SABP_bsc_0_2 (BSC)', u'REG_S_3_GSM_NOKI_CLNS_bsc_0_0 (BSC)', u'REG_S_3_GSM_NOKI_CLNS_bsc_0_1 (BSC)', u'REG_S_3_GSM_NOKI_CLNS_bsc_0_2 (BSC)', u'REG_S_3_GSM_NOKI_RFC1_bsc_0_0 (BSC)', u'REG_S_3_GSM_NOKI_RFC1_bsc_0_1 (BSC)', u'REG_S_3_GSM_NOKI_RFC1_bsc_0_2 (BSC)', u'REG_S_3_GSM_NORT_CBSP_bsc_0_0 (BSC)', u'REG_S_3_GSM_NORT_CBSP_bsc_0_1 (BSC)', u'REG_S_3_GSM_NORT_CBSP_bsc_0_2 (BSC)', u'REG_S_3_GSM_SIEM_BSCI_bsc_0_0 (BSC)', u'REG_S_3_GSM_SIEM_BSCI_bsc_0_1 (BSC)', u'REG_S_3_GSM_SIEM_BSCI_bsc_0_2 (BSC)', u'REG_S_GSM_ERIC_CBSP_bsc_0_0 (BSC)', u'REG_S_GSM_ERIC_CBSP_bsc_0_1 (BSC)', u'REG_S_GSM_ERIC_CBSP_bsc_0_2 (BSC)', u'REG_S_GSM_HUAP_CBSM_bsc_0_0 (BSC)', u'REG_S_GSM_HUAP_CBSM_bsc_0_1 (BSC)', u'REG_S_GSM_HUAP_CBSM_bsc_0_2 (BSC)', u'REG_S_GSM_HUA_CBSM_bsc_0_0 (BSC)', u'REG_S_GSM_HUA_CBSM_bsc_0_1 (BSC)', u'REG_S_GSM_HUA_CBSM_bsc_0_2 (BSC)', u'REG_S_GSM_NORT_CBSP_bsc_0_0 (BSC)', u'REG_S_GSM_NORT_CBSP_bsc_0_1 (BSC)', u'REG_S_GSM_NORT_CBSP_bsc_0_2 (BSC)', u'Pool ID: 200'] And that's my function: def Filter_List(lista): string = ''.join(lista) match = re.findall(r"\(([A-Z]+)\)|Pool ID", string) return match As a result I get: [u'RNC', u'RNC', u'RNC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u'BSC', u''] But the last element (it should be: Pool ID) doesn't display.There's only: u'' . Does anyone know how should I change my expression? Thanks in advance !!! Answer: Note that [**`re.findall`**](https://docs.python.org/2/library/re.html#re.findall) returns (list of) tuples if a regex pattern has capture groups defined. 
Remove it: import re lista=[u'REG_S_3_UMTS_0_0 (RNC)', u'REG_S_3_UMTS_0_1 (RNC)', u'REG_S_3_UMTS_0_2 (RNC)', u'REG_S_2_GSM_NORT_CBSP_bsc_0_0 (BSC)', u'REG_S_2_GSM_NORT_CBSP_bsc_0_1 (BSC)', u'REG_S_2_GSM_NORT_CBSP_bsc_0_2 (BSC)', u'REG_S_3_GSM_ERIC_CBSP_bsc_0_0 (BSC)', u'REG_S_3_GSM_ERIC_CBSP_bsc_0_1 (BSC)', u'REG_S_3_GSM_ERIC_CBSP_bsc_0_2 (BSC)', u'REG_S_3_GSM_HUAP_CBSM_bsc_0_0 (BSC)', u'REG_S_3_GSM_HUAP_CBSM_bsc_0_1 (BSC)', u'REG_S_3_GSM_HUAP_CBSM_bsc_0_2 (BSC)', u'REG_S_3_GSM_HUA_CBSM_bsc_0_0 (BSC)', u'REG_S_3_GSM_HUA_CBSM_bsc_0_1 (BSC)', u'REG_S_3_GSM_HUA_CBSM_bsc_0_2 (BSC)', u'REG_S_3_GSM_IPAC_SABP_bsc_0_0 (BSC)', u'REG_S_3_GSM_IPAC_SABP_bsc_0_1 (BSC)', u'REG_S_3_GSM_IPAC_SABP_bsc_0_2 (BSC)', u'REG_S_3_GSM_NOKI_CLNS_bsc_0_0 (BSC)', u'REG_S_3_GSM_NOKI_CLNS_bsc_0_1 (BSC)', u'REG_S_3_GSM_NOKI_CLNS_bsc_0_2 (BSC)', u'REG_S_3_GSM_NOKI_RFC1_bsc_0_0 (BSC)', u'REG_S_3_GSM_NOKI_RFC1_bsc_0_1 (BSC)', u'REG_S_3_GSM_NOKI_RFC1_bsc_0_2 (BSC)', u'REG_S_3_GSM_NORT_CBSP_bsc_0_0 (BSC)', u'REG_S_3_GSM_NORT_CBSP_bsc_0_1 (BSC)', u'REG_S_3_GSM_NORT_CBSP_bsc_0_2 (BSC)', u'REG_S_3_GSM_SIEM_BSCI_bsc_0_0 (BSC)', u'REG_S_3_GSM_SIEM_BSCI_bsc_0_1 (BSC)', u'REG_S_3_GSM_SIEM_BSCI_bsc_0_2 (BSC)', u'REG_S_GSM_ERIC_CBSP_bsc_0_0 (BSC)', u'REG_S_GSM_ERIC_CBSP_bsc_0_1 (BSC)', u'REG_S_GSM_ERIC_CBSP_bsc_0_2 (BSC)', u'REG_S_GSM_HUAP_CBSM_bsc_0_0 (BSC)', u'REG_S_GSM_HUAP_CBSM_bsc_0_1 (BSC)', u'REG_S_GSM_HUAP_CBSM_bsc_0_2 (BSC)', u'REG_S_GSM_HUA_CBSM_bsc_0_0 (BSC)', u'REG_S_GSM_HUA_CBSM_bsc_0_1 (BSC)', u'REG_S_GSM_HUA_CBSM_bsc_0_2 (BSC)', u'REG_S_GSM_NORT_CBSP_bsc_0_0 (BSC)', u'REG_S_GSM_NORT_CBSP_bsc_0_1 (BSC)', u'REG_S_GSM_NORT_CBSP_bsc_0_2 (BSC)', u'Pool ID: 200'] string = ''.join(lista) match = re.findall(r"\([A-Z]+\)|Pool ID", string) print(match) See [this Python demo](https://ideone.com/3lrsnj) returning `[u'(RNC)', u'(RNC)', u'(RNC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'(BSC)', u'Pool ID']`
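The empty string appears because `re.findall` returns the captured group (not the whole match) when the pattern contains exactly one group, and the `Pool ID` alternative never sets that group. If you want to keep the parentheses grouped without capturing, a non-capturing group gives the same result as removing them:

    match = re.findall(r"\((?:[A-Z]+)\)|Pool ID", string)

Or, to get the bare names (`RNC`, `BSC`) alongside `Pool ID`, take whichever part matched via `finditer`:

    # group(1) is None for the 'Pool ID' branch, so fall back to the whole match
    match = [m.group(1) or m.group(0) for m in re.finditer(r"\(([A-Z]+)\)|Pool ID", string)]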
Changing variables of imported module doesn't seem to work. [Python] Question: I am trying to keep a dictionary in a separate Python file, so I can import it and do things such as search for a key, add a word, and remove a word. The file is dictionary.py. It contains:

    #-*-coding:utf8-*-
    dict={}

When I import it in my main Python file, add a word, and print it, it works fine, but the dictionary.py file itself is not affected. I want the `dict` variable in the dictionary.py file to change too. Here's the code for my main file:

    import os
    import sys
    import dictionary as tdict

    print ":::::::::::::::::::"
    print "Add word"
    print ":::::::::::::::::::"
    name = raw_input("Word: ")
    print
    mean = raw_input("Meaning: ")
    tdict.dict[name] = mean
    print "::::Word added::::"
    print
    print "::::::::::::::::::::::"
    print "Dictionary:"
    print
    for x in tdict.dict:
        print "", x, ":", tdict.dict[x]

Any help? Thanks. Answer: You are not writing to the file at any point; importing a module only loads it into memory, so changes to `tdict.dict` never reach disk. (As an aside, naming the variable `dict` shadows the built-in `dict` type; a different name is safer.) The usual way to store a Python object such as a dictionary is to use `pickle`:

    import pickle

    myDictionary = {}
    with open('dictionary', 'wb') as dictionaryFile:
        pickle.dump(myDictionary, dictionaryFile)

This will create a pickle file called `dictionary` that contains `myDictionary`. To read the dictionary back:

    import pickle

    with open('dictionary', 'rb') as dictionaryFile:
        myDictionary = pickle.load(dictionaryFile)

Then you can make whatever changes you want to the dictionary and save it to the file again:

    import pickle

    with open('dictionary', 'rb') as dictionaryFile:
        myDictionary = pickle.load(dictionaryFile)

    myDictionary['key1'] = 'value1'

    with open('dictionary', 'wb') as dictionaryFile:
        pickle.dump(myDictionary, dictionaryFile)
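If you would rather keep the saved data human-readable (you mentioned a text file), `json` works the same way for a dictionary of strings. A small sketch; the file name and helper names are my own:

    import json

    def save_words(words, path='dictionary.json'):
        with open(path, 'w') as f:
            json.dump(words, f)

    def load_words(path='dictionary.json'):
        try:
            with open(path) as f:
                return json.load(f)
        except (IOError, ValueError):
            # missing or unreadable file: start with an empty dictionary
            return {}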
Get package version for conda meta.yaml from source file Question: I'm trying to reorganize my python package versioning so I only have to update the version in one place, preferably a python module or a text file. For all the places I need my version there seems to be a way to load it from the source `from mypkg import __version__` or at least parse it out of the file as text. I can't seem to find a way to do it with my conda meta.yaml file though. Is there a way to load the version from an external source in the meta.yaml file? I know there are the git environment variables, but I don't want to tag every alpha/beta/rc commit that gets tested through out local conda repository. I could load the python object using `!!python/object` in pyyaml, but conda doesn't support arbitrary python execution. I don't see a way to do it with any other jinja2 features. I could also write a script to update the version number in more than one place, but I was really hoping to only modify one file as the definitive version number. Thanks for any help. Answer: There are lots of ways to get to your endpoint. Here's what conda itself does... The source of truth for conda's version information is `__version__` in `conda/__init__.py`. It can be loaded programmatically within python code as `from conda import __version__` as you suggest. It's also hard-wired into `setup.py` [here](https://github.com/conda/conda/blob/5a8e020/setup.py#L56) (note [this code](https://github.com/conda/conda/blob/5a8e020/setup.py#L25-L31) too), so from the command line `python setup.py --version` is the canonical way to get that information. In 1.x versions of conda-build, putting a line $PYTHON setup.py --version > __conda_version__.txt in `build.sh` would set the version for the built package using our source of truth. **The`__conda_version__.txt` file is deprecated**, however, and it will likely be removed with the release of conda-build 2.0. In recent versions of conda-build, the preferred way to do this is to use `load_setup_py_data()` within a jinja2 context, which will give you access to all the metadata from `setup.py`. Specifically, in the `meta.yaml` file, we'd have something like this package: name: conda version: "{{ load_setup_py_data().version }}" * * * Now, how the `__version__` variable is set in `conda/__init__.py`... What you [see in the source code](https://github.com/conda/conda/blob/5a8e020/conda/__init__.py#L23) is a call to the [`auxlib.packaging.get_version()`](https://github.com/kalefranz/auxlib/blob/8523d3f/auxlib/packaging.py#L144) function. This function does the following in order 1. look first for a file `conda/.version`, and if found return the contents as the version identifier 2. look next for a `VERSION` environment variable, and if set return the value as the version identifier 3. look last at the `git describe --tags` output, and return a version identifier if possible (must have git installed, must be a git repo, etc etc) 4. if none of the above yield a version identifier, return `None` Now there's just one more final trick. In conda's [`setup.py` file](https://github.com/conda/conda/blob/8523d3f/setup.py#L78-L81), we set `cmdclass` for `build_py` and `sdist` to those provided by `auxlib.packaging`. 
Basically we have from auxlib import packaging setup( cmdclass={ 'build_py': packaging.BuildPyCommand, 'sdist': packaging.SDistCommand, } ) These special command classes actually modify the `conda/__init__.py` file in built/installed packages so the `__version__` variable is hard-coded to a string literal, and doesn't use the `auxlib.packaging.get_version()` function. * * * In your case, with not wanting to tag every release, you could use all of the above, and from the command line set the version using a `VERSION` environment variable. Something like VERSION=1.0.0alpha1 conda build conda.recipe In your `build` section meta.yaml recipe, you'll need add a `script_env` key to tell conda-build to pass the `VERSION` environment variable all the way through to the build environment. build: script_env: - VERSION
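A hand-rolled version of that lookup order is only a few lines if you ever need it outside auxlib. This is just a sketch of the same idea (a `.version` file, then a `VERSION` env var, then `git describe`, then a fallback), not auxlib's actual code:

    import os
    import subprocess

    def get_version(default='0.0.0.dev0'):
        # 1) a checked-in .version file, 2) a VERSION env var, 3) git describe
        if os.path.isfile('.version'):
            with open('.version') as f:
                return f.read().strip()
        if os.environ.get('VERSION'):
            return os.environ['VERSION']
        try:
            out = subprocess.check_output(['git', 'describe', '--tags'])
            return out.decode().strip()
        except (OSError, subprocess.CalledProcessError):
            return default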
How to disable showing visibility symbols in Tagbar for a specific filetype? Question: I want [`g:tagbar_show_visibility`](https://github.com/majutsushi/tagbar/blob/master/doc/tagbar.txt) be set to `'0'` for Python files as there's no public/protected/private in Python. How can I configure Vim this way? Answer: You can customize `ctagsargs` for a particular filetype, making `ctags` not output the 'visibility' information for tags in the first place, e.g.: let g:tagbar_type_python = { \ 'ctagsargs' : '-f - --excmd=pattern --fields=nksSmt' \ } The important bit here is the `--fields` option, which specifies the fields to be included for each tag.
AttributeError: 'Ui_Login' object has no attribute 'GetTable' Question: Im creating a login and signup page that links to other GUI using PyQt. Most of the code works but when i try to sign up a new user, it gives me a **AttributeError: 'Ui_Login' object has no attribute 'GetTable'**. GetTable is defined in the code for the databse created with MySQl whic is called into the Login class and Signup class. Please in new to python and programming in general. Sorry if this question seems daft. but i have read a lot on previously asked ones and i cant seem to understand what it is saying. Thanks # -*- coding: utf-8 -*- # Form implementation generated from reading ui file 'Login.ui' # # Created by: PyQt4 UI code generator 4.11.4 # # WARNING! All changes made in this file will be lost! import sys import DBmanager as db from PyQt4 import QtCore, QtGui from newuser import Ui_Form from createprofile import Ui_StudentLogin try: _fromUtf8 = QtCore.QString.fromUtf8 except AttributeError: def _fromUtf8(s): return s try: _encoding = QtGui.QApplication.UnicodeUTF8 def _translate(context, text, disambig): return QtGui.QApplication.translate(context, text, disambig, _encoding) except AttributeError: def _translate(context, text, disambig): return QtGui.QApplication.translate(context, text, disambig) #######SIGN IN/ LOG IN################################################################################################# class Ui_Login(QtGui.QWidget): def __init__(self): QtGui.QWidget.__init__(self) self.dbu = db.DatabaseUtility('UsernamePassword_DB', 'masterTable') self.setupUi(self) self.confirm = None def setupUi(self, Login): Login.setObjectName(_fromUtf8("Form")) Login.resize(400, 301) self.label = QtGui.QLabel(Login) self.label.setGeometry(QtCore.QRect(60, 60, 71, 21)) font = QtGui.QFont() font.setBold(True) font.setWeight(75) self.label.setFont(font) self.label.setObjectName(_fromUtf8("label")) self.label_2 = QtGui.QLabel(Login) self.label_2.setGeometry(QtCore.QRect(60, 120, 81, 21)) font = QtGui.QFont() font.setBold(True) font.setWeight(75) self.label_2.setFont(font) self.label_2.setObjectName(_fromUtf8("label_2")) self.userName = QtGui.QLineEdit(Login) self.userName.setGeometry(QtCore.QRect(140, 60, 151, 21)) self.userName.setObjectName(_fromUtf8("userName")) self.passWord = QtGui.QLineEdit(Login) self.passWord.setGeometry(QtCore.QRect(140, 120, 151, 21)) self.passWord.setObjectName(_fromUtf8("passWord")) self.label_3 = QtGui.QLabel(Login) self.label_3.setGeometry(QtCore.QRect(160, 10, 131, 21)) font = QtGui.QFont() font.setPointSize(10) font.setBold(True) font.setWeight(75) self.label_3.setFont(font) self.label_3.setObjectName(_fromUtf8("label_3")) self.loginButton1 = QtGui.QPushButton(Login) self.loginButton1.setGeometry(QtCore.QRect(40, 210, 75, 23)) font = QtGui.QFont() font.setBold(True) font.setWeight(75) self.loginButton1.setFont(font) self.loginButton1.setObjectName(_fromUtf8("loginButton1")) self.loginButton1.clicked.connect(self.login_Button1) self.signUpButton = QtGui.QPushButton(Login) self.signUpButton.setGeometry(QtCore.QRect(160, 210, 75, 23)) font = QtGui.QFont() font.setBold(True) font.setWeight(75) self.signUpButton.setFont(font) self.signUpButton.setObjectName(_fromUtf8("signUpButton")) self.signUpButton.clicked.connect(self.signUp_Button) self.cancel1 = QtGui.QPushButton(Login) self.cancel1.setGeometry(QtCore.QRect(280, 210, 75, 23)) font = QtGui.QFont() font.setBold(True) font.setWeight(75) self.cancel1.setFont(font) self.cancel1.setObjectName(_fromUtf8("cancel1")) 
self.connect(self, QtCore.SIGNAL('triggered()'), self.cancel_1) self.retranslateUi(Login) QtCore.QMetaObject.connectSlotsByName(Login) def retranslateUi(self, Login): Login.setWindowTitle(_translate("Form", "Login", None)) self.label.setText(_translate("Form", "USERNAME", None)) self.label_2.setText(_translate("Form", "PASSWORD", None)) self.label_3.setText(_translate("Form", "LOGIN PAGE", None)) self.loginButton1.setText(_translate("Form", "LOGIN", None)) self.signUpButton.setText(_translate("Form", "SIGN UP", None)) self.cancel1.setText(_translate("Form", "CANCEL", None)) @QtCore.pyqtSignature("on_cancel1_clicked()") def cancel_1(self): self.close() @QtCore.pyqtSignature("on_loginButton1_clicked()") def login_Button1(self): username = self.userName.text() password = self.passWord.text() if not username: QtGui.QMessageBox.warning(self, 'Guess What?', 'Username Missing!') elif not password: QtGui.QMessageBox.warning(self, 'Guess What?', 'Password Missing!') else: self.AttemptLogin(username, password) def AttemptLogin(self, username, password): t = self.dbu.GetTable() print (t) for col in t: if username == col[1]: if password == col[2]: QtGui.QMessageBox.information(self, 'WELCOME', 'Success!!') self.createProfilePage() self.close() else: QtGui.QMessageBox.warning(self, 'OOPS SORRY!', 'Password incorrect...') return def createProfilePage(self): self.createprofile = Ui_StudentLogin() self.createprofile.show() @QtCore.pyqtSignature("on_signUpButton_clicked()") def signUp_Button(self): self.newuser = Ui_Form(self) self.newuser.show() #######SIGN UP/ CREATE NEW USER################################################################################################# class Ui_Form(QtGui.QWidget): def __init__(self,dbu): QtGui.QWidget.__init__(self) self.setupUi(self) self.dbu = dbu def setupUi(self, Form): Form.setObjectName(_fromUtf8("Form")) Form.resize(400, 300) self.label = QtGui.QLabel(Form) self.label.setGeometry(QtCore.QRect(60, 70, 51, 16)) font = QtGui.QFont() font.setBold(True) font.setWeight(75) self.label.setFont(font) self.label.setObjectName(_fromUtf8("label")) self.nameGet = QtGui.QLineEdit(Form) self.nameGet.setGeometry(QtCore.QRect(120, 70, 191, 21)) self.nameGet.setObjectName(_fromUtf8("nameGet")) self.label_2 = QtGui.QLabel(Form) self.label_2.setGeometry(QtCore.QRect(50, 120, 46, 13)) font = QtGui.QFont() font.setBold(True) font.setWeight(75) self.label_2.setFont(font) self.label_2.setObjectName(_fromUtf8("label_2")) self.label_3 = QtGui.QLabel(Form) self.label_3.setGeometry(QtCore.QRect(30, 170, 71, 16)) font = QtGui.QFont() font.setBold(True) font.setWeight(75) self.label_3.setFont(font) self.label_3.setObjectName(_fromUtf8("label_3")) self.regNoGet = QtGui.QLineEdit(Form) self.regNoGet.setGeometry(QtCore.QRect(120, 120, 191, 21)) self.regNoGet.setObjectName(_fromUtf8("regNoGet")) self.passWordGet = QtGui.QLineEdit(Form) self.passWordGet.setGeometry(QtCore.QRect(120, 170, 191, 21)) self.passWordGet.setObjectName(_fromUtf8("passWordGet")) self.label_4 = QtGui.QLabel(Form) self.label_4.setGeometry(QtCore.QRect(100, 20, 181, 21)) font = QtGui.QFont() font.setPointSize(10) font.setBold(True) font.setWeight(75) self.label_4.setFont(font) self.label_4.setObjectName(_fromUtf8("label_4")) self.nextButton = QtGui.QPushButton(Form) self.nextButton.setGeometry(QtCore.QRect(140, 250, 75, 23)) font = QtGui.QFont() font.setBold(True) font.setWeight(75) self.nextButton.setFont(font) self.nextButton.setObjectName(_fromUtf8("nextButton")) self.nextButton.clicked.connect(self.next_Button) 
self.cancelButton = QtGui.QPushButton(Form) self.cancelButton.setGeometry(QtCore.QRect(260, 250, 75, 23)) font = QtGui.QFont() font.setBold(True) font.setWeight(75) self.cancelButton.setFont(font) self.cancelButton.setObjectName(_fromUtf8("cancelButton")) self.cancelButton.clicked.connect(self.cancel_Button) self.retranslateUi(Form) QtCore.QMetaObject.connectSlotsByName(Form) def retranslateUi(self, Form): Form.setWindowTitle(_translate("Form", "New User", None)) self.label.setText(_translate("Form", "NAME", None)) self.label_2.setText(_translate("Form", "REG. NO", None)) self.label_3.setText(_translate("Form", "PASSWORD", None)) self.label_4.setText(_translate("Form", " CREATE NEW USER", None)) self.nextButton.setText(_translate("Form", "SUBMIT", None)) self.cancelButton.setText(_translate("Form", "CANCEL", None)) @QtCore.pyqtSignature("on_cancelButton_clicked()") def cancel_Button(self): self.close() @QtCore.pyqtSignature("on_nextButton_clicked()") def next_Button(self): username = self.nameGet.text() password = self.passWordGet.text() regno = self.regNoGet.text() if not username: QtGui.QMessageBox.warning(self, 'Warning', 'Username Missing') elif not password: QtGui.QMessageBox.warning(self, 'Warning!', 'Password Missing') elif not regno: QtGui.QMessageBox.warning(self, 'Warning!', 'RegNo Missing') else: t = self.dbu.GetTable() print (t) for col in t: if username == col[1]: QtGui.QMessageBox.warning(self, 'Dang it!', 'Username Taken. :(') else: self.dbu.AddEntryToTable (username, password, regno) QtGui.QMessageBox.information(self, 'Awesome!!', 'User Added SUCCESSFULLY!') self.close() ## def createProfileWindow(self): ## self.createprofile = Ui_StudentLogin() ## self.createprofile.show() ## ## def generate_report(self): ## data_line1 = self.nameGet.displayText() ## data_line2 = self.regNoGet.displayText() ## data_line3 = self.passWordGet.displayText() ## print data_line1 ## print data_line2 ## print data_line3 ## if __name__ == '__main__': app = QtGui.QApplication(sys.argv) ex = Ui_Login() ex.show() sys.exit(app.exec_()) Answer: Your code expects this line: db.DatabaseUtility('UsernamePassword_DB', 'masterTable') to return an object with a `GetTable` method (and assign that object to `self.dbu`). Therefore when you call `self.dbu.GetTable()` you get an error if whatever was returned from: db.DatabaseUtility('UsernamePassword_DB', 'masterTable') doesn't have that method. So check what: db.DatabaseUtility('UsernamePassword_DB', 'masterTable') actually returns and adjust your code accordingly.
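Judging from the traceback and the code shown, the likely culprit is in `signUp_Button`: it calls `Ui_Form(self)`, passing the `Ui_Login` widget itself, and `Ui_Form.__init__` stores that argument as `self.dbu`. So `self.dbu.GetTable()` inside `next_Button` is really calling `GetTable` on the `Ui_Login` object, which has no such method; that matches the error reported. Passing the database utility through instead should fix it:

    @QtCore.pyqtSignature("on_signUpButton_clicked()")
    def signUp_Button(self):
        self.newuser = Ui_Form(self.dbu)  # pass the DatabaseUtility, not the Ui_Login widget
        self.newuser.show()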
Error running an executable file from Python subprocess
Question: I am trying to run an executable file (a linear programming solver, CLP.exe) from Python 3.5.

    import subprocess
    exeFile = " C:\\MyPath\\CLP.exe"
    arg1 = "C:\\Temp\\LpModel.mps"
    arg2 = "-max"
    arg3 = " -dualSimplex"
    arg4 = " -printi all"
    arg5 = "-solution t solutionFile.txt"
    subprocess.check_output([exeFile, arg1, arg2, arg3, arg4, arg5],
                            stderr=subprocess.STDOUT, shell=False)

When I run the Python file in Eclipse PyDev, I can see the results in the Eclipse console, but no solution results are saved to the file "solutionFile.txt". In the Eclipse console I get:

    b'Coin LP version 1.16, build Dec 25 2015
    command line - C:\\MyPath\\clp.exe C:\\Temp\\LpModel.mps -max -dualSimplex -printi all -solution C:\\Temp\\solution.txt
    At line 1 NAME ClpDefau
    At line 2 ROWS
    At line 5 COLUMNS
    At line 8 RHS
    At line 10 BOUNDS
    At line 13 ENDATA
    Problem ClpDefau has 1 rows, 2 columns and 2 elements
    Model was imported from C:\\Temp\\LpModel.mps in 0.001 seconds
    No match for -max - ? for list of commands
    No match for -dualSimplex - ? for list of commands
    No match for -printi all - ? for list of commands
    No match for -solution C:\\Temp\\solution.txt - ? for list of commands
    Presolve 0 (-1) rows, 0 (-2) columns and 0 (-2) elements
    Empty problem - 0 rows, 0 columns and 0 elements
    Optimal - objective value 4
    After Postsolve, objective 4, infeasibilities - dual 0 (0), primal 0 (0)
    Optimal objective 4 - 0 iterations time 0.002, Presolve 0.00

When I run the command directly from the Windows command line:

    C:\\MyPath\\clp.exe C:\\Temp\\LpModel.mps -max -dualSimplex -printi all -solution C:\\Temp\\solution.txt

I get results in the solution file, and the "No match for ..." lines do not appear in the output. Why is the solution.txt file not created, and why are no solution results saved to it, when I run the command from Python subprocess?
Answer: Every space-separated token needs to be a separate argument in the list you pass to subprocess.check_output. As written, "-printi all" and "-solution t solutionFile.txt" each reach CLP as a single argument containing spaces, which is why it reports "No match" for them:

    exeFile = "C:\\MyPath\\CLP.exe"
    subprocess.check_output([
        exeFile,
        "C:\\Temp\\LpModel.mps",
        "-max",
        "-dualSimplex",
        "-printi", "all",
        "-solution", "t", "solutionFile.txt"],
        stderr=subprocess.STDOUT, shell=False)
sending more than 1000 requests per second in python
Question: I am trying to send requests concurrently to a server and then record the average latency using this code:

    import Queue
    import time
    import threading
    import urllib2

    data = '{"image_1":"abc/xyz.jpg"}'
    headers = {.....}

    def get_url(q, url):
        num = 1
        sum = 0
        while num <= 200:
            start = time.time()
            req = urllib2.Request(url, data, headers)
            response = urllib2.urlopen(req)
            end = time.time()
            print end - start
            num = num + 1
            q.put(response.read())
            sum = sum + (end - start)
        print sum

    theurls = ["http://example.com/example"]

    q = Queue.Queue()

    for u in theurls:
        t = threading.Thread(target=get_url, args=(q, u))
        t.daemon = True
        t.start()

    while True:
        s = q.get()
        print s

This code is working just fine, but now I intend to send more than 1000 requests per second. I came across [this answer](http://stackoverflow.com/a/32029760/5080347) but I am not sure how I would use `grequests` for my case. Some insights would be very helpful. Thanks
Answer: With grequests you do not need to implement threading or queues yourself; grequests handles all of that for you. Build the requests the same way you would with plain requests, but through grequests, and then send them concurrently with grequests.map, as in the sketch below.
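A minimal sketch of that approach, reusing the URL and payload from the question. The header dict and the request count are placeholders, and grequests must be installed first (pip install grequests):

    import grequests

    url = "http://example.com/example"
    data = '{"image_1":"abc/xyz.jpg"}'
    headers = {'Content-Type': 'application/json'}  # assumed header

    # build 1000 unsent requests, then let gevent send them concurrently
    reqs = (grequests.post(url, data=data, headers=headers) for _ in range(1000))
    responses = grequests.map(reqs, size=100)  # at most 100 in flight at once

    for r in responses:
        if r is not None:  # grequests.map returns None for failed requests
            print r.status_code

The size argument bounds the number of simultaneous connections, which is usually what you tune to reach a target request rate without overwhelming the client or the server.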
Run and log the output of 'ping' in a loop in a Python script
Question: I wrote a simple agent in Python. All it does is check for an internet connection. When it finds out that there is no connection, it writes the hour and date to a log file and then just exits the program. I want it to keep testing whether there is a connection, even when there is not. How can I do this without the program exiting? This is the code:

    import os
    import time

    def Main():
        ping = os.system('ping -n 1 -l 1000 8.8.8.8 ')
        while ping == 0:
            time.sleep(4)
            ping = os.system('ping -n 1 -l 1000 8.8.8.8 ')

        if ping == 1:
            print 'no connection'
            CT = time.strftime("%H:%M:%S %d/%m/%y")
            alert = ' No Connection'
            f = open('logfile.txt', 'a+')
            f.write('\n' + CT)
            f.write(alert)
            f.close()

    if __name__ == "__main__":
        Main()

Thanks a lot.
Answer: Wrap the `Main` call in an _infinite loop_?

    if __name__ == "__main__":
        while True:
            Main()
            time.sleep(1)  # optional, as Main already contains a sleep
How to execute one python file with the arguments from another python file without passing arguments on command/terminal line?
Question: I have a default Python script that uses the argument parser, and I have to pass those arguments myself on the command line. But I don't want to pass the arguments myself; rather, I want to execute that file from another Python file with the arguments written in that file, so I don't have to type them on the command line. My argument parsing code is as follows:

    if __name__ == "__main__":
        parser = argparse.ArgumentParser()
        parser.add_argument('-i', type=str, nargs='+',
                            help="Enter the filenames with extension of an Image")
        arg = parser.parse_args()
        if not len(sys.argv) > 1:
            parser.print_help()
            exit()

My function is:

    def Predict_Emotion(filename):
        print "Opening image...."
        try:
            img = io.imread(filename)
            cvimg = cv2.imread(filename)
        except:
            print "Exception: File Not found."
            return

And my execution line is:

    for filename in arg.i:
        Predict_Emotion(filename)

Any help would be appreciated.
Answer: Here is how you do it. You need to slightly modify the above code to form the following:

    import sys
    import argparse

    def run_function(lista):
        parser = argparse.ArgumentParser()
        parser.add_argument('-i', type=str, nargs='+',
                            help="Enter the filenames with extension of an Image")
        arg = parser.parse_args(lista)
        if not lista:
            parser.print_help()
            exit()

    if __name__ == "__main__":
        run_function(sys.argv[1:])

Note that parse_args expects the argument list without the program name, which is why sys.argv[1:] is passed rather than sys.argv. Reference used: <https://docs.python.org/2/library/argparse.html> That will now allow you to give it your own list from another file. To call it from the other file, you need to do the following:

    # These below two lines are not necessary if starting Python in the directory
    # the script is in, or if it is already on your Python path
    import sys
    sys.path.insert(0, "path to directory script is in")

    # Below lines are now necessary
    import your_script  # NOTE this is the name of your file without the .py extension on the end

    list_of_files = ["-i", "file1", "file2"]  # include the "-i" flag the parser expects
    your_script.run_function(list_of_files)

I think that is everything you asked!
Python unicode utf-8 || PRINT prints "wąż" correctly, but in a List1 the same "wąż" string is printed as 'w\xc4\x85\xc5\xbc' Question: I declared the utf-8 encoding and when I `print "wąż"` or other uncommon characters, terminal properly prints out "wąż". But when I have a list with a string "wąż" and print the whole list, I get `'w\xc4\x85\xc5\xbc'`. The code: #!/usr/bin/env python # -*- coding: utf-8 -*- list1 = ['wąż'] But when I print the whole list1: >>>print list1 ['w\xc4\x85\xc5\xbc'] When I print list1[0] or simply print the string "wąż", it prints correctly: >>>print list1[0] >>>print "wąż" wąż wąż * * * _An hour later..._ Okay so I tried to encode the list in utf-8 with `[x.encode('utf-8') for x in list1]`, but this threw me an Error: _UnicodeDecodeError: 'ascii' codec can't decode byte 0xc4 in position 1: ordinal not in range(128)._ So I checked my current encoding with the code below, and turns out I had `ascii`. import sys reload(sys) print sys.getdefaultencoding() So I change the encoding to `utf-8` with: `sys.setdefaultencoding("utf-8")` and it properly prints that I have `utf-8` right now. So again I go with: `>>>[x.encode('utf-8') for x in list1]` `>>>print list1` `['w\xc4\x85\xc5\xbc']` But it changes nothing. It still refuses to display the correct characters. Answer: Try this: >> meh = u'wąż'.encode('utf-8') >> print meh.decode('utf-8') wąż So you basically encode/decode the unicode on the basis of a specified encoding. Its described well here: <https://docs.python.org/2/howto/unicode.html#the- unicode-type>
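A complementary point worth knowing: nothing is actually wrong with the list. Printing a list calls repr() on each element, which shows escape codes, while printing a single string calls str(), which writes the raw bytes to the terminal. A small sketch that makes this visible:

    # -*- coding: utf-8 -*-
    list1 = ['wąż']
    print list1             # repr of each element: ['w\xc4\x85\xc5\xbc']
    print list1[0]          # str of the element: wąż
    print ', '.join(list1)  # join first if you want the whole list as readable text

So the escape codes only appear in the list's debug representation; joining or indexing the list before printing shows the characters as expected.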
Finding Excel cell reference using Python Question: [Here is the Excel file in question:](http://i.stack.imgur.com/fwVen.png) Context: I am writing a program which can pull values from a PDF and put them in the appropriate cell in an Excel file. Question: I want to write a function which takes a column value (e.g. 2014) and a row value (e.g. 'COGS') as arguments and return the cell reference where those two intersect (e.g. 'C3' for 2014 COGS). def find_correct_cell(year=2014, item='COGS'): #do something similar to what the =match function in Excel does return cell_reference #returns 'C3' [I have already tried using openpyxl like this to change the values of some random empty cells where I can store these values:](http://i.stack.imgur.com/8N9RE.png) col_num = '=match(2014, A1:E1)' row_num = '=match("COGS", A1:A5)' But I want to grab those values without having to arbitrarily write to those random empty cells. Plus, even with this method, when I read those cells (F5 and F6) it reads the formulae in those cells and not the face value of 3. Any help is appreciated, thanks. Answer: There are a surprising number of details you need to get right to manipulate Excel files this way with openpyxl. First, it's worth knowing that the xlsx file contains two representations of each cell - the formula, and the current value of the formula. openpyxl can return either, and if you want values you should specify `data_only=True` when you open the file. Also, openpyxl is not able to calculate a new value when you change the formula for a cell - only Excel itself can do that. So inserting a MATCH() worksheet function won't solve your problem. The code below does what you want, mostly in Python. It uses the "A1" reference style, and does some calculations to turn column numbers into column letters. This won't hold up well if you go past column Z. In that case, you may want to switch to numbered references to rows and columns. There's some more info on that [here](http://openpyxl.readthedocs.io/en/default/tutorial.html) and [here](http://stackoverflow.com/questions/12902621/getting-the-row-and-column- numbers-from-coordinate-value-in-openpyxl). But hopefully this will get you on your way. Note: This code assumes you are reading a workbook called 'test.xlsx', and that 'COGS' is in a list of items in 'Sheet1!A2:A5' and 2014 is in a list of years in 'Sheet1!B1:E1'. import openpyxl def get_xlsx_region(xlsx_file, sheet, region): """ Return a rectangular region from the specified file. The data are returned as a list of rows, where each row contains a list of cell values""" # 'data_only=True' tells openpyxl to return values instead of formulas # 'read_only=True' makes openpyxl much faster (fast enough that it # doesn't hurt to open the file once for each region). wb = openpyxl.load_workbook(xlsx_file, data_only=True, read_only=True) reg = wb[sheet][region] return [[cell.value for cell in row] for row in reg] # cache the lists of years and items # get the first (only) row of the 'B1:F1' region years = get_xlsx_region('test.xlsx', 'Sheet1', 'B1:E1')[0] # get the first (only) column of the 'A2:A6' region items = [r[0] for r in get_xlsx_region('test.xlsx', 'Sheet1', 'A2:A5')] def find_correct_cell(year, item): # find the indexes for 'COGS' and 2014 year_col = chr(ord('B') + years.index(year)) # only works in A:Z range item_row = 2 + items.index(item) cell_reference = year_col + str(item_row) return cell_reference print find_correct_cell(year=2014, item='COGS') # C3
How to generate this sequence using python Question: For example if q = 2, then i have to generate all sequence between [1,1] to [2,2]. if q = 3, then generate sequence between [1,1,1] to [3,3,3]. for q = 4, then generate sequence between [1,1,1,1] to [4,4,4,4], etc.. example of sequence . for q = 3 (1, 1, 1) (1, 1, 2) (1, 1, 3) (1, 2, 1) (1, 2, 2) (1, 2, 3) (1, 3, 1) (1, 3, 2) (1, 3, 3) (2, 1, 1) (2, 1, 2) (2, 1, 3) (2, 2, 1) (2, 2, 2) (2, 2, 3) (2, 3, 1) (2, 3, 2) (2, 3, 3) (3, 1, 1) (3, 1, 2) (3, 1, 3) (3, 2, 1) (3, 2, 2) (3, 2, 3) (3, 3, 1) (3, 3, 2) (3, 3, 3) i have tried this "[Python generating all nondecreasing sequences](http://stackoverflow.com/questions/31552101/python-generating-all- nondecreasing-sequences)" but not getting the required output. currently i am using this code, import itertools def generate(q): k = range(1, q+1) * q ola = set(i for i in itertools.permutations(k, q)) for i in sorted(ola): print i generate(3) i need another and good way to generate this sequence. Answer: Use itertools.product with the repeat parameter: q = 2 list(itertools.product(range(1, q + 1), repeat=q)) Out: [(1, 1), (1, 2), (2, 1), (2, 2)] q = 3 list(itertools.product(range(1, q + 1), repeat=q)) Out: [(1, 1, 1), (1, 1, 2), (1, 1, 3), (1, 2, 1), (1, 2, 2), ...
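Wrapped into the question's generate function, that becomes (a sketch):

    import itertools

    def generate(q):
        for seq in itertools.product(range(1, q + 1), repeat=q):
            print(seq)

    generate(3)

Unlike the permutation-plus-set approach, itertools.product yields each tuple exactly once and lazily, so nothing needs to be deduplicated or sorted.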
logged out users are accessing views which logged in users can only access in django Question: I am quite new to Django and came across this error. When ever I input a url directly ( '/accounts/admin2@outlook.com/'), django shows the user the view which only logged in users can see. I am using LoginRequiredMixin but it is not helping. My view file is : ` from django.shortcuts import render,redirect from django.views.generic import View from .forms import UserCreationForm,SignInForm from django.contrib.auth import login,logout,get_backends,authenticate from django.contrib.auth.decorators import login_required from django.contrib.auth.mixins import LoginRequiredMixin from django.utils.decorators import method_decorator from .backend import ClientAuthBackend from .models import MyUser class UserHomeView(LoginRequiredMixin,View): def get(self,request,email): print(request.user.is_authenticated()) return render(request,'user_home_view.html',{'title':'Home','user':MyUser.objects.get(email=email)}) class SignOutView(View): def get(self,request): logout(request) return redirect(to='/accounts/signin/') class SignInView(View): def get(self,request): return render(request,'log_in.html',{'title':'Sign In','form':SignInForm()}) def post(self,request): form = SignInForm(request.POST) if form.is_valid(): email = form.cleaned_data['email'] password = form.cleaned_data['password'] user = authenticate(username=email,password=password) if user is not None: login(request,user) return redirect(to='/accounts/' + str(email) + '/') else: form.add_error(None,"Couldn't authenticate your credentials !") return render(request,'log_in.html',{'title':'Sign In','form':form}) else: return render(request, 'log_in.html', {'title': 'Sign In', 'form': form}) class SignUpView(View): def get(self,request): return render(request,'sign_up.html',{'title':'Sign Up','form':UserCreationForm()}) def post(self,request): form = UserCreationForm(request.POST) try: if form.is_valid(): user = MyUser.objects.create_user(email=form.cleaned_data['email'],date_of_birth= form.cleaned_data['date_of_birth'],first_name=form.cleaned_data['first_name'],last_name= form.cleaned_data['last_name'],password=form.clean_password2()) return redirect(to='/accounts/signin') else: return render(request,'sign_up.html',{'title':'Sign Up','form':form}) except ValueError: form.add_error(None,"Passwords don't match !!!") return render(request, 'sign_up.html', {'title': 'Sign Up', 'form': form}) ` And that print statement in userhomeview also returns True each time a not logged in user accesses the url directly. My url file is : ` from django.conf.urls import url,include from django.contrib import admin from .views import SignUpView,SignInView,SignOutView,UserHomeView urlpatterns = [ url(r'^signup/$',SignUpView.as_view()), url(r'^signin/$',SignInView.as_view()), url(r'^signout/$',SignOutView.as_view()), url(r'^(?P<email>[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+)/',UserHomeView.as_view()), ] ` My settings file is : ` """ Django settings for django_3 project. Generated by 'django-admin startproject' using Django 1.9.8. For more information on this file, see https://docs.djangoproject.com/en/1.9/topics/settings/ For the full list of settings and their values, see https://docs.djangoproject.com/en/1.9/ref/settings/ """ import os # Build paths inside the project like this: os.path.join(BASE_DIR, ...) 
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/1.9/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = 'ac=6)v&jf(90%!op*$ttf29+qw_51n+(5#(jas&f&*(!=q310u' # SECURITY WARNING: don't run with debug turned on in production! DEBUG = True ALLOWED_HOSTS = [] STATIC_URL = '/static/' STATIC_ROOT = '/Users/waqarahmed/Desktop/Python Projects/learning_django/django_3/assets' STATICFILES_DIRS = ( os.path.join( BASE_DIR,'static', ), ) AUTH_USER_MODEL = 'users.MyUser' AUTHENTICATION_BACKENDS = ('django.contrib.auth.backends.ModelBackend','users.backend.ClientAuthBackend') # Application definition INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'users', ] MIDDLEWARE_CLASSES = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] ROOT_URLCONF = 'django_3.urls' TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [os.path.join(BASE_DIR, 'templates')] , 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] WSGI_APPLICATION = 'django_3.wsgi.application' # Database # https://docs.djangoproject.com/en/1.9/ref/settings/#databases DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), } } # Password validation # https://docs.djangoproject.com/en/1.9/ref/settings/#auth-password-validators AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ] # Internationalization # https://docs.djangoproject.com/en/1.9/topics/i18n/ LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = True USE_L10N = True USE_TZ = True # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/1.9/howto/static-files/ STATIC_URL = '/static/' ` My custom backend file is :` from .models import MyUser from django.contrib.auth.backends import ModelBackend class ClientAuthBackend(ModelBackend): def authenticate(self,username=None,password=None): try: user = MyUser.objects.get(email=username) if user.check_password(password): return user else: return None except MyUser.DoesNotExist: return None def get_user(self,email): try: user = MyUser.objects.get(email=email) return user except MyUser.DoesNotExist: return None ` And my model file is : ` from django.db import models from django.contrib.auth.models import ( BaseUserManager,AbstractBaseUser ) import time from django.utils.dateparse import parse_date class MyUserManager(BaseUserManager): def create_user(self, 
email, date_of_birth, first_name, last_name, password=None): if not email: raise ValueError('User must have an email id !') email = str(email).lower() date_of_birth = str(date_of_birth) user = self.model( email = self.normalize_email(email=email), date_of_birth = parse_date(date_of_birth), first_name = first_name, last_name = last_name, join_date = time.strftime('%Y-%m-%d'), ) user.set_password(password) user.save() return user def create_superuser(self, email, date_of_birth, first_name, last_name, password=None): if not email: raise ValueError('User must have an email id !') user = self.model( email = self.normalize_email(email=email), date_of_birth = date_of_birth, first_name = first_name, last_name = last_name, join_date = time.strftime('%Y-%m-%d'), ) user.is_admin = True user.set_password(password) user.save() return user class MyUser(AbstractBaseUser): email = models.EmailField(verbose_name='email address',max_length=255,unique=True) first_name = models.CharField(max_length=255) last_name = models.CharField(max_length=255) join_date = models.DateField(auto_now_add=True) date_of_birth = models.DateField() is_active = models.BooleanField(default=True) is_admin = models.BooleanField(default=False) objects = MyUserManager() USERNAME_FIELD = 'email' REQUIRED_FIELDS = ['first_name','last_name','date_of_birth'] def get_full_name(self): return self.email def get_short_name(self): return self.email def __str__(self): return self.email def has_perm(self, perm, obj=None): return True def has_module_perms(self, app_label): return True @property def is_staff(self): return self.is_admin ` Answer: Please correct following things first. * Whenever you are using class based view you must use request object via `self`. * Check auth use with the following`self.request.user.is_authenticated()`(It will return the what request does have) * If you want to use an automated way to check if a request is from an authenticated user you must use following middelwares `django.contrib.auth.middleware.AuthenticationMiddleware` `django.contrib.auth.middleware.RemoteUserMiddleware`(add thes two in settings installed_apps) with following decorator `from django.contrib.auth.decorators import login_required`. Add `@login_required` above the view.
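Since `@login_required` is written for function views, the usual way to attach it to a class-based view like the question's UserHomeView is via method_decorator on dispatch(). A minimal sketch, where the login_url matches the sign-in route from the question's urls.py:

    from django.contrib.auth.decorators import login_required
    from django.utils.decorators import method_decorator
    from django.views.generic import View

    class UserHomeView(View):
        @method_decorator(login_required(login_url='/accounts/signin/'))
        def dispatch(self, request, *args, **kwargs):
            return super(UserHomeView, self).dispatch(request, *args, **kwargs)

        def get(self, request, email):
            # unchanged body from the question goes here
            ...

With this in place, anonymous requests to /accounts/<email>/ are redirected to the sign-in page instead of reaching get().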
OCaml open only certain values/types from module Question: Does OCaml have an equivalent (possibly involving a camlp4 directive) of `from module import value1, value2` in Python or `use Module qw[value1 value2];` in Perl ? I'd like to be able to write something like `open Ctypes (@->), string;;` or `open Ctypes ((@->), string);;` instead of let (@->) = Ctypes.(@->);; let string = Ctypes.string;; Answer: The closest is: let value1, value2 = Module.(value1, value2) For this very reason, open statements are most of the time evil (especially at the top level).
Python Pandas Dataframe Append Rows
Question: I'm trying to append the dataframe values as rows but it's appending them as columns. I have 32 files from which I would like to take the second column (called dataset_code) and append it. But it's creating 32 rows and 101 columns. I would like 1 column and 3232 rows.

    import pandas as pd
    import os

    source_directory = r'file_path'

    df_combined = pd.DataFrame(columns=["dataset_code"])

    for file in os.listdir(source_directory):
        if file.endswith(".csv"):
            # Read the new CSV to a dataframe.
            df = pd.read_csv(source_directory + '\\' + file)
            df = df["dataset_code"]
            df_combined = df_combined.append(df)

    print(df_combined)

Answer: You already have two perfectly good answers, but let me make a couple of recommendations.

1. If you only want the `dataset_code` column, tell `pd.read_csv` directly (`usecols=['dataset_code']`) instead of loading the whole file into memory only to subset the dataframe immediately.

2. Instead of appending to an initially-empty dataframe, collect a list of dataframes and concatenate them in one fell swoop at the end. Appending rows to a pandas `DataFrame` is costly (it has to create a whole new one), so your approach creates 65 `DataFrame`s: one at the beginning, one when reading each file, one when appending each of the latter -- maybe even 32 more, with the subsetting. The approach I am proposing only creates 33 of them, and is the common idiom for this kind of importing.

Here is the code:

    import os
    import pandas as pd

    source_directory = r'file_path'

    dfs = []
    for file in os.listdir(source_directory):
        if file.endswith(".csv"):
            df = pd.read_csv(os.path.join(source_directory, file),
                             usecols=['dataset_code'])
            dfs.append(df)

    df_combined = pd.concat(dfs)
Why am I getting the error message name 'datestr' is not defined? Python 2.7 Question: So datestr() is supposed to convert a number into a date. But I keep getting this Name error message. Am I not loading the correct module. I have searched the Matplotlib documenation but do not see any specific module that must be imported. import matplotlib.pyplot as plt from matplotlib.dates import DateFormatter, WeekdayLocator,\ DayLocator, MONDAY from matplotlib.finance import quotes_historical_yahoo_ohlc, candlestick_ohlc import pandas as pd import datetime import pandas.io.data as web from datetime import date import matplotlib date = 731613 print datestr(date) #NameError: name 'datestr' is not defined Answer: It looks like you want the function `mathplotlib.dates.num2date()`. From there you can convert to a string with `str()` or `strftime()`: >>> from matplotlib.dates import num2date >>> num2date(731613) datetime.datetime(2004, 2, 2, 0, 0, tzinfo=<matplotlib.dates._UTC object at 0x7f64861fa5d0>) >>> print(num2date(731613)) 2004-02-02 00:00:00+00:00 >>> str(num2date(731613)) '2004-02-02 00:00:00+00:00'
python script that imports matplotlib succeeds but frozen binary of script fails
Question: My script needed to import `numpy`, `sklearn`, and `matplotlib` but I couldn't install sklearn. A very helpful response to my question <http://stackoverflow.com/questions/38733220/difference-between-scikit-learn-and-sklearn> explained that I needed to reinstall numpy. Using pip to update numpy failed because OS X 10.11 SIP prevented uninstalling the current numpy. The very helpful answer to a question about pip and SIP by mfripp <http://apple.stackexchange.com/questions/209572/how-to-use-pip-after-the-os-x-el-capitan-upgrade> provided a detailed solution to the problem. I followed those instructions exactly and used pip to reinstall numpy, matplotlib, scipy and sklearn for all users. When I ran my completed script using the command

    python DistMatPlot.py Random10A.matrix Random10A.pdf

the script ran perfectly, writing all expected output files. However, I always saw:

    /Library/Python/2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
      warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')

which I had never seen with other matplotlib scripts before updating numpy, matplotlib, etc. The 2 second delay was only mildly annoying. I compiled a frozen binary using pyinstaller, and during the compiling I got several messages similar to that above. Running the resulting frozen binary with the command

    ./DistMatPlot Random10A.matrix Random10A.pdf

produced the following:

    /var/folders/8x/7_zp_33h8xj6td0059b72p9h0000gp/T/_MEIhIysTV/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
    Traceback (most recent call last):
      File "<string>", line 13, in <module>
      File "/Library/Python/2.7/site-packages/PyInstaller/loader/pyimod03_importers.py", line 389, in load_module
        exec(bytecode, module.__dict__)
      File "matplotlib/pyplot.py", line 114, in <module>
      File "matplotlib/backends/__init__.py", line 32, in pylab_setup
      File "/Library/Python/2.7/site-packages/PyInstaller/loader/pyimod03_importers.py", line 389, in load_module
        exec(bytecode, module.__dict__)
      File "matplotlib/backends/backend_macosx.py", line 24, in <module>
      File "/Library/Python/2.7/site-packages/PyInstaller/loader/pyimod03_importers.py", line 546, in load_module
        module = imp.load_module(fullname, fp, filename, ext_tuple)
    RuntimeError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends. If you are working with Matplotlib in a virtual environment see 'Working with Matplotlib in Virtual environments' in the Matplotlib FAQ
    DistMatPlot returned -1

I have looked at similar questions and tried their suggested solutions to no avail. (1) Why does matplotlib need to rebuild the font cache each time it is run? (2) Why does the frozen binary fail when the script itself succeeds? Do I need some additional option other than -F when running pyinstaller?
Answer: The issue with the frozen binary turned out to be "RuntimeError: Python is not installed as a framework." Several posts discussed that issue and suggested adding these two lines before "import matplotlib.pyplot as plt":

    import matplotlib
    matplotlib.use('TkAgg')

That did not work, but this slight modification did work:

    import matplotlib
    matplotlib.use('Agg')

Note that matplotlib.use() only takes effect if it is called before pyplot is imported for the first time. I suspect that 'Agg' may be specific to either OS X or to the version of Python that is included with OS X 10.11.
Run python-rq worker process on application start
Question: I hosted my Django app on Heroku, but due to a few limitations I moved from Heroku to a cloud-based server. I followed this [tutorial](https://devcenter.heroku.com/articles/python-rq) on running background tasks in Python. Everything is running fine except that I have to manually run `python worker.py` to start the worker process. On Heroku we can use a Procfile to run processes when the app starts, but now I am on a cloud-based server running Ubuntu 14.04. So what is the alternative to a Procfile?

**worker.py**

    import os

    import redis
    from rq import Worker, Queue, Connection

    listen = ['high', 'default', 'low']

    redis_url = os.getenv('REDISTOGO_URL', 'redis://localhost:6379')

    conn = redis.from_url(redis_url)

    if __name__ == '__main__':
        with Connection(conn):
            worker = Worker(map(Queue, listen))
            worker.work()

Answer: I ended up using upstart. I created a new config file `rqworker.conf` using `sudo nano /etc/init/rqworker.conf` with the following contents:

    description "Job queues for directory"

    start on runlevel [2345]
    stop on runlevel [!2345]

    respawn
    setuid myuser
    setgid www-data

    exec python3.5 worker.py

Then I just started the service with `sudo service rqworker start` and now my worker processes are running in the background.
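For systems that use supervisor instead of upstart (or newer Ubuntu releases that have moved away from upstart), an equivalent supervisor program entry might look like this; the program name, paths, and user below are placeholders:

    [program:rqworker]
    command=python3.5 /path/to/worker.py
    directory=/path/to/app
    user=myuser
    autostart=true
    autorestart=true
    redirect_stderr=true

After adding it, reload supervisor with `sudo supervisorctl reread` followed by `sudo supervisorctl update`.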
Get the Key and Value of a dictionary while using random Question: I'm new to python and I'm trying to make a simple "Game" I'm trying to get the health of the monsters but only the names come up. I can do `(random.choice(list(Monster.health.keys())))` and `(random.choice(list(Monster.health.Values())))` to store both the name and health in a variable but I'm using `random` so the name might say 'goblin' but the health may say '50' which is wrong. How can I store the variable in `random_mon_name_health` and access both 'keys' and 'value' class Monster: health = {'goblin': 15, 'giant': 50} class EncounterM(Monster): random_mon_name_health = '' def basic_monster(self): import random self.random_mon_name_health = (random.choice(list(Monster.health.keys()))) def test(self): print(self.random_mon_name_health) beta_user = EncounterM() beta_user.basic_monster() beta_user.test() # This is only a section of the game that I'm having trouble with Answer: You only need to call `choice` once, like so: class Monster: health = {'goblin': 15, 'giant': 50} class EncounterM(Monster): def __init__(self): self.random_mon_name = '' self.random_mon_health = 0 def basic_monster(self): import random self.random_mon_name, self.random_mon_health = \ random.choice(list(Monster.health.items())) def test(self): print(self.random_mon_name, self.random_mon_health) beta_user = EncounterM() beta_user.basic_monster() beta_user.test()
(Python code not working in loop) pandas.DataFrame.apply() not working in loop Question: I have a piece of code which works fine alone, but when I put it in loop (or use `df.apply()` method), it does not work. The code is: import pandas as pd from functools import partial datadf=pd.DataFrame(data,columns=['X1','X2']) for i in datadf.index.values.tolist(): row=datadf.loc[i] x1=row['X1'] x2=row['X2'] set1=set([x1,x2]) links=data2[data2['Xset']==set1] df1=pd.DataFrame(range(1,11),columns=['year']) def idlist1(row,var1): year=row['year'] id1a=links[(links['xx1']==var1) & (links['year']==year)] id1a=id1a['id1'].values.tolist() id1b=links[(links['xx2']==var1) & (links['year']==year)] id1b=id1b['id2'].values.tolist() id1=list(set(id1a+id1b)) return id1 df1['id1a']=df1.apply(partial(idlist1,var1=x1),axis=1) #...(do other stuffs to return a value using "df1") del df1 Here `data2` is another dataframe. Here I'm trying to match the values of `(x1,x2)` to `data2`. The code works fine outside the loop by which I mean, I specify `(x1,x2)` directly. But when I put the code in the loop or use `df.apply`, I always get the error message ValueError: could not broadcast input array from shape (0) into shape (1) I don't understand why. Could someone help? Thanks! (BTW, the version of `pandas` is `0.18.0`.) The full error message is: File "<ipython-input-229-541c0f3a4d2f>", line 19, in <module> df1['id1a']=df1.apply(partial(idlist1,var1=x1),axis=1) File "/anaconda2/lib/python2.7/site-packages/pandas/core/frame.py", line 4042, in apply return self._apply_standard(f, axis, reduce=reduce) File "/anaconda2/lib/python2.7/site-packages/pandas/core/frame.py", line 4155, in _apply_standard result = self._constructor(data=results, index=index) File "/anaconda2/lib/python2.7/site-packages/pandas/core/frame.py", line 223, in __init__ mgr = self._init_dict(data, index, columns, dtype=dtype) File "/anaconda2/lib/python2.7/site-packages/pandas/core/frame.py", line 359, in _init_dict return _arrays_to_mgr(arrays, data_names, index, columns, dtype=dtype) File "/anaconda2/lib/python2.7/site-packages/pandas/core/frame.py", line 5250, in _arrays_to_mgr return create_block_manager_from_arrays(arrays, arr_names, axes) File "/anaconda2/lib/python2.7/site-packages/pandas/core/internals.py", line 3933, in create_block_manager_from_arrays construction_error(len(arrays), arrays[0].shape, axes, e) File "/anaconda2/lib/python2.7/site-packages/pandas/core/internals.py", line 3895, in construction_error raise e ValueError: could not broadcast input array from shape (0) into shape (1) **Update** : I found out the `df.apply` method somehow is not compatible with the loop, so I converted all the `apply`'s in the loop to loops, and the code works fine now. Although I "sort of" solved the issue, but I'm still very confused about why this would happen. If anyone knows why, I'd really appreciate the answer. Thanks! Answer: Probably because there're multiple definitions of `row`, one as an argument of the function `def idlist1(row,var1):` and one defined as `row=datadf.loc[i]`, you can try to rename one and see if it helps.
How to remove all special characters except spaces and dashes from a Python string?
Question: I want to strip all special characters from a Python string, except dashes and spaces. Is this correct?

    import re
    my_string = "Web's GReat thing-ok"
    pattern = re.compile('[^A-Za-z0-9 -]')
    new_string = pattern.sub('', my_string)
    new_string
    >> 'Webs GReat thing-ok'

    # then make it lowercase and replace spaces with underscores
    # new_string = new_string.lower().replace(" ", "_")
    # new_string
    # >> 'webs_great_thing-ok'

As shown, I ultimately want to replace the spaces with underscores after removing the other special characters, but figured I would do it in stages. Is there a Pythonic way to do it all in one fell swoop? For context, I am using this input for MongoDB collection names, so I want the constraint on the final string to be: alphanumeric, with dashes and underscores allowed.
Answer: You are actually trying to "slugify" your string. If you don't mind using a 3rd party (and a Python 2 specific) library you can use `slugify` (`pip install slugify`):

    import slugify
    string = "Web's GReat thing-ok"
    print slugify.slugify(string)
    >> 'webs_great_thing-ok'

You can implement it yourself. All of `slugify`'s code is:

    import re
    import unicodedata

    def slugify(string):
        return re.sub(r'[-\s]+', '-',
                      unicode(
                          re.sub(r'[^\w\s-]', '',
                                 unicodedata.normalize('NFKD', string)
                                 .encode('ascii', 'ignore'))
                          .strip()
                          .lower()))

Note that this is Python 2 specific. Going back to your example, you can make it a one-liner. Whether it is Pythonic enough is up to you to decide:

    import re
    my_string = "Web's GReat thing-ok"
    new_string = re.sub('[^A-Za-z0-9 -]', '', my_string).lower().replace(" ", "_")

(Avoid the tempting shorter range `[A-z]`: it also matches the ASCII characters between 'Z' and 'a', such as '[', '^', '_' and the backtick.)

**UPDATE** There seems to be a more robust and Python 3 compatible "slugify" library [here](https://pypi.python.org/pypi/awesome-slugify/1.6.5).
GAE python: how to use delete_serving_url Question: 1. First I put image to storage: import cloudstorage as gcs ... path = '/bucket/folder/image.jpg' with gcs.open(path, 'w') as f: f.write(data) 2. Then I get serving url: url = images.get_serving_url(None, filename='/gs{}'.format(self.path), secure_url=True) Serving url generally works as expected, the thing is I'm not using blob_key, only filename (path in storage). 3. I wonder how to delete serving_url now, since sdk method only accepts blob_key def delete_serving_url(blob_key, rpc=None): """Delete a serving url that was created for a blob_key using get_serving_url. Args: blob_key: BlobKey, BlobInfo, str, or unicode representation of BlobKey of blob that has an existing URL to delete. rpc: Optional UserRPC object. Raises: BlobKeyRequiredError: when no blobkey was specified. InvalidBlobKeyError: the blob_key supplied was invalid. Error: There was a generic error deleting the serving url. """ <https://cloud.google.com/appengine/docs/python/refdocs/google.appengine.api.images#google.appengine.api.images.delete_serving_url> Answer: The [Using the Blobstore API with Google Cloud Storage](https://cloud.google.com/appengine/docs/python/blobstore/#Python_Using_the_Blobstore_API_with_Google_Cloud_Storage) example shows how to obtain an equivalent blob_key for GCS: blob_key = CreateFile(main.BUCKET + '/blobstore_serving_demo') From that link: > **Note:** Once you obtain a **blobKey** for the Google Cloud Storage object, > you can pass it around, serialize it, and otherwise use it interchangeably > anywhere you can use a **blobKey** for objects stored in Blobstore. This > allows for usage where an app stores some data in blobstore and some in > Google Cloud Storage, but treats the data otherwise identically by the rest > of the app. (However, **BlobInfo** objects are not available for Google > Cloud Storage objects.) So you should be able to generate a blobKey for your file and call `get_serving_url` and `delete_serving_url` with it. You could also use GCS object premissions to prevent access to the file, see [Setting object permissions and metadata](https://cloud.google.com/storage/docs/cloud-console#_permissions).
Reduce running time in Texture analysis using GLCM [Python] Question: I am working on 6641x2720 image to generate its feature images (Haralick features like contrast, second moment etc) using a moving GLCM(Grey level Co- occurrence matrix ) window. But it takes forever to run. **The code works fine, as I have tested it on smaller images.** But, I need to make it run faster. Reducing the dimensions to 25% (1661x680) it takes **30 minutes** to run. How can I make it run faster ? Here's the code: from skimage.feature import greycomatrix, greycoprops import matplotlib.pyplot as plt import numpy as np from PIL import Image import time start_time = time.time() img = Image.open('/home/student/python/test50.jpg').convert('L') y=np.asarray(img, dtype=np.uint8) #plt.imshow(y, cmap = plt.get_cmap('gray'), vmin = 0, vmax = 255) contrast = np.zeros((y.shape[0], y.shape[1]), dtype = float) for i in range(0,y.shape[0]): for j in range(0,y.shape[1]): if i < 2 or i > (y.shape[0]-3) or j < 2 or j > (y.shape[1]-3): continue else: s = y[(i-2):(i+3), (j-2):(j+3)] glcm = greycomatrix(s, [1], [0], symmetric = True, normed = True ) contrast[i,j] = greycoprops(glcm, 'contrast') print("--- %s seconds ---" % (time.time() - start_time)) plt.imshow(contrast, cmap = plt.get_cmap('gray'), vmin = 0, vmax = 255) Answer: Fill a GLCM is a linear operation: you just go through all the pixels on your image/window and you fill the matching matrix case. Your issue is that you perform the operation for each pixel, and not just for an image. So in your case, if the image dimensions are Width x Height and the window dimensions are NxN, then the total complexity is Width x Height x (NxN + FeaturesComplexity), which is really bad. There is a much faster solution, but it's trickier to implement. The goal is to reduce the matrix filling operations. The idea is to work row by row with a Forward Front and a Backward Front (principle already use to get fast mathematical morphology operators, see [here](https://hal.archives- ouvertes.fr/hal-00692897/document) and [here](https://www.lrde.epita.fr/~theo/papers/geraud.2010.book.pdf)). When you fill the matrix for two consecutive pixels, you reuse most of the pixels, in fact only the ones on the left and right are different, so the backward front and forward front respectively. Here is an illustration for a GLCM window of dimensions 3x3: > x1 x2 x3 x4 > > x5 p1 p2 x6 > > x7 x8 x9 x10 When the window is centered on p1, you use the pixels: x1, x2, x3, x5, p2, x7, x8, x9. When the window is centered on p2, you use the pixels: x2, x3, 4, p1, x6, x8, x9, x10. So for p1, you use x1, x5 and x7, but you don't use them for p2, but all the other pixels are the same. The idea of the algorithm is to compute the matrix normally for p1, but when you move to p2, you remove the backward front (x1, x2, x5) and you add the forward front (x4, x6, x10). This reduces dramatically the computation time (linear instead of quadratic for mathematical morphology operations). Here is the algorithm: 1. For each row: 2. \----- Fill the matrix (as usually) for the first pixel in the row and you compute the features 3. \----- For each of the following pixels 4. \----- ----- Add the forward front (new pixels in the window) 5. \----- ----- Remove the backward front (pixels no longer in the window) 6. \----- ----- Compute the features
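The algorithm above removes the per-pixel GLCM rebuilding entirely. For the specific 'contrast' feature there is an even shorter route, because contrast with normed=True is just the mean of (i - j)^2 over the co-occurring grey-level pairs in the window, and symmetrising the matrix does not change it since (a-b)^2 == (b-a)^2. So it can be computed for every pixel at once with a uniform filter. A sketch of that idea, with the window size and offset (distance 1, angle 0) matching the question; note the output has one column fewer than the input and is shifted by half a pixel relative to the loop version:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def fast_glcm_contrast(y, win=5):
        # squared grey-level difference of each horizontal neighbour pair
        d = (y[:, :-1].astype(np.float64) - y[:, 1:].astype(np.float64)) ** 2
        # averaging those squared differences over a win x (win - 1) block
        # gives the normed GLCM contrast for the corresponding window
        return uniform_filter(d, size=(win, win - 1))

This replaces per-pixel GLCM construction with a couple of vectorised passes over the image, which is what makes large images like 6641x2720 tractable.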
Find channel name of message with SlackClient Question: I am trying to print the channel a message was posted to in slack with the python SlackClient. After running this code I only get an ID and not the channel name. import time import os from slackclient import SlackClient BOT_TOKEN = os.environ.get('SLACK_BOT_TOKEN') def main(): # Creates a slackclient instance with bots token sc = SlackClient(BOT_TOKEN) #Connect to slack if sc.rtm_connect(): print "connected" while True: # Read latest messages for slack_message in sc.rtm_read(): message = slack_message.get("text") print message channel = slack_message.get("channel") print channels time.sleep(1) if __name__ == '__main__': main() This is the output: test U1K78788H Answer: I'm not sure what you are outputting. Shouldn't "channels" be "channel" ? Also, I think this output is the "user" field. The "Channel" field should yield an id starting with C or G ([doc](https://api.slack.com/events/message)). { "type": "message", "channel": "C2147483705", "user": "U2147483697", "text": "Hello world", "ts": "1355517523.000005" } Then, use either the python client to retrieve the channel name, if it stores it (I don't know the Python client), or use the web API method [channels.info](https://api.slack.com/methods/channels.info) to retrieve the channel name.
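With the same SlackClient instance, the channel name can then be looked up from the id via the web API. A sketch for public channels (ids starting with C); a group or DM id starting with G or D would need groups.info or im.list instead:

    channel_id = slack_message.get("channel")
    info = sc.api_call("channels.info", channel=channel_id)
    if info.get("ok"):
        print info["channel"]["name"]

Since channel ids are stable, it is worth caching the id-to-name mapping in a dict instead of calling channels.info for every message.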
IndexError: index out of range in python
Question: I am trying to solve the following question on a competitive coding website, where I have to convert '->' to '.' only in the code sections but not in the comments: <https://www.hackerearth.com/problem/algorithm/compiler-version-2/> I have tried to write a solution, but every time I run it, it gives me an IndexError message. Some help is much appreciated. Below is my solution:

    import copy

    temp_list = []
    while True:
        string = input()
        if string != "":
            temp_list.append(string)
            string = None
        else:
            break

    for i in range(len(temp_list)):
        j = 0
        while j <= (len(temp_list[i]) - 2):
            if string[i][j] == '-' and string[i][j + 1] == '>':
                #print("Hello WOrld")
                temp_string = string[i][:j] + '.' + string[i][j + 2:]
                string[i] = copy.deepcopy(temp_string)
            elif string[i][j] == '/' and string[i][j + 1] == '/':
                #print("Break")
                break
            else:
                #print(j)
                j += 1

    for i in temp_list:
        print(i)

Answer:

1. `if string` is the same as `if string != ""`

2. `temp_list` is a list, so when you only read it you can loop over it directly (`for line in temp_list`); here you modify entries in place, so keep an index.

3. `string` is a variable of type `str`, so you can't index it like this: `string[i][j]` (I guess you wanted to use `temp_list` in those cases). Also, strings are immutable in Python, so the `copy.deepcopy` is unnecessary; plain assignment works.

Something like this below should work:

    temp_list = []
    while True:
        string = raw_input()
        if string:
            temp_list.append(string)
            string = None
        else:
            break

    for i in range(len(temp_list)):
        j = 0
        while j <= (len(temp_list[i]) - 2):
            if temp_list[i][j] == '-' and temp_list[i][j + 1] == '>':
                temp_list[i] = temp_list[i][:j] + '.' + temp_list[i][j + 2:]
            elif temp_list[i][j] == '/' and temp_list[i][j + 1] == '/':
                break
            else:
                j += 1

    for i in temp_list:
        print(i)
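For what it's worth, the whole transformation can also be written without any index bookkeeping, by splitting each line at the first `//` and only rewriting the code part. A sketch, assuming `//` is the only comment marker, as in the linked problem:

    result = []
    for line in temp_list:
        code, sep, comment = line.partition('//')
        result.append(code.replace('->', '.') + sep + comment)

    for line in result:
        print(line)

str.partition returns the text before the separator, the separator itself (empty if absent), and the rest, so lines without comments pass through replace unchanged.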
In Python, use a method both as instance and class method Question: I'm writing a program which plays Tic Tac Toe and has various versions of `ComputerPlayer`, such as the `RandomPlayer` and `THandPlayer`: class RandomPlayer(ComputerPlayer): def __init__(self, mark): super(RandomPlayer, self).__init__(mark=mark) def get_move(self, board): moves = board.available_moves() if moves: # If "moves" is not an empty list (as it would be if cat's game were reached) return moves[np.random.choice(len(moves))] # Apply random select to the index, as otherwise it will be seen as a 2D array class THandPlayer(ComputerPlayer): def __init__(self, mark): super(THandPlayer, self).__init__(mark=mark) def get_move(self, board): moves = board.available_moves() if moves: # If "moves" is not an empty list (as it would be if cat's game were reached) for move in moves: if board.get_next_board(move, self.mark).winner() == self.mark: # Make winning move (if possible) return move elif board.get_next_board(move, self.opponent_mark).winner() == self.opponent_mark: # Block opponent's winning move return move else: # return moves[np.random.choice(len(moves))] # This is a repetition of the code in RandomPlayer and is not DRY randomplayer = RandomPlayer(mark=self.mark) return randomplayer.get_move(board) # return RandomPlayer.get_move(board) # This returns an error as "get_move" is an instance method The `THandPlayer` also selects moves at random if no winning move can be made or an opponent's winning move blocked. Right now I am doing this by creating an instance of `RandomPlayer` and calling `get_move` on it. This could be made more succinct, however, if `get_move` could be made such that it can be interpreted both as a class method and an instance method. Is this possible? **EDIT** To simplify the question, suppose we have two classes, `RandomPlayer` and `OtherPlayer`, both which have an instance method `get_move`: import numpy as np class RandomPlayer: def get_move(self, arr): return np.random.choice(arr) class OtherPlayer: def get_move(self, arr): if max(arr) > 5: return max(arr) else: randomplayer=RandomPlayer() return randomplayer.get_move(arr) arr = np.arange(4) otherplayer = OtherPlayer() print otherplayer.get_move(arr) Is it possible to use `RandomPlayer`'s `get_move` method in `OtherPlayer` without creating an instance of `RandomPlayer`? Answer: It sounds like you're looking for a [`staticmethod`](https://docs.python.org/3.5/library/functions.html#staticmethod), which has access to neither `cls` nor `self` but can be accessed via either: >>> class Foo: ... @staticmethod ... def bar(): ... print('baz') ... >>> Foo.bar() baz >>> Foo().bar() baz
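Applied to the simplified example from the question, RandomPlayer's get_move stops taking self and can then be called straight off the class (a sketch):

    import numpy as np

    class RandomPlayer:
        @staticmethod
        def get_move(arr):
            return np.random.choice(arr)

    class OtherPlayer:
        def get_move(self, arr):
            if max(arr) > 5:
                return max(arr)
            # no RandomPlayer instance needed any more
            return RandomPlayer.get_move(arr)

    arr = np.arange(4)
    print OtherPlayer().get_move(arr)

The same trick works for the game's THandPlayer falling back to RandomPlayer's move logic, as long as that logic does not depend on instance state such as the player's mark.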
Get counts of specific values within a nested Python dictionary Question: I have a giant nested dictionary (6k records) that I need to sort and count based on two values within my second dict. item_dict = { 64762.0: { 'In Sheet': 'No', 'Paid': Y, 'Region': "AMER'", 'Matrix Position': 'Check' }, 130301.0: { 'Paid': N, 'Region': "AMER'", 'Matrix Position': 'Calculate' }, 13111.0: { 'In Sheet': 'Yes', 'Region': "EMEA'", 'Matrix Position': 'Check' }, 130321.0: { 'Matrix Position': 'Enhance', 'In Sheet': 'No', 'Paid': Y, 'Region': "JP'" } } So, I need to get counts between regions and Matrix positions. So, I'd wind up with: Amer and Calculate: 1 EMEA and Calculate: 0 EMEA and Check= 1 AMER and Check= 1 EMEA and Enhance= 0 JP and Check=0 Et cetera. The thing is, the full data set has 5 regions with 4 potential matrix positions. Is the best way to do this by using a for loop to search for each potential combination, then adding that to its own list? AmerCalculate=[] for row in item_dict: if item_dict[row]['Region'] == "AMER'" and item_dict[row]['Matrix Position'] == "Calculate": AmerCalculate.append(row) Then, to get the lengths, do len(AmerCalculate)? Is there a more elegant way of doing this so I don't have to manually type out all 20 combinations? Answer: Use another dictionary to couple that data set together, from there you can generate the output you're looking for: def dict_counter(dict_arg): d = {'AMER':[],'EMEA':[],'JP':[]} # Regions as keys. for int_key in dict_arg: sub_dict = dict_arg[int_key] for key, value in sub_dict.items(): if value in d: d[value].append(sub_dict['Matrix Position']) return d **Sample Output:** >>> item_dict= {12.0: {'In Sheet': 'No', 'Paid': 'Y', 'Region': "AMER", 'Matrix Position': 'Enhance'},1232.0: {'In Sheet': 'No', 'Paid': 'Y', 'Region': "AMER", 'Matrix Position': 'Check'}, 64762.0: {'In Sheet': 'No', 'Paid': 'Y', 'Region': "AMER", 'Matrix Position': 'Check'}, 130301.0: {'Paid': 'N', 'Region': "AMER", 'Matrix Position': 'Calculate'}, 13111.0: {'In Sheet': 'Yes', 'Region': "EMEA", 'Matrix Position': 'Check'}, 130321.0: {'Matrix Position': 'Enhance','In Sheet': 'No', 'Paid': 'Y', 'Region': "JP"}} >>> print dict_counter(item_dict) {'JP': ['Enhance'], 'AMER': ['Check', 'Calculate'], 'EMEA': ['Check']} We now have the _basis_ to generate the report you're looking for. We can use `Counter` to get a count of all position instances. Here's an example of how we could go about checking for counts in the `list` mapped value. from collections import Counter d = dict_counter(item_dict) for k, v in d.items(): for i, j in Counter(v).items(): print k,'and',i,'=',j >>> JP and Enhance = 1 >>> AMER and Enhance = 1 >>> AMER and Check = 2 >>> AMER and Calculate = 1 >>> EMEA and Check = 1
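To get an explicit count for every region/position pair, including the zero ones, without typing out all twenty combinations, you can count the pairs once with Counter and then iterate over the cross product. A sketch; the full region and position lists here are assumptions, so substitute your real five regions and four positions:

    from collections import Counter
    from itertools import product

    regions = ["AMER'", "EMEA'", "JP'"]            # your 5 regions here
    positions = ['Check', 'Calculate', 'Enhance']  # your 4 positions here

    counts = Counter((rec['Region'], rec['Matrix Position'])
                     for rec in item_dict.values())

    for region, position in product(regions, positions):
        print region.strip("'"), 'and', position, '=', counts[(region, position)]

Counter returns 0 for missing keys, so pairs that never occur in the 6k records come out as zero counts automatically.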
Using python multiprocess outside the main script
Question: According to the [docs](https://docs.python.org/2/library/multiprocessing.html#multiprocessing-programming) of Python's multiprocessing, the spawning of processes needs to be inside the `if __name__ == '__main__':` clause to prevent spawning infinite processes. My question: is it possible to use multiprocessing inside of an import? Something along the lines of this; let's say I have this py which is the main executed file:

    import foo

    def main():
        foo.run_multiprocess()

    if __name__ == '__main__':
        main()

and the foo.py file which is imported:

    def run_multiprocess(number_to_check):
        if number_to_check == 5:
            print(number_to_check)

    if __name__ == '__main__':
        list_to_check = {1,2,3,4,5,6,7}
        pool = Pool(processes=4)
        pool.map(process_image, list_to_check)

Obviously this won't work because the code inside the if statement in foo.py won't run. Is there a way to make it work though?
Answer: Multiprocessing doesn't have to run within the `__main__` block; the `__main__` block only runs if the file is executed via `python filename.py`. So if you did:

`m1.py`:

    from multiprocessing import Pool

    def f(x):
        return x ** 2

    def f2():
        p = Pool(5)
        p.map(f, [1, 2, 3, 4, 5])

`m2.py`:

    from m1 import f2

    if __name__ == '__main__':  # required on Windows, good practice everywhere
        f2()

and then called `python m2.py`, your code would run correctly, with mp.
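Applied back to the question's layout, that means moving the pool out of foo.py's `__main__` block and into the function that the main script calls. A minimal sketch:

    # foo.py
    from multiprocessing import Pool

    def process_image(number_to_check):
        if number_to_check == 5:
            print(number_to_check)

    def run_multiprocess():
        list_to_check = [1, 2, 3, 4, 5, 6, 7]
        pool = Pool(processes=4)
        pool.map(process_image, list_to_check)
        pool.close()
        pool.join()

The guarded `if __name__ == '__main__':` block stays in the main script, which is the file that is actually executed, so importing foo never spawns processes on its own.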
Unable to login with python selenium error:NoSuchElementException error Question: I have tried locating the submit button by id and xpath but none of them worked and checked in the page source ,the id is same.I have no idea why this is happening even though I am giving the correct Id or xpath **URL :** <https://moodle.niituniversity.in/moodle/login/index.php> from pyvirtualdisplay import Display from selenium import webdriver from selenium.webdriver.common.keys import Keys display = Display(visible=0, size=(1024, 768)) display.start() driver = webdriver.Firefox() #driver.set_preference("browser.startup.homepage_override.mstone", "ignore") driver.get("https://moodle.niituniversity.in/moodle/login/index.php") username = driver.find_element_by_name("username") username.clear() username.send_keys("User123") username.send_keys(Keys.RETURN) password = driver.find_element_by_name("password") password.clear() password.send_keys("pass123") password.send_keys(Keys.RETURN) password = driver.find_element_by_xpath(".//*[@id='loginbtn']").click() driver.get("https://moodle.niituniversity.in/moodle") assert "user" in driver.page_source driver.close() display.stop() > .NoSuchElementException: Message: Unable to locate element: > {"method":"xpath","selector":".//*[@id='loginbtn']"} Answer: Might be possible this is timing issue, you should implement `WebDriverWait` to wait until button present on page as below :- from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC element = WebDriverWait(driver, 10).until( EC.presence_of_element_located((By.ID, "loginbtn"))) element.click() Full code : from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC driver.get("https://moodle.niituniversity.in/moodle/login/index.php") username = driver.find_element_by_name("username") username.clear() username.send_keys("User123") password = driver.find_element_by_name("password") password.clear() password.send_keys("pass123") button = WebDriverWait(driver, 10).until( EC.presence_of_element_located((By.ID, "loginbtn"))) button.click()
How to show image sequences as video clip on python
Question: I have a folder with a list of images which are frames extracted from a video clip. I wonder how I can sequentially play the image sequence just like the clip. (Hmm, FPS doesn't matter.) Checking the `PIL` module and the `skimage` module, it seems there is no way I can do it in Python, unless I convert the `JPG` sequence into `GIF` format.
Answer: One way would be to use the animation capability of the matplotlib library. Here is an example copy/pasted from the [online documentation](http://matplotlib.org/1.4.1/examples/animation/dynamic_image.html):

    #!/usr/bin/env python
    """
    An animated image
    """
    import numpy as np
    import matplotlib.pyplot as plt
    import matplotlib.animation as animation

    def f(x, y):
        return np.sin(x) + np.cos(y)

    x = np.linspace(0, 2 * np.pi, 120)
    y = np.linspace(0, 2 * np.pi, 100).reshape(-1, 1)

    fig = plt.figure()
    im = plt.imshow(f(x, y))

    def updatefig(*args):
        global x, y
        x += np.pi / 15.
        y += np.pi / 20.
        im.set_array(f(x, y))
        return im,

    ani = animation.FuncAnimation(fig, updatefig, interval=50, blit=True)
    plt.show()

Few things to note are:

* each image has to be converted to a numpy array
* use `im.set_array` (in the `updatefig` function) to load the next image
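Adapted to the question's setup, playing JPG frames from a folder, a minimal sketch might look like this; the 'frames/*.jpg' path is a placeholder:

    import glob

    import matplotlib.pyplot as plt
    import matplotlib.animation as animation
    import matplotlib.image as mpimg

    # load the frames in filename order; each one comes back as a numpy array
    frames = [mpimg.imread(f) for f in sorted(glob.glob('frames/*.jpg'))]

    fig = plt.figure()
    im = plt.imshow(frames[0])

    def update(i):
        im.set_array(frames[i])
        return im,

    ani = animation.FuncAnimation(fig, update, frames=len(frames),
                                  interval=40, blit=True)
    plt.show()

The interval argument is milliseconds between frames, so 40 gives roughly 25 FPS if frame rate ever does start to matter.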
Find closest point in Pandas DataFrames Question: I am quite new to Python. I have the following table in Postgres. These are Polygon values with four coordinates with same `Id` with `ZONE` name I have stored this data in Python dataframe called `df1` Id Order Lat Lon Zone 00001 1 50.6373473 3.075029928 A 00001 2 50.63740441 3.075068636 A 00001 3 50.63744285 3.074951754 A 00001 4 50.63737839 3.074913884 A 00002 1 50.6376054 3.0750528 B 00002 2 50.6375896 3.0751209 B 00002 3 50.6374239 3.0750246 B 00002 4 50.6374404 3.0749554 B I have Json data with `Lon` and `Lat` values and I have stored them is python dataframe called `df2`. Lat Lon 50.6375524099 3.07507914474 50.6375714407 3.07508201591 My task is to compare `df2` `Lat` and `Lon` values with four coordinates of each zone in `df1` to extract the zone name and add it to `df2`. For instance `(50.637552409 3.07507914474)` belongs to `Zone B`. #This is ID with Zone df1 = pd.read_sql_query("""SELECT * from "zmap" """,con=engine) #This is with lat,lon values df2 = pd.read_sql_query("""SELECT * from "E1" """,con=engine) df2['latlon'] = zip(df2.lat, df2.lon) zones = [ ["A", [[50.637347297, 3.075029928], [50.637404408, 3.075068636], [50.637442847, 3.074951754],[50.637378390, 3.074913884]]]] for i in range(0, len(zones)): # for each zone points X = mplPath.Path(np.array(zones[i][1])) # find if points are Zones Y= X.contains_points(df2.latlon.values.tolist()) # Label points that are in the current zone df2[Y, 'zone'] = zones[i][0] Currently I have done it manually for Zone 'A'. I need to generate the "Zones" for the coordinates in df2. Answer: This sounds like a good use case for [scipy cdist](http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html), also discussed [here](http://codereview.stackexchange.com/questions/28207/finding-the- closest-point-to-a-list-of-points). import pandas as pd from scipy.spatial.distance import cdist data1 = {'Lat': pd.Series([50.6373473,50.63740441,50.63744285,50.63737839,50.6376054,50.6375896,50.6374239,50.6374404]), 'Lon': pd.Series([3.075029928,3.075068636,3.074951754,3.074913884,3.0750528,3.0751209,3.0750246,3.0749554]), 'Zone': pd.Series(['A','A','A','A','B','B','B','B'])} data2 = {'Lat': pd.Series([50.6375524099,50.6375714407]), 'Lon': pd.Series([3.07507914474,3.07508201591])} def closest_point(point, points): """ Find closest point from a list of points. """ return points[cdist([point], points).argmin()] def match_value(df, col1, x, col2): """ Match value x from col1 row to value in col2. """ return df[df[col1] == x][col2].values[0] df1 = pd.DataFrame(data1) df2 = pd.DataFrame(data2) df1['point'] = [(x, y) for x,y in zip(df1['Lat'], df1['Lon'])] df2['point'] = [(x, y) for x,y in zip(df2['Lat'], df2['Lon'])] df2['closest'] = [closest_point(x, list(df1['point'])) for x in df2['point']] df2['zone'] = [match_value(df1, 'point', x, 'Zone') for x in df2['closest']] print(df2) # Lat Lon point closest zone # 0 50.637552 3.075079 (50.6375524099, 3.07507914474) (50.6375896, 3.0751209) B # 1 50.637571 3.075082 (50.6375714407, 3.07508201591) (50.6375896, 3.0751209) B
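With 6k+ rows on either side, calling cdist once per point gets slow, since each call scans the whole candidate list. scipy's cKDTree does the same nearest-neighbour lookup in one vectorised query. A sketch reusing the dataframes built above:

    from scipy.spatial import cKDTree

    tree = cKDTree(df1['point'].tolist())
    _, idx = tree.query(df2['point'].tolist())
    df2['zone'] = df1['Zone'].iloc[idx].values

tree.query returns, for every df2 point, the distance to and index of the closest df1 point, so the zone lookup collapses to a single positional indexing step. The same caveat applies as with cdist: this treats lat/lon as plane coordinates, which is fine for points this close together.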
Segmentation fault when Connect to MySQL in python 3.5.1 on AIX 6 Question: I tried to do the following tasks:

1 Compile the Python 3.5.1 source using GCC 4.2.0 on AIX 6.0;

2 Use Python 3.5.1 for my work, including connecting to and using a MySQL database;

I can compile the Python 3.5.1 source successfully, and everything works except connecting to and using the database:

    $/usr/local/bin/python3.5
    Python 3.5.1 (default, Aug 12 2016, 15:48:31)
    [GCC 4.2.0] on aix6
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import sys
    >>> dir(sys.path)
    ['__add__', '__class__', '__contains__', '__delattr__', '__delitem__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__iadd__', '__imul__', '__init__', '__iter__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__reversed__', '__rmul__', '__setattr__', '__setitem__', '__sizeof__', '__str__', '__subclasshook__', 'append', 'clear', 'copy', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort']

Then I tried to install PyMySQL-0.7.6, which was working well for me on Linux and Windows, and it installed successfully. But unfortunately, when I tried to use it to connect to the MySQL database, it gave me a 'Segmentation fault(coredump)' error and aborted Python automatically:

    >>> import pymysql
    >>> connection = pymysql.connect(host='150.17.31.113',user='sywu',password='sywu',db='sydb',charset='utf8mb4',cursorclass=pymysql.cursors.DictCursor);
    Segmentation fault(coredump)
    $

Again and again, it is always like this. I read the core file, but it contains unreadable contents and I can't figure out what the problem is.

Since I couldn't do it with pymysql, I tried installing mysql-connector-python 2.1.3. It installed successfully, but I got an 'Illegal instruction(coredump)' error and it aborted Python automatically:

    Type "help", "copyright", "credits" or "license" for more information.
    >>> import mysql.connector
    >>> cnx=mysql.connector.connect(user='sywu',password='sywu',host='150.17.31.113',database='sydb')
    Illegal instruction(coredump)
    $

Has anyone done this successfully on AIX? Any help? Answer: I haven't used AIX, but from the code snippet I could figure that, per <https://github.com/PyMySQL/PyMySQL/blob/master/example.py>, the parameter is `passwd`, not `password`.

    import pymysql
    conn = pymysql.connect(host='localhost', port=3306, user='root', passwd='', db='mysql')

Also, in the same library, <https://github.com/PyMySQL/PyMySQL>, look at the example. Maybe it can help you out.
Python getopt optional command line parameters Question: I finished writing a script in Python and am now stuck on the interface, which requires getting a few options from the user, but I am not sure what the best way is to get optional arguments. The code for that is below:

    def getOptions(argv):
        try:
            opts,args = getopt.getopt(argv, "hi:c:d:m", ["ifile=", "add=", "delete"])
        except getopt.GetoptError:
            printUsage()
            sys.exit(2)
        for opt, arg in opts:
            if opt in ("-h", "--help"):
                print ("test -m <make> [src] [dst]\n")
                print ("test -i <install>[filename] \n")
                .....
                sys.exit()
            if opt in ( "-m", "--make"):
                make(arg)
                sys.exit()
            if opt in ("-i","--install"):
                install(arg)
                sys.exit()
            ...  # few more options
            else:
                assert False, "unhandled option"

My question is: how can I leave out the argument (i.e. use a default optional path for `arg`), and if it is not provided, get it from the user? Currently I have to provide `./test -i <filename>`; how can I leave out the file name and call it like `./test -i`? Secondly, how can I combine two options (without any argument), for example `./test -i -m`? Answer: From <https://docs.python.org/2/library/getopt.html>: getopt does not support optional option arguments.

Can you try writing your code using argparse instead? The following is an example:

    import argparse

    parser = argparse.ArgumentParser(description='python cli')
    parser.add_argument("-m", "--make", help="execute make")
    parser.add_argument("-i", "--install", help="execute install")

    # parse input arguments
    args = parser.parse_args()

    if args.make:
        make(args.make)
    if args.install:
        install(args.install)
    ...

Reference: <https://docs.python.org/2.7/library/argparse.html>
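To make the option's *value* itself optional (the `./test -i` with no filename case), argparse's `nargs='?'` plus `const` does the trick. A sketch; the default filename here is made up:

    import argparse

    parser = argparse.ArgumentParser()
    # nargs='?' makes the value optional:
    #   ./test -i            -> args.install == 'default.bin' (const kicks in)
    #   ./test -i other.bin  -> args.install == 'other.bin'
    #   ./test               -> args.install is None
    parser.add_argument('-i', '--install', nargs='?', const='default.bin')
    parser.add_argument('-m', '--make', action='store_true')  # plain flag, no value
    args = parser.parse_args()

    if args.install is not None:
        print('installing', args.install)
    if args.make:
        print('making')

Since `-m` is a flag and `-i` falls back to its `const` when the next token looks like an option, `./test -i -m` combines both without any arguments.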
How to decode a mime part of a message and get a **unicode** string in Python 2.7? Question: Here is a method which tries to get the html part of an email message: from __future__ import absolute_import, division, unicode_literals, print_function import email html_mail_quoted_printable=b'''Subject: =?ISO-8859-1?Q?WG=3A_Wasenstra=DFe_84_in_32052_Hold_Stau?= MIME-Version: 1.0 Content-type: multipart/mixed; Boundary="0__=4EBBF4C4DFD012538f9e8a93df938690918c4EBBF4C4DFD01253" --0__=4EBBF4C4DFD012538f9e8a93df938690918c4EBBF4C4DFD01253 Content-type: multipart/alternative; Boundary="1__=4EBBF4C4DFD012538f9e8a93df938690918c4EBBF4C4DFD01253" --1__=4EBBF4C4DFD012538f9e8a93df938690918c4EBBF4C4DFD01253 Content-type: text/plain; charset=ISO-8859-1 Content-transfer-encoding: quoted-printable Freundliche Gr=FC=DFe --1__=4EBBF4C4DFD012538f9e8a93df938690918c4EBBF4C4DFD01253 Content-type: text/html; charset=ISO-8859-1 Content-Disposition: inline Content-transfer-encoding: quoted-printable <html><body> Freundliche Gr=FC=DFe </body></html> --1__=4EBBF4C4DFD012538f9e8a93df938690918c4EBBF4C4DFD01253-- --0__=4EBBF4C4DFD012538f9e8a93df938690918c4EBBF4C4DFD01253-- ''' def get_html_part(msg): for part in msg.walk(): if part.get_content_type() == 'text/html': return part.get_payload(decode=True) msg=email.message_from_string(html_mail_quoted_printable) html=get_html_part(msg) print(type(html)) print(html) Output: <type 'str'> <html><body> Freundliche Gr��e </body></html> Unfortunately I get a byte string. I would like to have unicode string. According to [this answer](http://stackoverflow.com/questions/27550567/python- email-payload-decoding) `msg.get_payload(decode=True)` should do the magic. But it does not in this case. How to decode a mime part of a message and get a **unicode** string in Python 2.7? Answer: > Unfortunately I get a byte string. I would like to have unicode string. The `decode=True` parameter to `get_payload` only decodes the `Content- Transfer-Encoding` wrapper, the `=`-encoding in this message. To get from there to characters is one of the many things the `email` package makes you do yourself: bytes = part.get_payload(decode=True) charset = part.get_content_charset('iso-8859-1') chars = bytes.decode(charset, 'replace') (`iso-8859-1` being the fallback in case the message specifies no encoding.)
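Putting that together, `get_html_part` itself can return unicode. A sketch of the adjusted helper, assuming the same message layout as in the question:

    def get_html_part(msg):
        for part in msg.walk():
            if part.get_content_type() == 'text/html':
                payload = part.get_payload(decode=True)      # undo quoted-printable
                charset = part.get_content_charset('iso-8859-1')  # fallback charset
                return payload.decode(charset, 'replace')    # bytes -> unicode
        return None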
pass array of dictionaries to python Question: I have a bash script that builds a dictionary-like structure over multiple iterations, as shown below:

    { "a":"b", "c":"d", "e":"f"} { "a1":"b1", "c1":"d1", "e1":"f1", "g1":"h1" }

I have appended all of them to an array in the shell script, and they are to be fed as input to a Python script; I want the above data to be parsed as a list of dictionaries. I tried something like this and it didn't work:

    var=({ "a":"b", "c":"d", "e":"f"} { "a1":"b1", "c1":"d1", "e1":"f1", "g1":"h1" })

    function plot_graph {
    RESULT="$1"
    python - <<END
    from __future__ import print_function
    import pygal
    import os
    import sys

    def main():
       result = os.getenv('RESULT')
       print(result)

    if __name__ == "__main__":
       main()
    END
    }
    plot_graph ${var[@]}

The arguments are being split and are not treated as a single variable. The output will be: `[ {"a":"b", ]`, whereas I want the entire var value to be read as one string, which I can then split into multiple dictionaries. Please help me get past this. Answer: The problem is how `plot_graph` is called: an unquoted `$var` is split on whitespace, so only the first word reaches `$1`. The following code should work:

    var="({ 'a':'b', 'c':'d', 'e':'f'} { 'a1':'b1', 'c1':'d1', 'e1':'f1', 'g1':'h1' })"
    echo $var

    function plot_graph {
    echo $1
    export RESULT="$1"   # export it, or the child python process won't see it

    python - <<END
    from __future__ import print_function
    import os
    import sys

    def main():
        result = os.getenv('RESULT')
        print(result)

    if __name__ == "__main__":
        main()
    END
    }

    plot_graph "$var"
Conditional slicing in Scala Breeze Question: I try to slice a `DenseVector` based on an elementwise boolean condition on another `DenseVector`:

    import breeze.linalg.DenseVector

    val x = DenseVector(1.0, 2.0, 3.0)
    val y = DenseVector(10.0, 20.0, 30.0)

    // I want a new DenseVector containing all elements of y where x > 1.5
    // i.e. I want DenseVector(20.0, 30.0)
    val newy = y(x :> 1.5) // does not give a DenseVector but a SliceVector

With Python/Numpy, I would just write `y[x>1.5]` Answer: Using Breeze you can zip the two vectors and filter with a for comprehension:

    val x = DenseVector(1.0, 2.0, 3.0)
    val y = DenseVector(10.0, 20.0, 30.0)

    val newY = DenseVector(
      for ((xv, yv) <- x.toArray zip y.toArray if xv > 1.5) yield yv
    )
    // newY == DenseVector(20.0, 30.0)

Note that the `SliceVector` you already get from `y(x :> 1.5)` is a lazy view; if that expression compiles in your Breeze version, calling `.toDenseVector` on it should materialize the result as a `DenseVector` directly.
I am getting an error when printing the pixel value of the read image in python-opencv, TypeError: 'NoneType' object has no attribute '__getitem__' Question: I have freshly installed opencv, and checked that it is properly installed by typing:

    pkg-config --modversion opencv

at the command terminal. I started using python-opencv for reading and displaying an image, but when I run my code, it throws an error:

    TypeError: 'NoneType' object has no attribute '__getitem__'

My code is very minimal, but I can't find where the error is. The code which I am running is:

    import cv2
    import numpy as np
    from matplotlib import pyplot as plt
    import argparse

    img = cv2.imread('messi5.jpg')
    print img
    print "end of file"

It gives the output:

    None
    end of file

When I add two more lines like this:

    px = img[100,100]
    print px

then it throws an error:

    Traceback (most recent call last):
      File "testing_opencv_python.py", line 23, in <module>
        px = img[100,100]
    TypeError: 'NoneType' object has no attribute '__getitem__'

The same code runs perfectly on other systems. I would be thankful if you can point out the mistake. I basically want to install caffe, but when I did that I was getting an error; it seems to depend on OpenCV, which is why I installed OpenCV. Thanks and regards. Answer: The returned image is `None` (you can see it when you print it), which is causing the other error down the line. This is most likely due to specifying the wrong image path ('messi5.jpg'). In the documentation [here](http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_image_display/py_image_display.html), it states:

> Warning: Even if the image path is wrong, it won't throw any error, but `print img` will give you `None`

Either provide a correct path to 'messi5.jpg', or copy the image into your current directory (where you execute the Python script).
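A cheap way to fail fast on a bad path is to check the return value explicitly. A small sketch (the path is whatever yours actually is):

    import cv2

    img = cv2.imread('messi5.jpg')
    if img is None:  # imread returns None instead of raising on a bad path
        raise IOError("could not read 'messi5.jpg' - check the path")

    px = img[100, 100]
    print px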
python shapely: retrieve the x,y,z value of a LinearRing Question: I'm trying to plot a LinearRing that shows the difference of the crossover of two Polygons:

    import mpl_toolkits.mplot3d as a3
    import matplotlib.pyplot as plt
    from shapely.geometry import Polygon

    fig = plt.figure()
    ax = Axes3D(fig)

    poly1 = Polygon([(220.0, 780, 500), (840, 780, 500), (840, 180, 500), (220.0, 180, 500)])
    poly2 = Polygon ([(320.0, 380, 500), (740, 380, 500), (740, 180, 500), (320.0, 180, 500)])

    dif = poly1.difference(poly2)

I'd like to plot dif; however, when using:

    top1 = a3.art3d.Poly3DCollection([dif],alpha=0.6)

I get an error saying "TypeError: 'Polygon' object is not iterable"

I therefore try to get the x,y,z coordinates of dif and plot them, but I've only managed to get the x,y ones. For the sake of testing, I currently feed in the Z value manually:

    z= [500,500,500,500,500,500,500,500]
    x,y = a.exterior.xy
    zipped = list(zip (x,y,z))

    top1 = a3.art3d.Poly3DCollection([zipped],alpha=0.6)
    top1.set_color('wheat')
    top1.set_edgecolor('k')
    ax.add_collection3d(top1)

    ax.set_xlim(0, 1000)
    ax.set_ylim(0, 1000)
    ax.set_zlim(0, 1000)

    plt.show()

I then get the plot I am after, but I'm looking for an easier way to plot dif. Answer: With the `exterior` attribute you are using, you can get the coords:

    top1 = a3.art3d.Poly3DCollection([dif.exterior.coords],alpha=0.6)

Beware: with this code, and also with your solution, you are losing the interior rings of the polygon, if there are any. Notice that `coords` returns an iterable; if you want a list you must create it (`list(dif.exterior.coords)`). For a polygon with interior rings, you could handle it as in this example:

    poly = Point(0,0).buffer(500)
    poly = Polygon([ (x, y, 500) for x, y in poly.exterior.coords ])

    spoly = Point(0,0).buffer(200)
    spoly = Polygon([ (x, y, 500) for x, y in spoly.exterior.coords ])

    poly = poly.difference(spoly)

    all_coords = [p.coords for p in poly.interiors]
    all_coords.append(poly.exterior.coords)

    top1 = a3.art3d.Poly3DCollection(all_coords,alpha=0.6)

IMHO, using shapely with 3D polygons is misleading, since shapely only handles 2D polygons, and will bring you problems if you spread the 3D part all over the code that uses Shapely. Besides, someone reading the code could think you are actually doing 3D math when you are not. Perhaps you can isolate all 2D computation with shapely, and after that add the 3D part.
Display Path of a file in Tkinter using "browse" Button - Python Question: I have been reading through several posts regarding Browse button issues in Tkinter, but I could not find my answer. So I have written this code to get a directory path when clicking the browse button, and to display this path in an entry field. It works partly: a file browser window pops up immediately when I run the script. I do get the path in the entry field, but if I then want to change the folder using my Browse button it does not work. I don't want the browser popping up right from the start, but only when I click on Browse! Thanks for your answers.

    from Tkinter import *
    from tkFileDialog import askdirectory

    window = Tk()  # user input window
    MyText= StringVar()

    def DisplayDir(Var):
        feedback = askdirectory()
        Var.set(feedback)

    Button(window, text='Browse', command=DisplayDir(MyText)).pack()
    Entry(window, textvariable = MyText).pack()
    Button(window, text='OK', command=window.destroy).pack()

    mainloop()

Answer: This is quite simple: you need to assign the path to a variable and then display it:

    from tkinter import *
    from tkinter import filedialog

    root = Tk()

    def browsefunc():
        filename = filedialog.askopenfilename()
        pathlabel.config(text=filename)

    browsebutton = Button(root, text="Browse", command=browsefunc)
    browsebutton.pack()

    pathlabel = Label(root)
    pathlabel.pack()

    root.mainloop()

**P.S.:** This is in Python 3. But the concept is _same_.
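The reason the original button never worked: `command=DisplayDir(MyText)` calls the function once at construction time and passes its return value (`None`) as the callback, which is also why the browser popped up at startup. A sketch of the minimal fix to the original code, keeping the Python 2 `Tkinter` imports:

    from Tkinter import *
    from tkFileDialog import askdirectory

    window = Tk()
    MyText = StringVar()

    def DisplayDir(var):
        feedback = askdirectory()
        var.set(feedback)

    # lambda defers the call until the button is actually clicked
    Button(window, text='Browse', command=lambda: DisplayDir(MyText)).pack()
    Entry(window, textvariable=MyText).pack()
    Button(window, text='OK', command=window.destroy).pack()

    mainloop()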
How to get GETted json data in Flask Question: I am implementing a REST API in Python using Flask. I have to get parameters to perform a query and return resources. To be aligned with `REST` principles, I am going to use a `GET` request for this operation. Given that there can be a lot of parameters, I want to send them through a `conf.json` file, for instance:

    {"parameter": "xxx"}

I perform the request through `curl`:

> $ curl -H "Content-Type: application/json" --data @conf.json -G
> <http://localhost:8080/resources/>

The request is redirected to the route with these operations:

    @resources.route('/resources/', methods=['GET'])
    def discover():
        if request.get_json():
            json_data=request.get_json()
        return jsonify(json_data)

What I get back is:

    <head>
    <title>Error response</title>
    </head>
    <body>
    <h1>Error response</h1>
    <p>Error code 400.
    <p>Message: Bad request syntax ('GET /resources/?{"parameter": "xxx"} HTTP/1.1').
    <p>Error code explanation: 400 = Bad request syntax or unsupported method.
    </body>

Does anybody know how to get the json data and properly handle it in the request? Answer: `request.get_json()` looks for JSON data in the request _body_ (e.g. what a POST request would include). You put the JSON data in the _URL query string_ of a GET request instead.

Your `curl` command sends your JSON un-escaped, and produces an invalid URL, so the server _rightly_ rejects that:

    http://localhost:8080/resources/?{"parameter": "xxx"}

You can't have spaces in a URL, for example. You'd have to use `--data-urlencode` instead for this to be escaped properly:

    $ curl --data-urlencode @conf.json -G http://localhost:8080/resources/

Note that the `Content-Type` header is not needed here; you don't have any request body to record the content _of_. The adjusted `curl` command now sends a properly encoded URL:

    http://localhost:8080/resources/?%7B%22parameter%22%3A%20%22xxx%22%7D%0A%0A

Access that data with `request.query_string`. You will also have to _decode_ the URL encoding again before passing this to `json.loads()`:

    from urllib import unquote

    json_raw_data = unquote(request.query_string)
    json_data = json.loads(json_raw_data)

Take into account that many webservers put limits on how long a URL they'll accept. If you are planning on sending more than 4k characters in a URL this way, you really need to reconsider and use `POST` requests instead. That's 4k with the JSON data _URL encoded_ , which adds a considerable overhead.
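Wired into the Flask view, that looks roughly like this (Python 2, matching the answer's `urllib` import; error handling for malformed JSON is omitted):

    import json
    from urllib import unquote

    from flask import jsonify, request

    @resources.route('/resources/', methods=['GET'])
    def discover():
        raw = unquote(request.query_string)   # undo the URL encoding
        params = json.loads(raw)              # back to a dict
        return jsonify(params)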
Python memory when plotting figures in a loop Question: I am trying to print a sequence of images in a code loop. I will ultimately need to print around 1000 to show how my system varies with time. I have reviewed the methods outlined in [Matplotlib runs out of memory when plotting in a loop](http://stackoverflow.com/questions/2364945/matplotlib-runs-out-of-memory-when-plotting-in-a-loop) but I still can't make the code produce more than 96 images or so. The code I am using, in its stripped-out form, is below:

    import numpy as np
    import matplotlib as mpl
    import os

    def plotHeatMap(graphname,graphtext,xAxis,yAxis,xMn,xMx,xCnt,yMn,yMx,yCnt,TCrt):
        plt = mpl.pyplot
        fig = plt.figure(figsize=(8,7), dpi=250)
        cmap = mpl.cm.jet
        norm = mpl.colors.Normalize(vmin=-3, vmax=3)
        X = np.linspace(xMn,xMx,xCnt)
        Y = np.linspace(yMn,yMx,yCnt)
        plt.xlabel(xAxis)
        plt.ylabel(yAxis)
        plt.pcolormesh(X,Y,TCrt, cmap=cmap,norm=norm)
        plt.grid(color='w')
        plt.suptitle(graphtext, fontsize=14)
        plt.colorbar()
        plt.savefig(graphname, transparent = True)
        plt.cla()
        plt.clf()
        plt.close(fig)
        del plt
        del fig
        return

This is used in a simple loop as shown below

    for loop1 in range(0,10):
        for loop2 in range(0,100):
            saveName = 'Test_Images/' + str(loop1) + '_' + str(loop2) + '.png'
            plotHeatMap(saveName,'Test','X','Y',-35,35,141,-30,30,121,Z)

Any advice on why the above is not releasing memory, causing the traceback message

    RuntimeError: Could not allocate memory for image

Many thanks for any help provided Answer: Here is one stripped example of what you can do. As pointed out by Ajean, you should not re-import pyplot inside the function as you did; importing it once at the top is enough. Also, do not delete the figure and create a new one; it is better to reuse the same figure and just replace the data.

    import numpy as np
    import matplotlib.pyplot as plt

    def plotHeatMap(fig, line, x, y, graphname):
        line.set_data(x, y)
        fig.canvas.draw()
        fig.savefig(graphname)

    fig1, ax1 = plt.subplots(1, 1)
    line, = ax1.plot([],[])
    ax1.set_xlim(0, 1)
    ax1.set_ylim(0, 1)

    for loop1 in range(0, 2):
        for loop2 in range(0, 2):
            x = np.random.random(100)
            y = np.random.random(100)
            save_name = 'fig_'+str(loop1) + '_' + str(loop2) + '.png'
            plotHeatMap(fig1, line, x, y, save_name)
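For the heat-map case specifically, the same reuse trick applies to `pcolormesh`: create the QuadMesh once and feed it new data each iteration. A sketch, where `make_data` is a hypothetical stand-in for whatever produces each frame's `TCrt`:

    import numpy as np
    import matplotlib as mpl
    import matplotlib.pyplot as plt

    X = np.linspace(-35, 35, 141)
    Y = np.linspace(-30, 30, 121)

    fig, ax = plt.subplots(figsize=(8, 7), dpi=250)
    norm = mpl.colors.Normalize(vmin=-3, vmax=3)
    # C must be one smaller than the grid in each dimension
    mesh = ax.pcolormesh(X, Y, np.zeros((len(Y) - 1, len(X) - 1)),
                         cmap=mpl.cm.jet, norm=norm)
    fig.colorbar(mesh)

    for loop1 in range(10):
        for loop2 in range(100):
            TCrt = make_data(loop1, loop2)    # hypothetical helper
            mesh.set_array(TCrt.ravel())      # QuadMesh wants it flattened
            fig.savefig('Test_Images/%d_%d.png' % (loop1, loop2),
                        transparent=True)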
Trouble With Python Code Question: I am taking the introduction to computer science class at Udacity, and for one of the assignments I must write code that will take all the links from a webpage. Here is the code

    def get_next_target(page):
        start_link = page.find('<a href=')
        while True:
            if start_link == -1:
                x, y = None, 0
                return x, y
                break
            start_quote = page.find('"', start_link)
            end_quote = page.find('"', start_quote + 1)
            url = page[start_quote + 1:end_quote]
            return url, end_quote

When I run samples, it seems to work, but when I submit my code, I get the result that my submission did not terminate. What does this mean? What is the issue with my code? Answer:

    def get_next_target(page, start=0):
        """Find the next link in page, searching from position start."""
        start_link = page.find('<a href=', start)
        if start_link == -1:
            return None, None
        start_quote = page.find('"', start_link)
        end_quote = page.find('"', start_quote + 1)
        url = page[start_quote + 1:end_quote]
        return url, end_quote

    def find_all(page):
        """Find all links in the page."""
        urls = []
        current_position = 0
        while True:
            # get the next url and advance current_position, so the next
            # iteration only searches the rest of the page
            url, current_position = get_next_target(page, current_position)
            if url is None:
                return urls
            urls.append(url)

But I would recommend using regular expressions, something like:

    def find_all(page):
        import re
        return re.findall('<a href="(.+?)"', page)

**Edit:** Neither solution will detect links like:

    <a href='some/page'>, or <a title="ti" href="some/page" >

For those you will need to extend the regular expression, which is the better option IMHO.
How to upload multiple answers with image bytestring data? Question: According to the Consumer Surveys docs, the `questions[].images[].data` field takes a bytes datatype. I'm using Python 3 for the implementation, but the API is giving errors like `Invalid ByteString`, or that the bytes type `is not JSON serializable`. I'm using the following code:

    import base64
    import urllib.request

    url = 'http://example.com/image.png'
    raw_img = urllib.request.urlopen(url).read()

    # is not JSON serializable due to json serializer not being able to serialize raw bytes
    img_data = raw_img

    # next errors: Invalid ByteString, when tried with base64 encoding as follows:
    img_data = base64.b64encode(raw_img)

    # Also tried decoding it to UTF-8 `.decode('utf-8')`

`img_data` is part of the JSON payload that is being sent to the API. Am I missing something? What's the correct way to handle image data upload for questions? I looked into `https://github.com/google/consumer-surveys/tree/master/python/src` but there is no example of this part. Thanks Answer: You need to use web-safe/URL-safe encoding. Here's some documentation on doing this in Python: <https://pymotw.com/2/base64/#url-safe-variations>

In your case, this would look like

    img_data = base64.urlsafe_b64encode(raw_img)

**ETA:** In Python 3, the API expects the image data to be of type `str` so it can be JSON serialized, but the `base64.urlsafe_b64encode` method returns the data in the form of UTF-8 `bytes`. You can fix this by converting the bytes to Unicode:

    img_data = base64.urlsafe_b64encode(raw_img)
    img_data = img_data.decode('utf-8')
Avoid duplicate result Multithread Python Question: I'm trying to make my crawler multithreaded. When I set up multithreading, several instances of the function are started.

**Example:**

If inside my function I use `print range(5)`, I will get `1,1,2,2,3,3,4,4,5,5` with 2 threads.

How can I get the result `1,2,3,4,5` with multithreading? My actual code is a crawler, as you can see below:

    import requests
    from bs4 import BeautifulSoup

    def trade_spider(max_pages):
        page = 1
        while page <= max_pages:
            url = "http://stackoverflow.com/questions?page=" + str(page)
            source_code = requests.get(url)
            plain_text = source_code.text
            soup = BeautifulSoup(plain_text, "html.parser")
            for link in soup.findAll('a', {'class': 'question-hyperlink'}):
                href = link.get('href')
                title = link.string
                print(title)
                get_single_item_data("http://stackoverflow.com/" + href)
            page += 1

    def get_single_item_data(item_url):
        source_code = requests.get(item_url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, "html.parser")
        res = soup.find('span', {'class': 'vote-count-post '})
        print("UpVote : " + res.string)

    trade_spider(1)

How can I call `trade_spider()` in multithread without duplicate links? Answer: Have the page number be an argument to the `trade_spider` function. Call the function in each process with a different page number, so that each worker gets a unique page.

For example:

    import multiprocessing

    def trade_spider(page):
        url = "http://stackoverflow.com/questions?page=%s" % (page,)
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, "html.parser")
        for link in soup.findAll('a', {'class': 'question-hyperlink'}):
            href = link.get('href')
            title = link.string
            print(title)
            get_single_item_data("http://stackoverflow.com/" + href)

    # Pool of 10 processes
    max_pages = 100
    num_pages = range(1, max_pages)
    pool = multiprocessing.Pool(10)

    # Run and wait for completion.
    # pool.map returns results from the trade_spider
    # function call but that returns nothing
    # so ignoring it
    pool.map(trade_spider, num_pages)
Export and Download a list to csv file using Python Question: I have a list:

    lista.append(rede)

When I print it, it shows:

    [{'valor': Decimal('9000.00'), 'mes': 'Julho', 'nome': 'ALFANDEGA 1'},
     {'valor': Decimal('12000.00'), 'mes': 'Julho', 'nome': 'AMAZONAS SHOPPING 1'},
     {'valor': Decimal('600.00'), 'mes': 'Agosto', 'nome': 'ARARUAMA 1'},
     {'valor': Decimal('21600.00'), 'nome': 'Rede Teste Integra\xc3\xa7\xc3\xa3o'},
     {'valor': Decimal('3000.00'), 'mes': 'Agosto', 'nome': 'Mercatto Teste 1'},
     {'valor': Decimal('5000.00'), 'mes': 'Agosto', 'nome': 'Mercatto Teste 2'},
     {'valor': Decimal('8000.00'), 'nome': 'Rede Teste Integra\xc3\xa7\xc3\xa3o 2'}]

I would like to export it to a CSV file and download it; could you help me? Answer: You can use this code to transform your data to CSV:

    def Decimal(value):  # quick and dirty deal with your Decimal thing in the json
        return value

    data = [{'valor': Decimal('9000.00'), 'mes': 'Julho', 'nome': 'ALFANDEGA 1'},
            {'valor': Decimal('12000.00'), 'mes': 'Julho', 'nome': 'AMAZONAS SHOPPING 1'},
            {'valor': Decimal('600.00'), 'mes': 'Agosto', 'nome': 'ARARUAMA 1'},
            {'valor': Decimal('21600.00'), 'nome': 'Rede Teste Integra\xc3\xa7\xc3\xa3o'},
            {'valor': Decimal('3000.00'), 'mes': 'Agosto', 'nome': 'Mercatto Teste 1'},
            {'valor': Decimal('5000.00'), 'mes': 'Agosto', 'nome': 'Mercatto Teste 2'},
            {'valor': Decimal('8000.00'), 'nome': 'Rede Teste Integra\xc3\xa7\xc3\xa3o 2'}]

    mes = []
    nome = []
    valor = []
    for i in data:
        mes.append(i.get('mes',""))
        nome.append(i.get('nome',""))
        valor.append(i.get('valor',""))

    import csv
    f = open("file.csv", 'wt')
    try:
        writer = csv.writer(f)
        writer.writerow( ('mes', 'nome', 'valor') )
        for i in range(0,len(mes)):
            writer.writerow((mes[i], nome[i], valor[i]))
    finally:
        f.close()
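Since the rows are already dicts, `csv.DictWriter` can shortcut the three intermediate lists. A sketch over the same `data` list; rows without a `mes` key come out empty via `restval`:

    import csv

    fieldnames = ['mes', 'nome', 'valor']
    f = open('file.csv', 'wt')
    try:
        writer = csv.DictWriter(f, fieldnames=fieldnames, restval='')
        writer.writeheader()
        writer.writerows(data)
    finally:
        f.close()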
Analyzing data from continuous input stream (recording) in Python, multiprocessing? Question: I am analyzing data coming from recording with my microphone in real-time. So far I have been doing this in a linear fashion:

 * Record one second (takes 1s)
 * Analyze the data (takes for example 50 ms)
 * Record one second
 * Analyze the data

And so forth. This obviously means that while I'm analyzing the data from the past second, I am losing this 50 ms of time; I won't be recording sound during it.

I thought multiprocessing would be the solution: I start a separate process that non-stop records in chunks of a certain length and each time sends it through a pipe to the main process, which then analyzes the data. Unfortunately, sending a lot of data through a pipe (or in general, sending a lot of data from one process to another) is apparently far from ideal. Is there any other way to do this? I just want my computer to record data and import it into Python (all of which I'm already doing), while it's also analyzing data.

If I need to add any more details, let me know! Thanks! Answer: Simple producer/consumer implementation. While it's true that moving data back and forth induces overhead and increases memory use, as long as the same data is not needed by more than one process, that overhead is minimal. Try it and find out :) You can adjust the memory footprint by changing the queue and pool size numbers.

Threading is another option to reduce memory use, but at the expense of being blocked on the GIL and effectively single-threaded if the processing is in Python bytecode.

    import multiprocessing

    # Use a managed queue so it can be passed to pool workers as an argument
    # (a plain multiprocessing.Queue can only be shared through inheritance).
    # Some fixed size to avoid runaway memory use.
    manager = multiprocessing.Manager()
    recorded_data = manager.Queue(100)

    def process(recorded_data):
        while True:
            data = recorded_data.get()
            # <process data here>

    def record(recorded_data):
        for data in input_stream:  # your recording loop
            recorded_data.put(data)

    producer = multiprocessing.Process(target=record, args=(recorded_data,))
    producer.start()

    # Pool of 10 processes
    num_proc = 10
    consumer_pool = multiprocessing.Pool(num_proc)

    results = []
    for _ in xrange(num_proc):
        results.append(
            consumer_pool.apply_async(process, args=(recorded_data,)))

    producer.join()

    # If processing actually returns something
    for result in results:
        print result

    # Consumers wait for data from the queue forever,
    # so terminate them when done
    consumer_pool.terminate()
Python Reportlab combine paragraph Question: I hope you can help me with combining paragraphs. My style is called "cursiva" and works perfectly; I have others too, and the behaviour is the same if I change cursiva to another one. The issue is that if I use this code I get this:

[![enter image description here](http://i.stack.imgur.com/OBlSY.png)](http://i.stack.imgur.com/OBlSY.png)

As you can see, it shows with a line break, and I need it shown together. The problem is that I need to make it like this (one, one), together, because I need to use two styles. The issue here is that I'm using Arial Narrow, so if I use italic or bold I need to use each one separately, because the typeface does not allow me to use `<i>italic text</i>`; so I need two different styles that actually work fine separately. How can I achieve this?

    cursiva = ParagraphStyle('cursiva')
    cursiva.fontSize = 8
    cursiva.fontName= "Arialni"

    incertidumbre=[]
    incertidumbre.extend([Paragraph("one", cursiva), Paragraph("one", cursiva)])

Thank you guys Answer: The question you are asking is actually caused by a workaround for a different problem, namely that you don't know how to register font families in Reportlab. That is what is needed to make `<i>` and `<b>` work. You probably already managed to add a custom font, so the first part should look familiar; the final line is probably the missing link: it registers the combination of these fonts as a family.

    from reportlab.pdfbase import pdfmetrics
    from reportlab.pdfbase.ttfonts import TTFont
    from reportlab.pdfbase.pdfmetrics import registerFontFamily

    pdfmetrics.registerFont(TTFont('Arialn', 'Arialn.ttf'))
    pdfmetrics.registerFont(TTFont('Arialnb', 'Arialnb.ttf'))
    pdfmetrics.registerFont(TTFont('Arialni', 'Arialni.ttf'))
    pdfmetrics.registerFont(TTFont('Arialnbi', 'Arialnbi.ttf'))
    registerFontFamily('Arialn',normal='Arialn',bold='Arialnb',italic='Arialni',boldItalic='Arialnbi')
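Once the family is registered, a single style with inline markup should keep everything in one paragraph. A quick sketch, reusing the style values from the question:

    from reportlab.lib.styles import ParagraphStyle
    from reportlab.platypus import Paragraph

    cursiva = ParagraphStyle('cursiva')
    cursiva.fontSize = 8
    cursiva.fontName = 'Arialn'  # the registered family name

    # <i>/<b> now resolve to Arialni/Arialnb automatically,
    # so both words stay in the same paragraph
    para = Paragraph('one <i>one</i>', cursiva)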
Set shell environment variable via python script Question: I have some instrument which requires an environment variable that I want to set automatically from Python code. I tried several ways to make it happen, but none of them were successful. Here are some examples:

1. I inserted the following code in my Python script:

    import os
    os.system("export ENV_VAR=/some_path")

2. I created a bash script (env.sh) and ran it from Python:

    #!/bin/bash
    export ENV_VAR=some_path

    #call it from python
    os.system("source env.sh")

3. I also tried `os.putenv()` and `os.environ["ENV_VAR"] = "some_path"`

> Is it possible to set (export) an environment variable using Python, i.e. without directly exporting it to the shell?

Answer: Setting an environment variable sets it only for the current process and any child processes it launches. So using `os.system` will set it only for the shell that is running to execute the command you provided. When that command finishes, the shell goes away, and so does the environment variable. Setting it using `os.putenv` or `os.environ` has a similar effect; the environment variables are set for the Python process and any children of it.

I assume you are trying to have those variables set for the shell that you launch the script from, or globally. That can't work, because the shell (or other process) is not a child of the Python script in which you are setting the variable.
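A small demonstration of that scoping, a sketch you can run to see which processes actually observe the variable:

    import os
    import subprocess

    os.environ['ENV_VAR'] = '/some_path'

    # Visible inside this Python process:
    print(os.environ['ENV_VAR'])

    # Visible in child processes launched from here:
    subprocess.call('echo child sees: $ENV_VAR', shell=True)

    # But after this script exits, the shell you started it from
    # is the *parent*, not a child, so ENV_VAR is gone there.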
Selenium Clicks but Doesn't Select Question: I'm working on making an automated web scraper run on Khan Academy to make offline backups of questions using selenium and a python scraper (to come later). I'm currently working on getting it to answer questions (right or wrong doesn't matter) to proceed through exercises. Unfortunately, selenium's .click() function doesn't actually select an answer. I think it has something to do with being pointed at the wrong object but I can't tell. It currently highlights the option, but doesn't select it. [HTML for a single option (out of 4)](http://pastebin.com/RXwDUAAw) I made some code to reproduce the error, and hooked it up to a test account for all your debugging needs. Thanks. from selenium import webdriver from selenium.webdriver.common.keys import Keys driver = webdriver.Firefox() # gets us to the SAT Math Exercise page driver.get('https://www.khanacademy.org/mission/sat/tasks/5505307403747328') # these next lines just automate logging in. Nothing special. login = driver.find_element_by_name('identifier') login.send_keys('stackflowtest') # look, I made a new account just for you guys passw = driver.find_element_by_name('password') passw.send_keys('stackoverflow') button = driver.find_elements_by_xpath("//*[contains(text(), 'Sign in')]") button[1].click() driver.implicitly_wait(5) # wait for things to become visible radio = driver.find_element_by_class_name('perseus-radio-option') radio.click() check = driver.find_element_by_xpath("//*[contains(text(), 'Check answer')]") check.click() Answer: After a trial and error process, I found that the actual click selection can be accomplished by pointing to the element with class "description" from selenium import webdriver from selenium.webdriver.common.keys import Keys driver = webdriver.Firefox() # gets us to the SAT Math Exercise page driver.get('https://www.khanacademy.org/mission/sat/tasks/5505307403747328') # these next lines just automate logging in. Nothing special. login = driver.find_element_by_name('identifier') login.send_keys('stackflowtest') # look, I made a new account just for you guys passw = driver.find_element_by_name('password') passw.send_keys('stackoverflow') button = driver.find_elements_by_xpath("//*[contains(text(), 'Sign in')]") button[1].click() driver.implicitly_wait(5) # wait for things to become visible radio = driver.find_element_by_class_name('description') radio.click() check = driver.find_element_by_xpath("//*[contains(text(), 'Check answer')]") check.click() For people dealing with similar issues, I would recommend clicking on the very edge of the space that allows you to select and inspecting the element there. This prevents you from accidentally using one of the innermost tags.
Retrieve Cookie From Akka HttpResponse Question: I'm trying to retrieve a cookie from an Akka HttpResponse:

    val httpRequest = HttpRequest(method = HttpMethods.POST, uri = uri, entity = params)
    val responseFuture: Future[HttpResponse] = Http().singleRequest(HttpRequest(uri = uri))

    responseFuture.flatMap { response =>
      println(response.entity)
      response.headers.collect {
        case hc =>
          println(hc)
      }
    }

However I cannot find the cookie value in either the response entity or the response headers. I believe that the cookie jar should have been supported already in akka: <https://github.com/spray/spray/pull/311>

Does anyone have an idea how I can retrieve the cookie using akka? Thanks in advance! Here's how I've done it in Python:

    cookie_jar = cookielib.CookieJar()
    non_redirecting_opener = urllib2.build_opener(NoRedirectionProcessor,
                                                  urllib2.HTTPCookieProcessor(cookie_jar))
    response = non_redirecting_opener.open(request)
    cookies = {cookie.name: cookie for cookie in cookie_jar}

Answer: It'll do the trick:

    import akka.http.scaladsl.model.headers._

    val responseFuture: Future[HttpResponse] =
        Http(context.system).singleRequest(HttpRequest(uri = "http://localhost:8080"))

    responseFuture.onComplete(response => {
      val cookies = response.get.headers.collect {
        case c: `Set-Cookie` => c.cookie
      }
      println(cookies)
    })

But you should not operate on the Future directly; pipe the result through the actor system instead.
How to transform user input string into correct object type Question: I am using Python (2.7) along with the Natural Language Toolkit (3.2.1) and WordNet. I am _very_ new to programming. I am trying to write a program which asks the user for a word, then prints synonym sets for that word, then asks the user which synonym set it wants to see the lemmas for. The problem is that `raw_input` returns a string, so when I try to use the method `.lemma_names()` on the user input, I get the error `AttributeError: 'str' object has no attribute 'lemma_names'`.

Here is the code:

    from nltk.corpus import wordnet as wn

    w1 = raw_input ("What is the word? ")

    #This prints the synsets for w1, thus showing them what format to use in the next question.
    for synset in wn.synsets(w1):
        print synset

    #This asks the user to choose the synset of w1 that interests them.
    synset1 = raw_input ("Which sense are you looking for? [Use same format as above]")

    #This prints the lemmas from the synset of interest.
    for x in synset1.lemma_names():
        print x

My question is, how do I transform the user's input from a string to a synset type on which I can use the `.lemma_names()` method? I apologize if this question is so basic as to be off-topic. If so, let me know. Answer: Try this:

    from nltk.corpus import wordnet as wn

    w1 = raw_input ("What is the word? ")
    synset_dict = dict()

    for synset in wn.synsets(w1):
        name = synset.name()
        synset_dict[name] = synset
        print name

    synset1 = raw_input ("Which sense are you looking for? [Use same format as above] ")

    if synset1 in synset_dict:
        synset = synset_dict[synset1]
        for lemma in synset.lemma_names():
            print lemma
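Alternatively, assuming the user types the synset name in NLTK's canonical form (e.g. `dog.n.01`), `wn.synset()` can look the Synset up directly without building a dict:

    from nltk.corpus import wordnet as wn

    w1 = raw_input("What is the word? ")
    for synset in wn.synsets(w1):
        print synset.name()   # prints e.g. dog.n.01

    name = raw_input("Which sense are you looking for? [e.g. dog.n.01] ")
    synset = wn.synset(name)  # raises an error on an unknown name
    for lemma in synset.lemma_names():
        print lemma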
Python Avro writer.append doesn't work when a json string is passed as a variable. Question: Avro Schema file: user.avsc

    {"namespace": "example.avro",
     "type": "record",
     "name": "User",
     "fields": [
         {"name": "TransportProtocol", "type": "string"}
     ]
    }

Pasting my code snippet that works:-

    import json
    from avro import schema, datafile, io
    import avro.schema
    from avro.datafile import DataFileReader, DataFileWriter
    from avro.io import DatumReader, DatumWriter

    schema = avro.schema.parse(open("user.avsc").read())
    writer = DataFileWriter(open("users.avro", "w"), DatumWriter(), schema)
    writer.append({"TransportProtocol": "udp"})
    writer.close()

Pasting my code snippet that doesn't work:-

    dummy_json = '{"TransportProtocol": "udp"}'

    schema = avro.schema.parse(open("user.avsc").read())
    writer = DataFileWriter(open("users.avro", "w"), DatumWriter(), schema)
    writer.append(dummy_json)
    writer.close()

When I pass the json inline in the append function, it works and I get the desired avro output. But if I assign the json string to a variable and then try to pass that variable in the append function, it doesn't work and throws an error:

    avro.io.AvroTypeException: The datum {"TransportProtocol": "udp"} is not an example of the schema {

Any help? Thanks Answer: I think that might be due to the fact that in your first example you actually pass a dictionary, `{"TransportProtocol": "udp"}`, not a string. But in the second one, you pass a string, `'{"TransportProtocol": "udp"}'`. Check this out (<http://avro.apache.org/docs/1.7.6/gettingstartedpython.html>):

> We use DataFileWriter.append to add items to our data file. Avro records are represented as Python dicts.

So basically, you can't pass a string as the datum.
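So if the data really arrives as a JSON string, the fix is to parse it into a dict first. A minimal sketch:

    import json

    dummy_json = '{"TransportProtocol": "udp"}'
    record = json.loads(dummy_json)  # now a dict: {u'TransportProtocol': u'udp'}
    writer.append(record)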
How to use a python function from another module Question: I am trying to import a module and use a function from it in my current Python file. When I run nosetests on the parser_tests.py file, it fails with "name 'parse_subject' not defined", i.e. it's not finding the parse_subject function, which is clearly defined in the parsrer.py file.

This is the parsrer file:

    def peek(word_list):
        if word_list:
            word = word_list[0]
            return word[0]
        else:
            return None

    #Confirms that the expected word is the right type,
    def match(word_list, expecting):
        if word_list:
            word = word_list.pop(0)

            if word[0] == expecting:
                return word
            else:
                return None
        else:
            return None

    def skip(word_list, word_type):
        while peek(word_list) == word_type:
            match(word_list, word_type)

    def parse_verb(word_list):
        skip(word_list, 'stop')

        if peek(word_list) == 'verb':
            return match(word_list, 'verb')
        else:
            raise ParserError("Expected a verb next.")

    def parse_object(word_list):
        skip(word_list, 'stop')
        next_word = peek(word_list)

        if next_word == 'noun':
            return match(word_list, 'noun')
        elif next_word == 'direction':
            return match(word_list, 'direction')
        else:
            raise ParserError("Expected a noun or direction next.")

    def parse_subject(word_list):
        skip(word_list, 'stop')
        next_word = peek(word_list)

        if next_word == 'noun':
            return match(word_list, 'noun')
        elif next_word == 'verb':
            return ('noun', 'player')
        else:
            raise ParserError("Expected a verb next.")

    def parse_sentence(word_list):
        subj = parse_subject(word_list)
        verb = parse_verb(word_list)
        obj = parse_object(word_list)

        return Sentence(subj, verb, obj)

This is my tests file

    from nose.tools import *
    from nose.tools import assert_equals
    import sys
    sys.path.append("h:/projects/projectx48/ex48")
    import parsrer

    def test_subject():
        word_list = [('noun', 'bear'), ('verb', 'eat'), ('stop', 'the'), ('noun', 'honey')]
        assert_equals(parse_subject(word_list), ('noun','bear'))

Answer: # Import Module

You can either import the whole module, or you can import the specific function using the `from` keyword. Since the function lives in `parsrer.py`, your test file needs:

    from parsrer import parse_subject

    # and then you can invoke the function directly
    parse_subject(word_list)
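For completeness, the plain `import parsrer` you already have also works; the call just has to be qualified. A sketch of both spellings in the test file:

    # Option 1: qualified access through the module object
    import parsrer
    parsrer.parse_subject(word_list)

    # Option 2: bind the name directly, so the bare call resolves
    from parsrer import parse_subject
    parse_subject(word_list)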
Python: replace string with regex Question: My problem is that I have a huge number of files. Each xml file contains an ID, and I have a set of source and target files. A source file has name A and ID B. A target file has name B and ID B.

What I need to do is match the source ID B with the target name B, and then replace the target ID B with the source name A. Hope it's clear. Here is my code:

    import os
    import re

    sourcepath = input('Path to source folder:\n')
    targetpath = input('Path to target folder:\n')

    for root,dir,source in os.walk(sourcepath):
        for each_file in source:
            os.chdir(root)
            correctID = each_file[:16]
            each_xml = open(each_file, 'r', encoding='utf8').read()
            findsourceID = re.findall('id="\w{3}\d{13}"', each_xml)
            StringID = str(findsourceID)
            correctFilename = StringID[6:22]
            IDtoreplace = 'id="' + correctID + '"'
            print(IDtoreplace)
            for main,folder,target in os.walk(targetpath):
                for each_target in target:
                    os.chdir(main)
                    targetname = each_target[:16]
                    if targetname == correctFilename:
                        with open(each_target, 'r+', encoding='utf8') as each_targ:
                            each_targ.read()
                            findtargetID = re.sub('id="\w{3}\d{13}"',str(IDtoreplace), each_targ)
                            each_targ.close()

And here is the error:

    File "C:/Users/ms/Desktop/Project/test.py", line 23, in <module>
        findtargetID = re.sub('id="\w{3}\d{13}"',str(IDtoreplace), each_targ)
    File "C:\Users\ms\AppData\Local\Programs\Python\Python35\lib\re.py", line 182, in sub
        return _compile(pattern, flags).sub(repl, string, count)
    TypeError: expected string or bytes-like object

Answer: You `read()` from `each_targ`, but you don't store the string anywhere. Instead you pass the file handle `each_targ` to `.sub`, and that causes the type mismatch here. You could just say:

    findtargetID = re.sub('id="\w{3}\d{13}"',str(IDtoreplace), each_targ.read())
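Note that `re.sub` only returns the new string; nothing is written back to the file yet. A sketch of the full inner block with the write-back added (same variables as in the question; `IDtoreplace` is already a string, so the `str()` wrapper is dropped):

    with open(each_target, 'r+', encoding='utf8') as each_targ:
        content = each_targ.read()
        new_content = re.sub('id="\w{3}\d{13}"', IDtoreplace, content)
        each_targ.seek(0)        # rewind before overwriting
        each_targ.write(new_content)
        each_targ.truncate()     # drop any leftover tail if the file shrank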
Django tutorial part 3 - NoReverseMatch at /polls/ Question: I have been following the [Django tutorial part 3](https://docs.djangoproject.com/en/1.10/intro/tutorial03/) and am getting the following error when I attempt to view <http://localhost:8000/polls/>:

**Reverse for 'detail' with arguments '('',)' and keyword arguments '{}' not found. 1 pattern(s) tried: [u'polls/(?P<question_id>[0-9]+)/$']**

My files are as follows:

mysite/urls.py:

    from django.conf.urls import include, url
    from django.contrib import admin

    urlpatterns = [
        url(r'^polls/', include('polls.urls', namespace="polls")),
        url(r'^admin/', admin.site.urls),
    ]

polls/urls.py:

    from django.conf.urls import url
    from . import views

    app_name = 'polls'
    urlpatterns = [
        url(r'^$', views.index, name='index'),
        url(r'^(?P<question_id>[0-9]+)/$', views.detail, name='detail'),
        url(r'^(?P<question_id>[0-9]+)/results/$', views.results, name='results'),
        url(r'^(?P<question_id>[0-9]+)/vote/$', views.vote, name='vote'),
    ]

polls/detail.html:

    <h1>{{ question.question_text }}</h1>
    <ul>
    {% for choice in question.choice_set.all %}
        <li>{{ choice.choice_text }}</li>
    {% endfor %}
    </ul>

polls/templates/polls/index.html:

    <li><a href="{% url 'polls:detail' question.id %}">{{ question.question_text }}</a></li>

**What does this error mean?**

**How do I debug it?**

**Can you suggest a fix?**

N.B. I have seen and tried the answers to these similar questions:

[NoReverseMatch at /polls/ (django tutorial)](http://stackoverflow.com/questions/33151816/noreversematch-at-polls-django-tutorial?rq=1)

[Django 1.8.2 -- Tutorial Chapter 3 -- Error: NoReverseMatch at /polls/ -- Python 3.4.3](http://stackoverflow.com/questions/31103954/django-1-8-2-tutorial-chapter-3-error-noreversematch-at-polls-python)

[NoReverseMatch - Django 1.7 Beginners tutorial](http://stackoverflow.com/questions/27645132/noreversematch-django-1-7-beginners-tutorial?rq=1)

[Django: Reverse for 'detail' with arguments '('',)' and keyword arguments '{}' not found](http://stackoverflow.com/questions/19336076/django-reverse-for-detail-with-arguments-and-keyword-arguments-n#19336837)

<https://groups.google.com/forum/#!msg/django-users/etSR78dgKBo/euSYcSyMCgAJ>

<https://www.reddit.com/r/django/comments/3d43gb/noreversematch_at_polls1results_in_django/>

Edit: I initially missed the following question. Its excellent answer partially answers my question (how to debug) but does not cover my specific problem. [What is a NoReverseMatch error, and how do I fix it?](http://stackoverflow.com/questions/38390177/what-is-a-noreversematch-error-and-how-do-i-fix-it) Answer: This was the problem: polls/templates/polls/index.html should have been:

    {% if latest_question_list %}
    <ul>
    {% for question in latest_question_list %}
        <li><a href="{% url 'polls:detail' question.id %}">{{ question.question_text }}</a></li>
    {% endfor %}
    </ul>
    {% else %}
        <p>No polls are available.</p>
    {% endif %}

I had inadvertently replaced the entire file with the following line, rather than just updating the relevant line (implied [here](https://docs.djangoproject.com/en/1.10/intro/tutorial03/#namespacing-url-names)):

    <li><a href="{% url 'polls:detail' question.id %}">{{ question.question_text }}</a></li>

As stated by @Sayse in the comments, this would mean that question.id is empty, resulting in the error.
How can I record a sound and play it back after a user-defined delay in Python? Question: I'm looking for Python code that records a sound and plays it back after a certain delay (for example 10 seconds). In other words, I would like to constantly hear (on my headphones) what's going on outside, but with a certain delay. I found a Python script on GitHub (<https://gist.github.com/larsyencken/5641402>) that is supposed to do what I'm looking for. However, when I run the script, playback starts after 5 seconds (the default delay), but it records everything around and plays it back in real time (without any delay). Answer: Here is an example using `sounddevice`, although you can do it using other `audio/sound` modules as well. The example below records audio from the microphone for the number of seconds given by the `duration` variable, which you can modify as per your requirements. The same content is then played back through the standard audio output (speakers). More on this [here](http://python-sounddevice.readthedocs.io/en/0.3.4/)

**Working Code**

    import sounddevice as sd
    import numpy as np
    import scipy.io.wavfile as wav

    fs=44100
    duration = 10  # seconds
    myrecording = sd.rec(duration * fs, samplerate=fs, channels=2, dtype='float64')
    print "Recording Audio for %s seconds" %(duration)
    sd.wait()
    print "Audio recording complete , Playing recorded Audio"
    sd.play(myrecording, fs)
    sd.wait()
    print "Play Audio Complete"

**Output**

    Python 2.7.9 (default, Dec 10 2014, 12:24:55) [MSC v.1500 32 bit (Intel)] on win32
    Type "copyright", "credits" or "license()" for more information.
    >>> ================================ RESTART ================================
    >>>
    Recording Audio for 10 seconds
    Audio recording complete , Playing recorded Audio
    Play Audio Complete
    >>>
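The record-then-play example above still has a gap between recording and playback. For truly continuous monitoring with a fixed delay, a duplex stream with a ring buffer is one approach. A rough sketch under stated assumptions: a full-duplex audio device, and that a 10-second block queue fits in memory; the block size is an arbitrary choice:

    import numpy as np
    import sounddevice as sd
    from collections import deque

    fs = 44100
    delay_seconds = 10
    blocksize = 1024
    channels = 2

    # pre-fill with silence so output lags input by roughly delay_seconds
    n_blocks = int(delay_seconds * fs / blocksize)
    ring = deque([np.zeros((blocksize, channels), dtype='float32')] * n_blocks)

    def callback(indata, outdata, frames, time, status):
        if status:
            print status
        ring.append(indata.copy())   # newest audio in...
        outdata[:] = ring.popleft()  # ...oldest audio out

    with sd.Stream(samplerate=fs, blocksize=blocksize, channels=channels,
                   dtype='float32', callback=callback):
        sd.sleep(60 * 1000)  # monitor for one minute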
Transform 3D polygon to 2D, perform clipping, and transform back to 3D Question: My problem is that I have two walls, represented as planar polygons in 3D space (`wallA` and `wallB`). These walls are overlapping. I need to convert that into three wall sections, one for `wallA.intersect(wallB)`, one for `wallA.diff(wallB)`, and one for `wallB.diff(wallA)`.

What I think I need to do is rotate them both into 2D space, without changing their overlaps, perform the clipping to identify the diffs and intersect, then rotate the new walls back into the original plane. The walls are not necessarily vertical, otherwise the problem might be simpler.

The clipping part of my problem is easily solved in 2D, using `pyclipper`. What I'm having trouble with is the algorithm for recoverably rotating the walls into 2D. From what I can understand, it is something similar to but not exactly the same as the steps in [this question](http://stackoverflow.com/questions/6023166/rotating-a-3d-polygon-into-xy-plane-while-maintaining-orientation). I've looked at [`transforms3D`](https://pypi.python.org/pypi/transforms3d), which looks really useful, but can't quite understand which function, or what combination of them, I need to use to reproduce that algorithm.

Here's an example of what I'm trying to achieve, using a really simple example of a pair of 2 x 2 vertical surfaces that have an overlapping 1 x 1 square in one corner.

    import pyclipper as pc

    wallA= [(0,0,2), (2,0,2), (2,0,0), (0,0,0)]
    wallB = [(1,0,3), (3,0,3), (3,0,1), (1,0,1)]

    expected_overlaps = [[(1,0,2), (2,0,2), (2,0,1), (1,0,1)]]

    wallA_2d = transform_to_2D(wallA, <whatever else is needed>)
    wallB_2d = transform_to_2D(wallB, <whatever else is needed>)

    scaledA = pc.scale_to_clipper(wallA_2d)
    scaledB = pc.scale_to_clipper(wallB_2d)

    clipper = pc.Pyclipper()
    clipper.AddPath(scaledA, poly_type=pc.PT_SUBJECT, closed=True)
    clipper.AddPath(scaledB, poly_type=pc.PT_CLIP, closed=True)

    # just showing the intersection - differences are handled similarly
    intersections = clipper.Execute(
        pc.CT_INTERSECTION, pc.PFT_NONZERO, pc.PFT_NONZERO)
    intersections = [pc.scale_from_clipper(i) for i in intersections]

    overlaps = [transform_to_3D(i, <whatever else is needed>) for i in intersections]

    assert overlaps == expected_overlaps

What I'm looking for is an explanation of the steps required to write `transform_to_2d` and `transform_to_3d`. Answer: Rather than rotating, you can simply project. The key is to map the 3d space onto a 2d plane in a way that you can then reverse. (Any distortion resulting from the projection will be undone when you map back.) To do this, you should first find the plane that contains both of your walls. Here is some example code:

    wallA = [(0,0,2), (2,0,2), (2,0,0), (0,0,0)]
    wallB = [(1,0,3), (3,0,3), (3,0,1), (1,0,1)]

    v = (0, 1, 0) # the normal vector
    a = 0 # a number so that v[0] * x + v[1] * y + v[2] * z = a is the equation of the plane containing your walls
    # To calculate the normal vector in general,
    # you would take the cross product of any two
    # vectors in the plane of your walls, e.g.
    # (wallA[1] - wallA[0]) X (wallA[2] - wallA[0]).
    # You can then solve for a.

    proj_axis = max(range(3), key=lambda i: abs(v[i]))
    # this just needs to be any number such that v[proj_axis] != 0

    def project(x):
        # Project onto either the xy, yz, or xz plane. (We choose the one that
        # avoids degenerate configurations, which is the purpose of proj_axis.)
        # In this example, we would be projecting onto the xz plane.
        return tuple(c for i, c in enumerate(x) if i != proj_axis)

    def project_inv(x):
        # Returns the vector w in the walls' plane such that project(w) equals x.
        w = list(x)
        w[proj_axis:proj_axis] = [0.0]
        c = a
        for i in range(3):
            c -= w[i] * v[i]
        c /= v[proj_axis]
        w[proj_axis] = c
        return tuple(w)

    projA = [project(x) for x in wallA]
    projB = [project(x) for x in wallB]

    proj_intersection = intersection2d(projA, projB)  # use your 2d algorithm here

    intersection3d = [project_inv(x) for x in proj_intersection]
    # this is your intersection in 3d; you can do similar things for the other pieces
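A quick sanity check of the pair of maps, using the walls from the question (this is not in the original answer):

    # project_inv should undo project for any point on the walls' plane
    for p in wallA + wallB:
        q = project_inv(project(p))
        assert all(abs(a_ - b_) < 1e-9 for a_, b_ in zip(p, q))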
Mock entire python class Question: I'm trying to make a simple test in Python, but I'm not able to figure out how to accomplish the mocking process.

This is the class and def code:

    class FileRemoveOp(...)
        @apply_defaults
        def __init__(
                self,
                source_conn_keys,
                source_conn_id='conn_default',
                *args, **kwargs):
            super(v4FileRemoveOperator, self).__init__(*args, **kwargs)
            self.source_conn_keys = source_conn_keys
            self.source_conn_id = source_conn_id

    def execute (self, context)
        source_conn = Connection(conn_id)
        try:
            for source_conn_key in self.source_keys:
                if not source_conn.check_for_key(source_conn_key):
                    logging.info("The source key does not exist")
                source_conn.remove_file(source_conn_key,'')
        finally:
            logging.info("Remove operation successful.")

And this is my test for the **execute** function:

    @mock.patch('main.Connection')
    def test_remove_execute(self,MockConn):
        mock_coon = MockConn.return_value
        mock_coon.value = #I'm not sure what to put here#
        remove_operator = FileRemoveOp(...)
        remove_operator.execute(self)

Since the **execute** method tries to make a connection, I need to mock that; I don't want to make a real connection, just return a mock. How can I do that? I'm used to testing in Java, but I've never done it in Python. Answer: First, it is very important to understand that you always need to mock where the thing you are trying to mock out is *used*, as stated in the `unittest.mock` documentation.

> The basic principle is that you patch where an object is looked up, which is
> not necessarily the same place as where it is defined.

Next, what you need to do is return a `MagicMock` instance as the `return_value` of the patched object. So to summarize, you need the following sequence:

 * Patch the object
 * Prepare the `MagicMock` to be used
 * Return the `MagicMock` we've just created as the `return_value`

Here is a quick example of a project.

**connection.py (Class we would like to Mock)**

    class Connection(object):

        def execute(self):
            return "Connection to server made"

**file.py (Where the Class is used)**

    from project.connection import Connection

    class FileRemoveOp(object):
        def __init__(self, foo):
            self.foo = foo

        def execute(self):
            conn = Connection()
            result = conn.execute()
            return result

**tests/test_file.py**

    import unittest
    from unittest.mock import patch, MagicMock
    from project.file import FileRemoveOp

    class TestFileRemoveOp(unittest.TestCase):

        def setUp(self):
            self.fileremoveop = FileRemoveOp('foobar')

        @patch('project.file.Connection')
        def test_execute(self, connection_mock):
            # Create a new MagickMock instance which will be the
            # `return_value` of our patched object
            connection_instance = MagicMock()
            connection_instance.execute.return_value = "testing"

            # Return the above created `connection_instance`
            connection_mock.return_value = connection_instance

            result = self.fileremoveop.execute()
            expected = "testing"
            self.assertEqual(result, expected)

        def test_not_mocked(self):
            # No mocking involved will execute the `Connection.execute` method
            result = self.fileremoveop.execute()
            expected = "Connection to server made"
            self.assertEqual(result, expected)
How to live stream webcam using python-flask and opencv? Question: I want to set up a web server using python-flask. I tried to follow the tutorial from [chioka.in](http://www.chioka.in/python-live-video-streaming-example/), but when I run the app it initializes; however, when I try to access the address from localhost I get this error:

    * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
    127.0.0.1 - - [18/Aug/2016 02:15:46] "GET / HTTP/1.1" 200 -
    Traceback (most recent call last):
      File "main.py", line 22, in <module>
        app.run()
      File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 843, in run
        run_simple(host, port, self, **options)
      File "/usr/local/lib/python2.7/dist-packages/werkzeug/serving.py", line 694, in run_simple
        inner()
      File "/usr/local/lib/python2.7/dist-packages/werkzeug/serving.py", line 659, in inner
        srv.serve_forever()
      File "/usr/local/lib/python2.7/dist-packages/werkzeug/serving.py", line 499, in serve_forever
        HTTPServer.serve_forever(self)
      File "/usr/lib/python2.7/SocketServer.py", line 238, in serve_forever
        self._handle_request_noblock()
      File "/usr/lib/python2.7/SocketServer.py", line 297, in _handle_request_noblock
        self.handle_error(request, client_address)
      File "/usr/lib/python2.7/SocketServer.py", line 295, in _handle_request_noblock
        self.process_request(request, client_address)
      File "/usr/lib/python2.7/SocketServer.py", line 321, in process_request
        self.finish_request(request, client_address)
      File "/usr/lib/python2.7/SocketServer.py", line 334, in finish_request
        self.RequestHandlerClass(request, client_address, self)
      File "/usr/lib/python2.7/SocketServer.py", line 655, in __init__
        self.handle()
      File "/usr/local/lib/python2.7/dist-packages/werkzeug/serving.py", line 216, in handle
        rv = BaseHTTPRequestHandler.handle(self)
      File "/usr/lib/python2.7/BaseHTTPServer.py", line 340, in handle
        self.handle_one_request()
      File "/usr/local/lib/python2.7/dist-packages/werkzeug/serving.py", line 251, in handle_one_request
        return self.run_wsgi()
      File "/usr/local/lib/python2.7/dist-packages/werkzeug/serving.py", line 193, in run_wsgi
        execute(self.server.app)
      File "/usr/local/lib/python2.7/dist-packages/werkzeug/serving.py", line 183, in execute
        for data in application_iter:
      File "/usr/local/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 703, in __next__
        return self._next()
      File "/usr/local/lib/python2.7/dist-packages/werkzeug/wrappers.py", line 81, in _iter_encoded
        for item in iterable:
      File "main.py", line 12, in gen
        frame = camera.get_frame()
      File "/media/moithil/STORAGE/PROJECTS/PROJECTS/webcam Server/camera.py", line 24, in get_frame
        return jpeg.tobytes()
    AttributeError: 'numpy.ndarray' object has no attribute 'tobytes'

How do I convert this jpeg into bytes to return it properly? The code blocks with the error are:

> from camera.py file

    import cv2

    class VideoCamera(object):
        def __init__(self):
            self.video = cv2.VideoCapture(0)

        def __del__(self):
            self.video.release()

        def get_frame(self):
            success, image = self.video.read()
            ret, jpeg = cv2.imencode('.jpg', image)
            return jpeg.tobytes()

> from main.py file

    @app.route('/')
    def index():
        return render_template('index.html')

    def gen(camera):
        while True:
            frame = camera.get_frame()
            yield (b'--frame\r\n'
                   b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')

    @app.route('/video_feed')
    def video_feed():
        return Response(gen(VideoCamera()),
                        mimetype='multipart/x-mixed-replace; boundary=frame')

Answer: Your numpy version is probably < 1.9.
According to the release notes (<http://docs.scipy.org/doc/numpy/release.html>):

> ndarray.tobytes and MaskedArray.tobytes have been added as aliases for tostring, which exports arrays as bytes. This is more consistent in Python 3, where str and bytes are not the same.

Try using tostring instead, or upgrade numpy to the latest version.
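A minimal sketch of the suggested workaround applied to the `camera.py` method (assuming the rest of the class stays as posted):

    def get_frame(self):
        success, image = self.video.read()
        ret, jpeg = cv2.imencode('.jpg', image)
        # tobytes() only exists as an alias of tostring() from numpy 1.9 on,
        # so fall back to tostring() on older versions
        return jpeg.tostring()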
Create Nested JSON from Pandas for Org Chart Question: I'm trying to create a nested JSON object from a hierarchical DataFrame (python 3.5) to feed into JavaScript to render an Org Chart. I'm essentially trying to create the structure found in the answer to this question: [Organization chart - tree, online, dynamic, collapsible, pictures - in D3](http://stackoverflow.com/questions/30926539/organization-chart-tree-online-dynamic-collapsible-pictures-in-d3)

An example dataframe:

    df = pd.DataFrame({\
    'Manager_Name':['Mike' ,'Jon', 'Susan' ,'Susan' ,'Joe'],\
    'Manager_Title':['Level1' ,'Level2' ,'Level3' ,"Level3", 'Level4'],\
    'Employee_Name':['Jon' ,'Susan' ,'Josh' ,'Joe' ,'Jimmy'],\
    'Employee_Title':["Level2" ,"Level3" ,"Level4" ,"Level4" ,"Level5"]})

The desired output would be:

    "Name": "Mike",
    "Title": "Level1",
    "Employees": [{
        "Name": "Jon",
        "Title": "Level2",
        "Employees": [{
            "Name": "Susan",
            "Title": "Level3",
            "Employees": [{
                ...
                ...
                ...
            }]
        }]
    }]

I know this isn't a code-generating service, but I've tried applying other similarly related answers and can't seem to apply them here. I also haven't worked with dictionaries that much (I'm more of an R person), so there's probably some noobishness to this question. I've spent more time than I should on this, yet I'm sure someone here can do it in a few minutes.

Other questions:

* [pandas groupby to nested json](http://stackoverflow.com/questions/24374062/pandas-groupby-to-nested-json)
* [Creating nested Json structure with multiple key values in Python from Json](http://stackoverflow.com/questions/23255512/creating-nested-json-structure-with-multiple-key-values-in-python-from-json)
* [How to build a JSON file with nested records from a flat data table?](http://stackoverflow.com/questions/37713329/how-to-build-a-json-file-with-nested-records-from-a-flat-data-table?noredirect=1&lq=1)

Thanks in advance! Answer: Consider filtering the dataframe by Level and converting the filtered dfs to dictionaries with pandas [`to_dict()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_dict.html), which are continually rolled into one list across levels. The function defined below walks from the last level to the first to roll up the individual Employee Level dictionaries. But first, you should concatenate the Manager and Employee _Name_ and _Title_ columns.
import json import pandas as pd cdf = pd.concat([df[['Manager_Name', 'Manager_Title']].\ rename(index=str, columns={'Manager_Name':'Name', 'Manager_Title':'Title'}), df[['Employee_Name', 'Employee_Title']].\ rename(index=str, columns={'Employee_Name':'Name', 'Employee_Title':'Title'})]) cdf = cdf.drop_duplicates().reset_index(drop=True) print(cdf) # Name Title # 0 Mike Level1 # 1 Jon Level2 # 2 Susan Level3 # 3 Joe Level4 # 4 Josh Level4 # 5 Jimmy Level5 def jsondict(): inner = [''] for i in ['Level5', 'Level4', 'Level3', 'Level2']: if i == 'Level5': inner[0] = cdf[cdf['Title']==i].to_dict(orient='records') else: tmp = cdf[cdf['Title']==i].copy().reset_index(drop=True) if len(tmp) == 1: tmp['Employees'] = [inner[0]] else: for d in range(0,len(tmp)): tmp.ix[d, 'Employees'] = [inner[0]] lvltemp = tmp.to_dict(orient='records') inner[0] = lvltemp return(inner) jsondf = cdf[cdf['Title']=='Level1'].copy() jsondf['Employees'] = jsondict() jsondata = jsondf.to_json(orient='records') **Output** [{"Name":"Mike","Title":"Level1","Employees": [{"Name":"Jon","Title":"Level2","Employees": [{"Name":"Susan","Title":"Level3","Employees": [{"Name":"Joe","Title":"Level4","Employees": [{"Name":"Jimmy","Title":"Level5"}]}, {"Name":"Josh","Title":"Level4","Employees": [[{"Name":"Jimmy","Title":"Level5"}]]}]}]}]}] Or pretty printed [ { "Name": "Mike", "Title": "Level1", "Employees": [ { "Name": "Jon", "Title": "Level2", "Employees": [ { "Name": "Susan", "Title": "Level3", "Employees": [ { "Name": "Joe", "Title": "Level4", "Employees": [ { "Name": "Jimmy", "Title": "Level5" } ] }, { "Name": "Josh", "Title": "Level4", "Employees": [ [ { "Name": "Jimmy", "Title": "Level5" } ] ] } ] } ] } ] } ]
Changing words in a string using a dictionary. python Question: I have the following message:

    msg = "Cowlishaw Street &amp; Athllon Drive, Greenway now free of obstruction."

I want to change things such as "Drive" to "Dr" or "Street" to "St":

    expected_msg = "Cowlishaw St and Athllon Dr Greenway now free of obstruction"

I also have a "conversion" dictionary; how do I check whether such a word is in the string, and if so, change it using "conversion"? "conversion" is a dictionary in which a word such as "Drive" acts as a key and the value is "Dr". This is what I have done:

    def convert_message(msg, conversion):
        msg = msg.translate({ord(i): None for i in ".,"})
        tokens = msg.strip().split(" ")
        for x in msg:
            if x in keys (conversion):
        return " ".join(tokens)

Answer: Isn't it simply:

    conversion = {'Drive': 'Dr'}

    for index, token in enumerate(tokens):
        if token in conversion:
            tokens[index] = conversion[token]
    return ' '.join(tokens)

However, this wouldn't work on sentences like `"Obstruction on Cowlishaw Street."` since the token now would be `Street.`. Perhaps you should use a regular expression with [`re.sub`](https://docs.python.org/3/library/re.html#re.sub):

    import re

    def convert_message(msg, conversion):
        def translate(match):
            word = match.group(0)
            if word in conversion:
                return conversion[word]
            return word

        return re.sub(r'\w+', translate, msg)

Here the `re.sub` finds 1 or more consecutive (`+`) alphanumeric characters (`\w`); for each such regular expression match it calls the given function, giving the match as a parameter. The matched word can be retrieved with `match.group(0)`. The function should return a replacement for the given match - here, if the word is found in the dictionary we return that instead, otherwise the original is returned. Thus:

    >>> msg = "Cowlishaw Street &amp; Athllon Drive, Greenway now free of obstruction."
    >>> convert_message(msg, {'Drive': 'Dr', 'Street': 'St'})
    'Cowlishaw St &amp; Athllon Dr, Greenway now free of obstruction.'

As for the `&amp;`, on Python 3.4+ you should use [`html.unescape`](https://docs.python.org/3/library/html.html#html.unescape) to decode HTML entities:

    >>> import html
    >>> html.unescape('Cowlishaw Street &amp; Athllon Drive, Greenway now free of obstruction.')
    'Cowlishaw Street & Athllon Drive, Greenway now free of obstruction.'

This will take care of _all_ known HTML entities. For older python versions you can see [alternatives on this question](http://stackoverflow.com/questions/2087370/decode-html-entities-in-python-string). The regular expression above does not match the `&` character; if you want to replace it too, we can use the regular expression `\w+|.` which means: "any consecutive run of alphanumeric characters, or any single character that is not in such a run":

    import re
    import html

    def convert_message(msg, conversion):
        msg = html.unescape(msg)

        def translate(match):
            word = match.group(0)
            if word in conversion:
                return conversion[word]
            return word

        return re.sub(r'\w+|.', translate, msg)

Then you can do

    >>> msg = 'Cowlishaw Street &amp; Athllon Drive, Greenway now free of obstruction.'
    >>> convert_message(msg, {'Drive': 'Dr', '&': 'and', 'Street': 'St', '.': '', ',': ''})
    'Cowlishaw St and Athllon Dr Greenway now free of obstruction'
zipfile in Python produces not quite normal ZIP files Question: In my project, a set of files is created and packed into a ZIP archive to be used on an Android mobile phone. An Android application opens such ZIP files to read the initial data and then stores the results of its work in the same ZIPs. I have no access to the source code of the mentioned Android App or to the old script that generated the ZIP files before (actually, I do not know how the old ZIP files were created). But the structure of the ZIP archive is known, and I have written a new python script to make the same files.

I was faced with the following problem: ZIP files produced by my script cannot be opened by the Android App (an error message about incorrect file structure appears), but if I unpack all the contents and pack them back into a new ZIP file with the same name using **WinZip**, **7-Zip** or "**Send to -> Compressed (zipped) folder**" (in Windows 7), the file is processed normally on the phone (this leads me to the conclusion that the problem is not in the Android Application). The code snippet for packing the folder into a ZIP was as follows:

    # make zip
    try:
        with zipfile.ZipFile(prefix + '.zip', 'w') as zipf:
            for root, dirs, files in os.walk(prefix):
                for file in files:
                    zipf.write(os.path.join(root, file))
        # remove dir, that was packed
        shutil.rmtree(prefix)
        # Report about resulting
        print('File ' + prefix + '.zip was created')
    except:
        print('Unexpected error occurred while creating file ' + prefix + '.zip')

After I noticed that the files were not compressed, I added a compression option:

    zipfile.ZipFile(prefix + '.zip', 'w', zipfile.ZIP_DEFLATED)

but this didn't solve my problem, and setting `allowZip64` to `True` also didn't change the situation. By the way, a ZIP file produced with `zipfile.ZIP_DEFLATED` is about 5 kilobytes smaller than the ZIP file produced by Windows and about 14 kilobytes smaller than 7-Zip's result for the same archive content. At the same time, I can open all these ZIP files for visual comparison with both 7-Zip and Windows Explorer.

So I have three related questions:

1) What may cause such strange behavior of my script with `zipfile`?

2) How else can I influence `zipfile`?

3) How can I check a ZIP file created with `zipfile` to find possible structure problems or make sure there are none?

Of course, if I have to give up using `zipfile` I can use an external archiver (e.g. 7-Zip) for packing the files, but I would like to find an elegant solution if it exists.
**UPDATE:** In order to check the content of the ZIP file created with `zipfile`, I did the following:

    # make zip
    flist = []
    try:
        with zipfile.ZipFile(prefix + '.zip', 'w', zipfile.ZIP_DEFLATED) as zipf:
            for root, dirs, files in os.walk(prefix):
                for file in files:
                    zipf.write(os.path.join(root, file))
                    # Store item in the list
                    flist.append(os.path.join(root, file).replace("\\","/"))
        # remove dir, that was packed
        shutil.rmtree(prefix)
        # Report about resulting
        print('File ' + prefix + '.zip was created')
    except:
        print('Unexpected error occurred while creating file ' + prefix + '.zip')

    # Check of zip
    with closing(zipfile.ZipFile(prefix + '.zip')) as zfile:
        for info in zfile.infolist():
            print(info.filename + \
                  ' (extra = ' + str(info.extra) + \
                  '; compress_type = ' + ('ZIP_DEFLATED' if info.compress_type == zipfile.ZIP_DEFLATED else 'NOT ZIP_DEFLATED') + \
                  ')')
            # remove item from list
            if info.filename in flist:
                flist.remove(info.filename)
            else:
                print(info.filename + ' is unexpected item')
    print('Number of items that were missed:')
    print(len(flist))

and I see the following results in the output:

    File en_US_00001.zip was created
    en_US_00001/en_US_00001_0001/en_US_00001_0001_big.png (extra = b''; compress_type = ZIP_DEFLATED)
    en_US_00001/en_US_00001_0001/en_US_00001_0001_info.xml (extra = b''; compress_type = ZIP_DEFLATED)
    en_US_00001/en_US_00001_0001/en_US_00001_0001_small.png (extra = b''; compress_type = ZIP_DEFLATED)
    en_US_00001/en_US_00001_0001/en_US_00001_0001_source.pkl (extra = b''; compress_type = ZIP_DEFLATED)
    en_US_00001/en_US_00001_0001/en_US_00001_0001_source.tex (extra = b''; compress_type = ZIP_DEFLATED)
    en_US_00001/en_US_00001_0001/en_US_00001_0001_user.png (extra = b''; compress_type = ZIP_DEFLATED)
    en_US_00001/en_US_00001_0002/en_US_00001_0002_big.png (extra = b''; compress_type = ZIP_DEFLATED)
    en_US_00001/en_US_00001_0002/en_US_00001_0002_info.xml (extra = b''; compress_type = ZIP_DEFLATED)
    en_US_00001/en_US_00001_0002/en_US_00001_0002_small.png (extra = b''; compress_type = ZIP_DEFLATED)
    en_US_00001/en_US_00001_0002/en_US_00001_0002_source.pkl (extra = b''; compress_type = ZIP_DEFLATED)
    en_US_00001/en_US_00001_0002/en_US_00001_0002_source.tex (extra = b''; compress_type = ZIP_DEFLATED)
    en_US_00001/en_US_00001_0002/en_US_00001_0002_user.png (extra = b''; compress_type = ZIP_DEFLATED)
    en_US_00001/en_US_00001_0003/en_US_00001_0003_big.png (extra = b''; compress_type = ZIP_DEFLATED)
    en_US_00001/en_US_00001_0003/en_US_00001_0003_info.xml (extra = b''; compress_type = ZIP_DEFLATED)
    en_US_00001/en_US_00001_0003/en_US_00001_0003_small.png (extra = b''; compress_type = ZIP_DEFLATED)
    en_US_00001/en_US_00001_0003/en_US_00001_0003_source.pkl (extra = b''; compress_type = ZIP_DEFLATED)
    en_US_00001/en_US_00001_0003/en_US_00001_0003_source.tex (extra = b''; compress_type = ZIP_DEFLATED)
    en_US_00001/en_US_00001_0003/en_US_00001_0003_user.png (extra = b''; compress_type = ZIP_DEFLATED)
    Number of items that were missed:
    0

Thus, everything that was written was then read back, but the question remains whether everything necessary has been written. E.g., in the comments Harold said something about relative paths...
perhaps, it is the key to the answer **UPDATE 2** When I replaced `zipfile` by using external **7-Zip** code # make zip subprocess.call(["7z.exe","a",prefix + ".zip", prefix]) shutil.rmtree(prefix) # Check of zip with closing(zipfile.ZipFile(prefix + '.zip')) as zfile: for info in zfile.infolist(): print(info.filename) print(' (extra = ' + str(info.extra) + '; compress_type = ' + str(info.compress_type) + ')') print('Values for compress_type:') print(str(zipfile.ZIP_DEFLATED) + ' = ZIP_DEFLATED') print(str(zipfile.ZIP_STORED) + ' = ZIP_STORED') produces the following result Creating archive en_US_00001.zip Compressing en_US_00001\en_US_00001_0001\en_US_00001_0001_big.png Compressing en_US_00001\en_US_00001_0001\en_US_00001_0001_info.xml Compressing en_US_00001\en_US_00001_0001\en_US_00001_0001_small.png Compressing en_US_00001\en_US_00001_0001\en_US_00001_0001_source.pkl Compressing en_US_00001\en_US_00001_0001\en_US_00001_0001_source.tex Compressing en_US_00001\en_US_00001_0001\en_US_00001_0001_user.png Compressing en_US_00001\en_US_00001_0002\en_US_00001_0002_big.png Compressing en_US_00001\en_US_00001_0002\en_US_00001_0002_info.xml Compressing en_US_00001\en_US_00001_0002\en_US_00001_0002_small.png Compressing en_US_00001\en_US_00001_0002\en_US_00001_0002_source.pkl Compressing en_US_00001\en_US_00001_0002\en_US_00001_0002_source.tex Compressing en_US_00001\en_US_00001_0002\en_US_00001_0002_user.png Compressing en_US_00001\en_US_00001_0003\en_US_00001_0003_big.png Compressing en_US_00001\en_US_00001_0003\en_US_00001_0003_info.xml Compressing en_US_00001\en_US_00001_0003\en_US_00001_0003_small.png Compressing en_US_00001\en_US_00001_0003\en_US_00001_0003_source.pkl Compressing en_US_00001\en_US_00001_0003\en_US_00001_0003_source.tex Compressing en_US_00001\en_US_00001_0003\en_US_00001_0003_user.png Everything is Ok en_US_00001/ (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x00Faf\xd2Y\xf9\xd1\x01Faf\xd2Y\xf9\xd1\x01%\xc9c\xd2Y\xf9\xd1\x01'; compress_type = 0) en_US_00001/en_US_00001_0001/ (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x00\xbe(e\xd2Y\xf9\xd1\x01\xbe(e\xd2Y\xf9\xd1\x016\xf0c\xd2Y\xf9\xd1\x01'; compress_type = 0) en_US_00001/en_US_00001_0001/en_US_00001_0001_big.png (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x00G\x17d\xd2Y\xf9\xd1\x01G\x17d\xd2Y\xf9\xd1\x01G\x17d\xd2Y\xf9\xd1\x01'; compress_type = 8) en_US_00001/en_US_00001_0001/en_US_00001_0001_info.xml (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x00X>d\xd2Y\xf9\xd1\x01X>d\xd2Y\xf9\xd1\x01X>d\xd2Y\xf9\xd1\x01'; compress_type = 8) en_US_00001/en_US_00001_0001/en_US_00001_0001_small.png (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x00z\x8cd\xd2Y\xf9\xd1\x01ied\xd2Y\xf9\xd1\x01ied\xd2Y\xf9\xd1\x01'; compress_type = 8) en_US_00001/en_US_00001_0001/en_US_00001_0001_source.pkl (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x00\x8b\xb3d\xd2Y\xf9\xd1\x01\x8b\xb3d\xd2Y\xf9\xd1\x01\x8b\xb3d\xd2Y\xf9\xd1\x01'; compress_type = 8) en_US_00001/en_US_00001_0001/en_US_00001_0001_source.tex (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x00\xad\x01e\xd2Y\xf9\xd1\x01\xad\x01e\xd2Y\xf9\xd1\x01\xad\x01e\xd2Y\xf9\xd1\x01'; compress_type = 8) en_US_00001/en_US_00001_0001/en_US_00001_0001_user.png (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x00\xbe(e\xd2Y\xf9\xd1\x01\xbe(e\xd2Y\xf9\xd1\x01\xbe(e\xd2Y\xf9\xd1\x01'; compress_type = 8) en_US_00001/en_US_00001_0002/ (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x005:f\xd2Y\xf9\xd1\x015:f\xd2Y\xf9\xd1\x01\xcfOe\xd2Y\xf9\xd1\x01'; compress_type = 0) 
en_US_00001/en_US_00001_0002/en_US_00001_0002_big.png (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x00\xe0ve\xd2Y\xf9\xd1\x01\xcfOe\xd2Y\xf9\xd1\x01\xcfOe\xd2Y\xf9\xd1\x01'; compress_type = 8) en_US_00001/en_US_00001_0002/en_US_00001_0002_info.xml (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x00\xf1\x9de\xd2Y\xf9\xd1\x01\xe0ve\xd2Y\xf9\xd1\x01\xe0ve\xd2Y\xf9\xd1\x01'; compress_type = 8) en_US_00001/en_US_00001_0002/en_US_00001_0002_small.png (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x00\x02\xc5e\xd2Y\xf9\xd1\x01\x02\xc5e\xd2Y\xf9\xd1\x01\x02\xc5e\xd2Y\xf9\xd1\x01'; compress_type = 8) en_US_00001/en_US_00001_0002/en_US_00001_0002_source.pkl (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x00\x13\xece\xd2Y\xf9\xd1\x01\x13\xece\xd2Y\xf9\xd1\x01\x13\xece\xd2Y\xf9\xd1\x01'; compress_type = 8) en_US_00001/en_US_00001_0002/en_US_00001_0002_source.tex (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x00$\x13f\xd2Y\xf9\xd1\x01$\x13f\xd2Y\xf9\xd1\x01$\x13f\xd2Y\xf9\xd1\x01'; compress_type = 8) en_US_00001/en_US_00001_0002/en_US_00001_0002_user.png (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x005:f\xd2Y\xf9\xd1\x015:f\xd2Y\xf9\xd1\x015:f\xd2Y\xf9\xd1\x01'; compress_type = 8) en_US_00001/en_US_00001_0003/ (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x00\xdf\xc0g\xd2Y\xf9\xd1\x01\xdf\xc0g\xd2Y\xf9\xd1\x01Faf\xd2Y\xf9\xd1\x01'; compress_type = 0) en_US_00001/en_US_00001_0003/en_US_00001_0003_big.png (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x00W\x88f\xd2Y\xf9\xd1\x01W\x88f\xd2Y\xf9\xd1\x01W\x88f\xd2Y\xf9\xd1\x01'; compress_type = 8) en_US_00001/en_US_00001_0003/en_US_00001_0003_info.xml (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x00h\xaff\xd2Y\xf9\xd1\x01h\xaff\xd2Y\xf9\xd1\x01h\xaff\xd2Y\xf9\xd1\x01'; compress_type = 8) en_US_00001/en_US_00001_0003/en_US_00001_0003_small.png (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x00\x9b$g\xd2Y\xf9\xd1\x01y\xd6f\xd2Y\xf9\xd1\x01y\xd6f\xd2Y\xf9\xd1\x01'; compress_type = 8) en_US_00001/en_US_00001_0003/en_US_00001_0003_source.pkl (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x00\xacKg\xd2Y\xf9\xd1\x01\xacKg\xd2Y\xf9\xd1\x01\xacKg\xd2Y\xf9\xd1\x01'; compress_type = 8) en_US_00001/en_US_00001_0003/en_US_00001_0003_source.tex (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x00\xce\x99g\xd2Y\xf9\xd1\x01\xce\x99g\xd2Y\xf9\xd1\x01\xce\x99g\xd2Y\xf9\xd1\x01'; compress_type = 8) en_US_00001/en_US_00001_0003/en_US_00001_0003_user.png (extra = b'\n\x00 \x00\x00\x00\x00\x00\x01\x00\x18\x00\xdf\xc0g\xd2Y\xf9\xd1\x01\xdf\xc0g\xd2Y\xf9\xd1\x01\xdf\xc0g\xd2Y\xf9\xd1\x01'; compress_type = 8) Values for compress_type: 8 = ZIP_DEFLATED 0 = ZIP_STORED As I understand the most important findings are: * items with info for folders (e.g. 
`en_US_00001/`, `en_US_00001/en_US_00001_0001/`), which were not in the ZIP produced by my usage of `zipfile`
* folders have `compress_type == ZIP_STORED`, while files have `compress_type == ZIP_DEFLATED`
* the `extra` fields have different values (quite long strings were generated)

Answer: Based on the differences listed in UPDATE 2 of the question and the examples from [another question about zipfile](http://stackoverflow.com/questions/434641/how-do-i-set-permissions-attributes-on-a-file-in-a-zip-file-using-pythons-zip), I tried the following code to add directories to the ZIP file and check the result:

    # make zip
    try:
        with zipfile.ZipFile(prefix + '.zip', 'w', zipfile.ZIP_DEFLATED) as zipf:
            info = zipfile.ZipInfo(prefix+'\\')
            zipf.writestr(info, '')
            for root, dirs, files in os.walk(prefix):
                for d in dirs:
                    info = zipfile.ZipInfo(os.path.join(root, d)+'\\')
                    zipf.writestr(info, '')
                for file in files:
                    zipf.write(os.path.join(root, file))
        # remove dir, that was packed
        shutil.rmtree(prefix)
        # Report about resulting
        print('File ' + prefix + '.zip was created')
    except:
        print('Unexpected error occurred while creating file ' + prefix + '.zip')

    # Check zip content
    with closing(zipfile.ZipFile(prefix + '.zip')) as zfile:
        for info in zfile.infolist():
            print(info.filename)
            print(' (extra = ' + str(info.extra) + '; compress_type = ' + str(info.compress_type) + ')')

    print('Values for compress_type:')
    print(str(zipfile.ZIP_DEFLATED) + ' = ZIP_DEFLATED')
    print(str(zipfile.ZIP_STORED) + ' = ZIP_STORED')

Output is

    File en_US_00001.zip was created
    en_US_00001/
     (extra = b''; compress_type = 0)
    en_US_00001/en_US_00001_0001/
     (extra = b''; compress_type = 0)
    en_US_00001/en_US_00001_0002/
     (extra = b''; compress_type = 0)
    en_US_00001/en_US_00001_0003/
     (extra = b''; compress_type = 0)
    en_US_00001/en_US_00001_0001/en_US_00001_0001_big.png
     (extra = b''; compress_type = 8)
    en_US_00001/en_US_00001_0001/en_US_00001_0001_info.xml
     (extra = b''; compress_type = 8)
    en_US_00001/en_US_00001_0001/en_US_00001_0001_small.png
     (extra = b''; compress_type = 8)
    en_US_00001/en_US_00001_0001/en_US_00001_0001_source.pkl
     (extra = b''; compress_type = 8)
    en_US_00001/en_US_00001_0001/en_US_00001_0001_source.tex
     (extra = b''; compress_type = 8)
    en_US_00001/en_US_00001_0001/en_US_00001_0001_user.png
     (extra = b''; compress_type = 8)
    en_US_00001/en_US_00001_0002/en_US_00001_0002_big.png
     (extra = b''; compress_type = 8)
    en_US_00001/en_US_00001_0002/en_US_00001_0002_info.xml
     (extra = b''; compress_type = 8)
    en_US_00001/en_US_00001_0002/en_US_00001_0002_small.png
     (extra = b''; compress_type = 8)
    en_US_00001/en_US_00001_0002/en_US_00001_0002_source.pkl
     (extra = b''; compress_type = 8)
    en_US_00001/en_US_00001_0002/en_US_00001_0002_source.tex
     (extra = b''; compress_type = 8)
    en_US_00001/en_US_00001_0002/en_US_00001_0002_user.png
     (extra = b''; compress_type = 8)
    en_US_00001/en_US_00001_0003/en_US_00001_0003_big.png
     (extra = b''; compress_type = 8)
    en_US_00001/en_US_00001_0003/en_US_00001_0003_info.xml
     (extra = b''; compress_type = 8)
    en_US_00001/en_US_00001_0003/en_US_00001_0003_small.png
     (extra = b''; compress_type = 8)
    en_US_00001/en_US_00001_0003/en_US_00001_0003_source.pkl
     (extra = b''; compress_type = 8)
    en_US_00001/en_US_00001_0003/en_US_00001_0003_source.tex
     (extra = b''; compress_type = 8)
    en_US_00001/en_US_00001_0003/en_US_00001_0003_user.png
     (extra = b''; compress_type = 8)
    Values for compress_type:
    8 = ZIP_DEFLATED
    0 = ZIP_STORED

Adding a slash to the directory names (`+'\\'` or `+'/'`) turned out to be mandatory.
And the most important thing: the ZIP file is now properly accepted by the Android application.
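One further refinement, beyond what was needed for the fix above (an assumption, not part of the accepted solution): ZIP entry names are supposed to use forward slashes, and `ZipFile.write` takes an explicit `arcname` argument, so the stored names can be made independent of the OS path separator and of the directory the script is run from:

    import os

    full_path = os.path.join(root, file)
    # store the entry under a normalized, relative name with forward slashes
    arcname = os.path.relpath(full_path).replace(os.sep, '/')
    zipf.write(full_path, arcname)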
(CentOS 6.6) Before updating to Python 2.7.3 it was Python 2.6.6. When running pybot --version, errors came out Question: (CentOS 6.6) Before updating to Python 2.7.3 it was Python 2.6.6. When running `pybot --version`, errors came out as follows. I want to install a test environment with Python 2.7.3, Robot Framework 2.7.6, paramiko-1.7.4 and pycrypto-2.6.

> [root@localhost robotframework-2.7.6]# pybot --version
> Traceback (most recent call last):
>   File "/usr/bin/pybot", line 4, in <module>
>     from robot import run_cli
>   File "/usr/lib/python2.7/site-packages/robot/__init__.py", line 22, in <module>
>     from robot.rebot import rebot, rebot_cli
>   File "/usr/lib/python2.7/site-packages/robot/rebot.py", line 268, in <module>
>     from robot.conf import RebotSettings
>   File "/usr/lib/python2.7/site-packages/robot/conf/__init__.py", line 17, in <module>
>     from .settings import RobotSettings, RebotSettings
>   File "/usr/lib/python2.7/site-packages/robot/conf/settings.py", line 17, in <module>
>     from robot import utils
>   File "/usr/lib/python2.7/site-packages/robot/utils/__init__.py", line 23, in <module>
>     from .compress import compress_text
>   File "/usr/lib/python2.7/site-packages/robot/utils/compress.py", line 25, in <module>
>     import zlib
> ImportError: No module named zlib

Answer: Reasons could be any of the following:

1. The python files (at least one) have lost their formatting. Python is prone to formatting errors.
2. At least one installation (Python, Robot) doesn't have administrative privileges.
3. Environment variables (PATH, CLASSPATH, PYTHONPATH) are not set correctly.
4. What does python --version print? If this throws errors, the installation has issues.
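One quick diagnostic worth adding (an assumption beyond the answer above): run the following with the same interpreter that pybot uses. If the import fails, this Python 2.7.3 build was compiled without zlib support and needs to be rebuilt with the zlib development headers installed:

    # an ImportError here means the interpreter itself has no zlib module,
    # independent of Robot Framework
    import zlib
    print(zlib.ZLIB_VERSION)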
Service and version displayed via nmap scan for simple python socket server Question: I've got a simple python socket server. Here's the code:

    import socket

    host = "0.0.0.0"  # address to bind on.
    port = 8081

    def listen_serv():
        try:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind((host, port))
            s.listen(4)
            ... messages back and forth between the server and client ...

    if __name__ == "__main__":
        while True:
            listen_serv()

When I run the python server locally and then scan with `nmap localhost`, I see the open port 8081 with the service blackice-icecap running on it. A quick google search revealed that this is a firewall service that uses port 8081 for a service called ice-cap remote. If I change the port to 12000, for example, I get another service called cce4x. A further scan with `nmap localhost -sV` returns the contents of the python script:

    1 service unrecognized despite returning data. If you know the service/version, please submit the following fingerprint at https://nmap.org/cgi-bin/submit.cgi?new-service :
    SF-Port8081-TCP:V=7.25BETA1%I=7%D=8/18%Time=57B58EE7%P=x86_64-pc-linux-gn
    SF:u%r(NULL,1A4,"\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\
    SF:*\*\*\*\*\*\*\*\*\*\*\*\*\n\*\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x
    SF:20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
    SF:x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\*\n\*\x20\x20\x20\x20\x20\x
    SF:20Welcome\x20to\x20ScapeX\x20Mail\x20Server\x20\x20\x20\x20\*\n\*\x20\x
    SF:20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
    SF:x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
    SF:\x20\x20\*\n\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\
    SF:*\*\*\*\*\*\*\*\*\*\*\*\nHere\x20is\x20a\x20quiz\x20to\x20test\x20your\
    SF:x20knowledge\x20of\x20hacking\.\.\.\n\n\nAnswer\x20correctly\x20and\x20
    SF:we\x20will\x20reward\x20you\x20with\x20a\x20shell\x20:-\)\x20\nQuestion
    etc...
    etc...

Is there a way I can customize the service and version descriptions that are displayed by nmap for my simple python server? Answer: Found a solution by sending the following line as the first message from the server:

    c.send("HTTP/1.1 200 OK\r\nServer: Netscape-Enterprise/6.1\r\nDate: Fri, 19 Aug 2016 10:28:43 GMT\r\nContent-Type: text/html; charset=UTF-8\r\nConnection: close\r\nVary: Accept-Encoding\r\nContent-Length: 32092\r\n\r\n")
Python I/O: Mixing The Datatypes Question: I'm writing a small script which merges a host of JSON files in one directory into a single file. Trouble is, I'm not entirely sure when my data is in which state. TypeErrors abound. Here's the script:

    import glob
    import json
    import codecs

    reader = codecs.getreader("utf-8")

    for file in glob.glob("/Users/me/Scripts/BagOfJson/*.json"):
        #Aha, as binary here
        with open(file, "rb") as infile:
            data = json.load(reader(infile))
            #If I print(data) here, looks like good ol' JSON
        with open("test.json", "wb") as outfile:
            json.dump(data, outfile,
                      sort_keys = True, indent = 2, ensure_ascii = False)
            #Crash

This script results in the following error:

    TypeError: a bytes-like object is required, not 'str'

which is caused by the json.dump line. Naive me just deletes the 'b' in 'wb' for the outfile open. That doesn't do the trick. Maybe this is a lesson to me to use the shell for testing, and to make use of the type() python function. Still, I'd love it if someone could clear up for me the logic behind these data swaps. I wish it could all be strings... Answer: If this is Python 3, removing the `b` (binary mode) to open the file in _text mode_ should work just fine. You probably want to specify the encoding explicitly:

    with open("test.json", "w", encoding='utf8') as outfile:
        json.dump(data, outfile,
                  sort_keys = True, indent = 2, ensure_ascii = False)

rather than rely on a default. You shouldn't really use `codecs.getreader()`. The standard `open()` function can handle UTF-8 files just fine; just open the file in text mode and specify the encoding again:

    import glob
    import json

    for file in glob.glob("/Users/me/Scripts/BagOfJson/*.json"):
        with open(file, "r", encoding='utf8') as infile:
            data = json.load(infile)

        with open("test.json", "w", encoding='utf8') as outfile:
            json.dump(data, outfile,
                      sort_keys = True, indent = 2, ensure_ascii = False)

The above will still re-create `test.json` for each file in the `*.json` glob; you can't really put multiple JSON documents in the same file anyway (unless you specifically create [JSONLines files](http://jsonlines.org/), which you are not doing here because you are using `indent`). You'd need to write to a new filename and move the new file back to the `file` filename if you wanted to re-format all JSON files in the glob.
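A sketch of the JSON Lines alternative mentioned above (the `test.jsonl` name is made up): one compact document per line, appended as each input file is read, so nothing gets overwritten:

    import glob
    import json

    with open("test.jsonl", "w", encoding='utf8') as outfile:
        for file in glob.glob("/Users/me/Scripts/BagOfJson/*.json"):
            with open(file, "r", encoding='utf8') as infile:
                data = json.load(infile)
            # one JSON document per line; no indent, so each line stays valid
            outfile.write(json.dumps(data, sort_keys=True, ensure_ascii=False))
            outfile.write("\n")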
Why does my Python code skip a while loop? Question: I'm building a guessing program for the user's input in the range 1 - 100. Why does it skip the second while loop where I check the user's input and forward it? It goes straight on with the number 1.

    import random

    nums_lasted = []
    a = 0
    while a < 101:
        nums_lasted.append(a)
        a += 1

    secret_num = 1
    while secret_num < 0 or secret_num > 100:
        try:
            secret_num = int(input("My number is"))
        except ValueError:
            print("No way that was an integer!")

    guess_pc = 50
    min = 50
    max = 101
    while True:
        print("Is it", guess_pc, "?")
        if guess_pc == secret_num:
            print("Easy")
            break
        elif guess_pc > secret_num:
            max = guess_pc
            nums_lasted.append(guess_pc)
            nums_lasted1 = [i for i in nums_lasted if i < guess_pc]
            nums_lasted = nums_lasted1
        elif guess_pc < secret_num:
            min = guess_pc
            nums_lasted.append(guess_pc)
            nums_lasted1 = [i for i in nums_lasted if i < guess_pc]
            nums_lasted = nums_lasted1
        guess_pc = random.choice(nums_lasted)

Answer:

    secret_num = 1
    while secret_num < 0 or secret_num > 100:

You set `secret_num` to `1`. The `while` will only run when `secret_num` is less than 0 or greater than 100, so it will never be executed.
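A minimal sketch of one fix, following directly from that diagnosis: start `secret_num` at a value that fails the range check, so the loop body runs at least once:

    secret_num = -1  # out of range on purpose, so the while condition is True
    while secret_num < 0 or secret_num > 100:
        try:
            secret_num = int(input("My number is"))
        except ValueError:
            print("No way that was an integer!")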
Python: define a string variable pattern Question: I'm new to python. I have a program that reads from `str(sys.argv[1])`:

    myprogram.py "some date" # I'd like this in YYYYMMDD format. I.e:
    myprogram.py 20160806

    if __name__ == '__main__':
        if str(sys.argv[1]):
            CTRL = str(sys.argv[1])
            print "some stuff"
            sys.exit()

I need "some date" in YYYYMMDD format. How is that possible? I've googled "variable mask" and "variable pattern" and nothing came up. Thanks for your help and patience.

* * *

UPDATE: Fortunately all the answers helped me! As the CTRL variable gave me the _2016-08-17 00:00:00_ format, I had to convert it to _20160817_. Here is the code that worked for me:

    if str(sys.argv[1]):
        CTRL_args = str(sys.argv[1])
        try:
            CTRL = str(datetime.datetime.strptime(CTRL_args, "%Y%m%d")).strip().split(" ")[0].replace("-","").replace(" ","").replace(":","")
            # do some stuff
        except ValueError:
            print('Wrong format!')
            sys.exit()

Answer: You need the function **datetime.strptime** with the mask **%Y%m%d**:

    import sys
    from datetime import datetime

    if __name__ == '__main__':
        if str(sys.argv[1]):
            CTRL = str(sys.argv[1])
            try:
                print datetime.strptime(CTRL, "%Y%m%d")
            except ValueError:
                print 'Wrong format'
                sys.exit()

Output:

    $ python example.py 20160817
    2016-08-17 00:00:00
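As a side note, the chain of `replace()` calls in the question's update can be avoided: once `strptime` has validated the input, `strftime` formats the date straight back to YYYYMMDD. A small sketch:

    from datetime import datetime

    parsed = datetime.strptime(CTRL, "%Y%m%d")  # raises ValueError on bad input
    CTRL = parsed.strftime("%Y%m%d")            # '20160817' again, validated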
Is it possible to extend Sphinx automodule to domains other than Python? Question: I'm looking to use Sphinx to document VHDL source code. Ideally I'd like to be able to take a VHDL type like this:

    type T_SDRAM_REQ is record
        req      : STD_LOGIC;
        wr       : STD_LOGIC;
        address  : STD_LOGIC_VECTOR;
        wr_data  : STD_LOGIC_VECTOR;
        wr_ben   : STD_LOGIC_VECTOR;
    end record T_SDRAM_REQ;

And use an RST directive something like this:

    .. vhdl:type:: sdram_pack.T_SDRAM_REQ is record
        :members:

to extract all of the fields from the source code and RST-ify them for me. I've created a Sphinx domain, but it's dawning on me that this alone is not going to be enough - that's just a bunch of custom directives really. What I actually want is something akin to autoclass or automodule, which scans Python source files to generate directives. However, as far as I can tell, the Sphinx automodule functionality is just for Python. Is it possible to extend Sphinx to include similar functionality for other languages? In VHDL that would probably be called autopackage or autoentity; in C++ I guess autonamespace or a different autoclass? Could I somehow add a `vhdl:autopackage::` directive to my domain? From what I can tell from the Sphinx source, I don't think the automodule directive is part of the Python domain. Answer: The answer to my own question is: yes. I've managed to do it, but it wasn't easy and the results are far from perfect.

While the Sphinx Domain API has been set up with some generic base classes and Python-specific subclasses, the same can't be said of autodoc. Some of the Python autodoc classes can be used as base classes, but they need a lot of overrides.

The components of my autodoc system are:

* A new directive `VHDLAutoDirective`. A subclass of `AutoDirective`, it maintains separate registries for documenters and special attrgetters, and trims "vhdl:auto" from the start of the directive name instead of just "auto". Like the original, it calls the object-specific documenter.
* New documenters. One generic VHDL documenter base class `VHDLDocumenter`, then subclasses for each VHDL object. These documenters do all of the heavy lifting, taking options and content from the directives, and parsing VHDL to generate content.

  The key issue here is that Python autodoc relies on the modules it's documenting being installed. As Sphinx is written in Python it's very easy to `import` those installed modules and extract all required info that way; for example, `__doc__` can be used to extract docstrings. For any other language you're going to have to first find the file containing the object you want to document, then parse it.

  To solve the first problem I added a `current-file` directive to specify a file for all subsequent auto directives, and a `file` option to my documenter base class to allow files to be specified per-directive. This is a bit clunky, as the file paths are relative to the base of my repository and assume that Sphinx is run there - it won't work if Sphinx is run in a sub-directory.

  For the second I wrote a rudimentary tokeniser and parser, before thinking it might be better to just copy the raw source code into a `code-block` directive - now I give the option of doing either.
* An `add_autodocumenter` function in my domain, which registers a directive in the domain, then imports my autodoc module and calls an `add_documenter` function to register the documenter. The autodoc module's setup function then calls `add_autodocumenter` on each object documenter with the auto directive.
This is similar to what Python autodoc does, but the Python version registers its directives with the app rather than a domain. There's still a lot of room for improvement, but at least it serves as a proof of concept that it is possible to do this.
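For illustration only, a hedged sketch of the registration step described above; the `VHDL*` names and the `vhdl_autodoc` module are placeholders for the components listed, while `add_directive_to_domain` is the standard Sphinx application API for attaching a directive to a domain:

    # sketch: wire each documenter class into the 'vhdl' domain at setup time
    def add_autodocumenter(app, documenter):
        app.add_directive_to_domain('vhdl', 'auto' + documenter.objtype,
                                    VHDLAutoDirective)
        vhdl_autodoc.add_documenter(documenter)  # autodoc's own registry

    def setup(app):
        for documenter in (VHDLPackageDocumenter, VHDLEntityDocumenter,
                           VHDLTypeDocumenter):
            add_autodocumenter(app, documenter)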
How to load a modified source package in Python? Question: I have downloaded a package (scikit-learn) from github and put the source code in a repository folder (Windows 7 64-bit). After modifying the source code, how can I load the package into the IPython notebook for testing?

1. Should I copy-paste the modified code into the site-packages folder? (what about the current original scikit-learn package?)
2. Can I add the modified folder to the Python path?
3. How do I manage versioning when loading the package in Python, since both have the same name? (i.e. the original package vs the package I modified)

Sorry, these look like beginner questions, but I could not find anything on how to start. Answer: If the code is in a file called file.py, you should just be able to do `import file` (if you're not in the right folder, just run `cd folder` in IPython first.)
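On option 2 from the question: prepending the checkout to `sys.path` works and leaves site-packages untouched, which also sidesteps the name clash in option 3 for the duration of the session. A sketch (the checkout path is hypothetical):

    import sys

    # put the modified checkout ahead of site-packages so it wins the lookup
    sys.path.insert(0, r"C:\repos\scikit-learn")

    import sklearn
    print(sklearn.__file__)  # shows which copy actually got imported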
Convert boolean to integer location in python Question: I have a boolean list, say: x = [True, False, False, True] How do you convert this list to integer locations, so that you get the result: y = [1, 4] ? Answer: You could use a list comprehension in combination with the [enumerate](https://docs.python.org/3.5/library/functions.html#enumerate) function, for example: >>> x = [True, False, False, True] >>> [index for index, element in enumerate(x, start=1) if element] [1, 4] Alternatively, if you're willing to use NumPy and get a result of type `numpy.ndarray`, there's a NumPy function that (almost) does what you need: [`numpy.where`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html). >>> import numpy >>> numpy.where(x) (array([0, 3]),) >>> numpy.where(x)[0] + 1 array([1, 4]) The strange `[0]` in the line above is there because `numpy.where` always returns its results in a tuple: one element of the tuple for each dimension of the input array. Since in this case the input array is one-dimensional, we don't really care about the outer tuple structure, so we use the `[0]` indexing operation to pull out the actual array we need. The `+ 1` is there to get from Python / NumPy's standard 0-based indexing to the 1-based indexing that it looks as though you want here. If you're working with large input data (and especially if the input list is already in the form of a NumPy array), the NumPy solution is likely to be significantly faster than the list comprehension.
Comparing file lines to strings in python, and inconsistencies Question: Important things this code is supposed to do, in order of execution:

1. Open and read the file "Goods"
2. Assign a random line from the file "Goods" to the dictionary "goods"
3. Go through an if block that will assign a random value to the dictionary "cost" if goods[x] equals the string it's being compared to.
4. Print "goods" and "cost"
5. Repeat steps 2-4, 2 more times.

Here is the code:

    from random import randint

    print("You search for things to buy in the market, and find:")

    f = open('Goods', 'r') #Opens file "Goods"
    lines = f.readlines() #Loads all lines from "Goods"

    goods = {1:"", 2:"", 3:""}
    cost = {1:"", 2:"", 3:""}

    x = 0
    while x < 3:
        x += 1
        goods[x] = lines[randint(0, 41)].strip()
        #Checks to see if goods[x] is equal to the string on the right; if it is, it assigns cost[x] to a random integer
        if goods[x] == "Lumber":
            cost[x] = randint(2, 3)
        elif goods[x] == "Rum":
            cost[x] == randint(3, 4)
        elif goods[x] == "Spices":
            cost[x] = randint(4, 5)
        elif goods[x] == "Fruit":
            cost[x] == randint(2, 4)
        elif goods[x] == "Opium":
            cost[x] == randint(1, 5)
        findings = '%s for %s gold.' %(goods[x], cost[x])
        print(findings)

The problem with this code is that the dictionary "cost" does not get a value assigned from the if block when goods[x] equals Rum, Fruit, or Opium. Could someone please tell me what's going on here? [The file "Goods"](http://i.stack.imgur.com/SJpYE.png) Answer: Your problem is that you are using two equal signs: `cost[x] == randint(3, 4)` compares instead of assigning. You need just one: `cost[x] = randint(3, 4)`. Hope this helps!
Error 401 and API V1.1 Python Question: Good morning, I am trying to download the people that are tweeting certain words in an area with this python code:

    import sys
    import tweepy

    consumer_key="LMhbj3fywfKPNgjaPhOwQuFTY"
    consumer_secret=" LqMw9x9MTkYxc5oXKpfzvfbgF9vx3bleQHroih8wsMrIUX13nd"
    access_key="3128235413-OVL6wctnsx1SWMYAGa5vVZwDH5ul539w1kaQTyx"
    access_secret="fONdTRrD65ENIGK5m9ntpH48ixvyP2hfcJRqxJmdO78wC"

    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_key, access_secret)
    api = tweepy.API(auth)

    class CustomStreamListener(tweepy.StreamListener):
        def on_status(self, status):
            if 'busco casa' in status.text.lower():
                print (status.text)

        def on_error(self, status_code):
            print (sys.stderr, 'Encountered error with status code:', status_code)
            return True # Don't kill the stream

        def on_timeout(self):
            print (sys.stderr, 'Timeout...')
            return True # Don't kill the stream

    sapi = tweepy.streaming.Stream(auth, CustomStreamListener())
    sapi.filter(locations=[-78.37,-0.20,-78.48,-0.18])

I am getting this error:

    Encountered error with status code: 401

I read in this link <https://dev.twitter.com/overview/api/response-codes> that the error is caused by: "Authentication credentials were missing or incorrect. Also returned in other circumstances, for example, all calls to API v1 endpoints now return 401 (use API v1.1 instead)." The authentication is there with updated keys. How should I use API v1.1? Thanks, Anita Answer: If it says that your credentials are incorrect, you might want to check your credentials: you need to remove the whitespace in your consumer secret for your code to work. Also, I just tested your credentials (without the whitespace) and they are working. I can do whatever I want on behalf of your application. I suggest you very quickly go to <https://apps.twitter.com> and generate new ones. Never share your credentials. Especially online, where everyone can see them.
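As an extra sanity check (an addition beyond the answer), tweepy can confirm that the cleaned-up keys authenticate before the stream is started, which separates credential problems from problems with the `filter` call:

    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_key, access_secret)
    api = tweepy.API(auth)
    # returns a User object on success; on bad credentials it returns False
    # or raises, depending on the tweepy version
    user = api.verify_credentials()
    print(user.screen_name if user else "authentication failed")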
Access partial results of a Celery task Question: I'm not a Python expert; however, I'm trying to develop some long-running Celery-based tasks whose partial results I can access instead of waiting for the tasks to finish. As you can see in the code below, given a multiplier and an initial and final range, the _worker_ creates a list of size _final_range - initial_range + 1_.

    from celery import Celery

    app = Celery('trackers', backend='amqp', broker='amqp://')

    @app.task
    def worker(value, initial_range, final_range):
        if initial_range < final_range:
            list_values = []
            for index in range(initial_range, final_range + 1):
                list_values.append(value * index)
            return list_values
        else:
            return None

So, instead of waiting for all _four workers_ to finish, I would like to access the to-be-returned values (_list_values_) before they are actually returned.

    from trackers import worker

    res_1 = worker.delay(3, 10, 10000000)
    res_2 = worker.delay(5, 1, 20000000)
    res_3 = worker.delay(7, 20, 50000000)
    res_4 = worker.delay(9, 55, 99999999)

First of all, is it possible? If so, what sort of changes do I have to perform to make it work? Answer: You absolutely need to use external storage such as SQL or Redis/Memcached, because in the common case different tasks can be executed on different servers. So in your example you should store list_values in some DB and update it during the loop.
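A minimal sketch of that suggestion using Redis (the local Redis instance and the key names are assumptions, not part of the original answer): the worker pushes each partial value onto a list keyed by its own task id, and the caller can read the list at any time while the task is still running.

    import redis
    from celery import Celery

    app = Celery('trackers', backend='amqp', broker='amqp://')
    store = redis.StrictRedis(host='localhost', port=6379, db=0)

    @app.task(bind=True)  # bind=True exposes self.request.id inside the task
    def worker(self, value, initial_range, final_range):
        if initial_range >= final_range:
            return None
        key = 'partial:%s' % self.request.id
        for index in range(initial_range, final_range + 1):
            store.rpush(key, value * index)  # visible to readers immediately
        return store.llen(key)

The caller can then poll the partial results without blocking on the final result:

    res_1 = worker.delay(3, 10, 10000000)
    partial = store.lrange('partial:%s' % res_1.id, 0, -1)  # whatever exists so far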
Python+kivy+SQLite: How to set label initial value and how to update label text? Question: Hi everyone, I want to use `kivy+Python` to display items from a `db` file. For this purpose I have asked a question before: [Python+kivy+SQLite: How to use them together](http://stackoverflow.com/questions/38939416/pythonkivysqlite-how-to-use-them-together)

The App in the link contains one screen. It works very well. Today I have changed the App to **two** screens. The `first screen` has no special requirement other than to lead the App to the `second screen`. In the `second screen` there is a `label` and a `button`. By clicking the `button` I want to have the `label text` changed. The `label text` refers to the `car type` from the `db` file that I have in the link above. For this two-screen design, I have two questions:

**Question 1:** How to update the `label text` when the `button` is clicked? I tried two methods:

**Method A:** `self.ids.label.text = str(text)`

It shows me the error message: `AttributeError: 'super' object has no attribute '__getattr__'`

I have googled a lot but still cannot understand it.

**Method B:** `self.ids["label"].text = str(text)`

It shows me the error message: `KeyError: 'label'`

I am confused because I have the `label` defined.

**Question 2:** How to set the label's initial text to one of the car types, so that every time the second screen is shown, a car type is already displayed?

Here is the code: (For the `db file` please refer to the link above.)

    # -*- coding: utf-8 -*-
    from kivy.app import App
    from kivy.base import runTouchApp
    from kivy.lang import Builder
    from kivy.properties import ListProperty
    from kivy.uix.screenmanager import ScreenManager, Screen, FadeTransition
    from kivy.uix.boxlayout import BoxLayout
    from kivy.uix.gridlayout import GridLayout
    from kivy.uix.floatlayout import FloatLayout
    from kivy.uix.label import Label
    from kivy.uix.widget import Widget
    from kivy.graphics import Rectangle
    from kivy.properties import NumericProperty, StringProperty, BooleanProperty, ListProperty
    from kivy.base import runTouchApp
    from kivy.clock import mainthread
    import sqlite3
    import random

    class MyScreenManager(ScreenManager):
        def init(self, **kwargs):
            super().__init__(**kwargs)

            @mainthread  # execute within next frame
            def delayed():
                self.load_random_car()
            delayed()

        def load_random_car(self):
            conn = sqlite3.connect("C:\\test.db")
            cur = conn.cursor()
            cur.execute("SELECT * FROM Cars ORDER BY RANDOM() LIMIT 1;")
            currentAll = cur.fetchone()
            conn.close()

            currentAll = list(currentAll)  # Change it from tuple to list
            print currentAll
            current = currentAll[1]
            print current

            self.ids.label.text = str(current)  # Method A
            # self.ids["label"].text = str(current)  # Method B

    class FirstScreen(Screen):
        pass

    class SecondScreen(Screen):
        pass

    root_widget = Builder.load_string('''
    #:import FadeTransition kivy.uix.screenmanager.FadeTransition

    MyScreenManager:
        transition: FadeTransition()
        FirstScreen:
        SecondScreen:

    <FirstScreen>:
        name: 'first'
        BoxLayout:
            orientation: 'vertical'
            Label:
                text: "First Screen"
                font_size: 30
            Button:
                text: 'Go to 2nd Screen'
                font_size: 30
                on_release: app.root.current = 'second'

    <SecondScreen>:
        name: 'second'
        BoxLayout:
            orientation: 'vertical'
            Label:
                id: label
                text: 'click to get a new car' # I want its text changed every time the button is clicked. Its initial text should be one of the random car types from the db file.
                font_size: 30
            Button:
                text: 'Click to get a random car'
                font_size: 30
                on_release: app.root.load_random_car()
    ''')

    class ScreenManager(App):
        def build(self):
            return root_widget

    if __name__ == '__main__':
        ScreenManager().run()

I have read a lot on the internet, but I cannot understand it all. :-( Please help me to correct the code. Thank you so much! Answer:

    from kivy.app import App
    from kivy.lang import Builder
    from kivy.properties import StringProperty
    from kivy.uix.widget import Widget

    Builder.load_string("""
    <ScreenX>
        BoxLayout:
            orientation: 'vertical'
            Label:
                text: root.label_text
            Button:
                text: 'Click Me'
                on_release: root.on_clicked_button()
    """)

    class ScreenX(Widget):
        label_text = StringProperty("Default Value")

        def on_clicked_button(self):
            self.label_text = "New Text!"

    class MyApp(App):
        def build(self):
            return ScreenX()

    MyApp().run()

is typically how you would do this.
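Mapping this back to the original two-screen layout (a hedged sketch, not a tested drop-in): ids declared inside the `<SecondScreen>` rule are registered on that screen, not on the `ScreenManager`, which is why both Method A and Method B raised errors. The manager can reach the label through the screen it belongs to, and calling the loader from the screen's `on_enter` event would also cover Question 2's initial value:

    # inside MyScreenManager; get_screen() is the standard ScreenManager API
    def load_random_car(self):
        current = "BMW"  # stands in for the value fetched from the db above
        second = self.get_screen('second')
        second.ids.label.text = str(current)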