input_text (string, 1 to 40.1k chars) | target_text (string, 1 to 29.4k chars, may be null) |
---|---|
Django Model use of keyword IN I am attempting to figure out how to use the IN keyword in a django model query I was attempting to replace: ````db = database connect() c = db cursor() c execute("SELECT MAX(Date) FROM `Requests` WHERE UserId = %%s AND VIN = %%s AND Success = 1 AND RptType in %s" % str(cls SuperReportTypes) (userID vin)) ```` With this: ````myrequests = Request objects filter(user=userID vin = vin report_type in cls SuperReportTypes) myrequests aggregate(Max('Date')) ```` I get a: ````SyntaxError: non-keyword arg after keyword arg (<console> line 1) ```` When I remove the ending `"report_type in cls SuperReportTypes"` the query functions properly I recognize that there is a way to do this after the query managing the result set but I was hoping to deal with this in such a way that MYSQL would do the execution | <a href="https://docs djangoproject com/en/dev/ref/models/querysets/#in" rel="nofollow">`field__in=seq`</a> |
Pandas - Using 'ffill' on values other than Na Is there a way to use `ffill` method on values that are not `NaN`? I have `NaN` in my dataframe but I have added these `NaN` using ````addNan = sample['colA'] replace(['A'] 'NaN') ```` So this is what my DataFrame `df` looks like ````ColA ColB ColC ColD B A A C NaN B A A C D D A NaN A A B ```` And I am trying to fill these `NaN` using `ffill` so they are populated by the last known value `fill = df fillna(method='ffill' inplace = True)` This does not make a difference also tried `Na` instead of `NaN` | I think you need first replace `NaN` to `np nan` because `NaN` is only text: ````import pandas as pd import numpy as np print (sample) ColA ColB ColC ColD 0 B A A C 1 A B A A 2 C D D A 3 A A A B sample['ColA'] = sample['ColA'] replace(['A'] np nan) print (sample) ColA ColB ColC ColD 0 B A A C 1 NaN B A A 2 C D D A 3 NaN A A B ```` If use `inplace = True` it return `None` but inplace fill values: ````sample fillna(method='ffill' inplace = True) #sample ffill(inplace = True) print (sample) ColA ColB ColC ColD 0 B A A C 1 B B A A 2 C D D A 3 C A A B ```` |
Add missing dates to pandas dataframe [I have found quite of bit of talk on similar issues but nothing on how I am approaching this] I have a CSV file I am parsing which could have multiple events on a given date Sometimes there are NO events on a date I take these events get a count by date and plot them However when I plot them my two series do not always match ````df = pd read_csv(inFile parse_dates=True) idx = pd date_range(df['simpleDate'] min() df['simpleDate'] max()) s = df groupby(['simpleDate']) size() fig ax = plt subplots() ax bar(idx to_pydatetime() s color='green') ```` In the above code <strong>idx</strong> becomes a range of say 30 dates 09-01-2013 to 09-30-2013 However <strong>S</strong> may only have 25 or 26 days because no events happened for a given date I then get an AssertionError as the sizes do not match What is the proper way to tackle this? Do I want to remove dates with no values from <strong>IDX</strong> or (which I would rather do) is add to the series the missing date with a count of 0 I would rather have a full graph of 30 days with 0 values If this approach is right any suggestions on how to get started? Do I need some sort of dynamic `reindex` function? Here is a snippet of <strong>S</strong> ( `df groupby(['simpleDate']) size()` ) notice no entries for 04 and 05 ````09-02-2013 2 09-03-2013 10 09-06-2013 5 09-07-2013 1 ```` | You could use `Series reindex`: ````import pandas as pd idx = pd date_range('09-01-2013' '09-30-2013') s = pd Series({'09-02-2013': 2 '09-03-2013': 10 '09-06-2013': 5 '09-07-2013': 1}) s index = pd DatetimeIndex(s index) s = s reindex(idx fill_value=0) print(s) ```` yields ````2013-09-01 0 2013-09-02 2 2013-09-03 10 2013-09-04 0 2013-09-05 0 2013-09-06 5 2013-09-07 1 2013-09-08 0 ```` |
Force string to be one liner Does anyone know an alternative way to force a string into a one-liner? Maybe a Python built-in method? Or perhaps a better implementation (explain why)? For example: ````Hello World Foo Bar ```` becomes ````Hello World Foo Bar ```` <h2>Implementation</h2> ````def strip_newline(s): return ' '.join(mystring.splitlines()) strip_newline = lambda _: ' '.join(_.splitlines()) ```` <h2>Extra Note:</h2> For those who remind me about character replacement of `\n` (a newline character) into a space: the drawback is you need to watch out for the `\r` carriage return character too :-) Thus you need to do `mystring.replace('\r', ' ').replace('\n', ' ')` | I think a simple `replace` would be the fastest method: ````s = ''' 1 2 3 4 5 ''' print(s.replace("\n", " ")) ```` |
PyTest conftest py incorrectly shows the directory os getcwd() in PyCharm When I use a fixture using a file conftest py the directory I get through the os getcwd() is not correct! I get C: \ Program Files \ Java \ jdk1 8 0_60 \ jre \ bin If I run the test from the command line everything works correctly Example: conftest py ````import os import pytest @pytest fixture() def curdir(): directory = os getcwd() print directory return directory ```` <strong>PyCharm</strong> I get C:\Program Files\Java\jdk1 8 0_60\jre\bin <a href="http://i stack imgur com/mlZYQ png" rel="nofollow">Incorrect directory</a> <strong>CMD</strong> I get C:\python\testing\scripts\example <a href="http://i stack imgur com/ZPRjL png" rel="nofollow">Correct directory</a> test_simple py ````# coding=utf-8 import json import os def test_simple(curdir): print curdir ```` Why is this happening? | Iâve never run PyCharm on Windows and I donât know Windows very well at all but hereâs my guess: PyCharm (a Java application) sets the JREâs binary directory as the current directory when it is run and any process spawned by PyCharm (such as the Python interpreter) simply inherits this So I wouldnât necessarily think that what youâre seeing is weird at all If you need your current directory to be something specific youâd probably best force it by using `os chdir()` |
Django model save(): Check if ANYTHING changed The question is straight forward: Is there a way to check whether any fields of an instance have been modified in model save() method? Or maybe other method clean()? The point is that you cannot explicitly name the fields to do the checking Is there any easy way to do it? | Way to get <them>all</them> field names: ````[field name for field in MODEL _meta fields] ```` Way to check field value by fields name as string: ````getattr(obj field_name) ```` so you can modify this <a href="http://stackoverflow com/questions/1355150/django-when-saving-how-can-you-check-if-a-field-has-changed">Django: When saving how can you check if a field has changed?</a> answer like this: ````def save(self *args **kw): if self pk is not None: orig = MyModel objects get(pk=self pk) field_names = [field name for field in MyModel _meta fields] fields_stats = {} for field_name in field_names: fields_stats[field_name] = getattr(orig field_name) != getattr(self field_name) super(MyModel self) save(*args **kw) ```` dictionary `field_stats` will be like ````{ 'field_name1': True 'field_name1': False } ```` Where `True` means field has changed and `False` mean field has not changed |
What do I import: file_name or class in Python? I have a file named <strong>schedule py</strong>: ````class SchedGen: """ Class creates a pseudo random Schedule With 3/4 of total credit point required for graduation""" def __init__(self nof_courses=40): random seed() self courses = {} self nof_courses = nof_courses for i in xrange(nof_courses): self courses[i] = college Course(i) self set_rand_cred() self set_rand_chance() self set_rand_preq_courses() def set_rand_cred(self): """ Set random credit to courses uniformly between 2 and 6""" temp_dict = self courses copy() ```` While importing content of schedule do I do `import schedule` like: ````import schedule ```` If that is correct how can I access the function set_rand_cred(self) from SchedGen class? | You would have to first make an instance (using the class's qualified name) then access that function as an attribute of the instance: ````import schedule s = schedule SchedGen() s set_rand_cred() ```` |
Theano installation error: Linux compilation error So I was trying to set up Theano on my Linux 14.04 machine. Steps done so far: - Installed miniconda - Installed dependencies: `conda install numpy scipy mkl <nose> <sphinx> <pydot-ng>` - Did not install the GPU drivers; do not need the higher computation as of now - Tried installing Theano with: `<sudo> pip install <--user> Theano[test doc]` It exited with the following error: <a href="https://gist.github.com/crazysal/2c1990360a4ca0750d182e4217de998d" rel="nofollow">Theano terminal error gist</a> Been trying to solve the same; most online references are related to upgrading pip. Ran this: `pip install --upgrade pip` which returned `Requirement already up-to-date: pip in /miniconda2/lib/python2.7/site-packages` | Package error: solved by installing ````sudo apt-get install python-dev ```` Installing the dependencies for Theano[test doc] gave an error, so do a clean install with only: ````sudo pip install Theano ```` |
Using python can sockets on embedded linux bind misbehaves I am trying to interface to CAN drivers using python 3 4 running on embedded linux Theoretically there is a nice socketcan library available from python 3 3 but import can fails root@unit-901-100-sn003:~/francis# python3 Python 3 4 3 (default Oct 14 2015 21:23:51) import can ImportError: No module named 'can' Back to basics Actually I was working on this before somebody pointed out #!/usr/bin/env python3 ````import socket; can_device = "can0"; sock_instance = socket socket(socket PF_CAN socket SOCK_RAW socket CAN_RAW); can_index = socket if_nametoindex(can_device); addr = (socket AF_CAN can_index); sock_instance bind(addr); ```` root@unit-901-100-sn003:~/francis# /sockio py Traceback (most recent call last): File " /sockio py" line 10 in sock_instance bind((can_device can_index)); TypeError: function takes exactly 1 argument (2 given) This program fails as above I am obviously passing a single argument tuple or I have gone inanse? If I just created a simple socket ````si = socket socket(); si bind(('0 0 0 0' 8000)); ```` It executes so bind can recongise a tuple | The python-can library actually has a <a href="https://bitbucket org/hardbyte/python-can/src/10249b6e554466bae6f47df9f930c9f593f45002/can/interfaces/socketcan_ctypes py?at=default&fileviewer=file-view-default" rel="nofollow">ctypes</a> and <a href="https://bitbucket org/hardbyte/python-can/src/10249b6e554466bae6f47df9f930c9f593f45002/can/interfaces/socketcan_native py?at=default&fileviewer=file-view-default" rel="nofollow">native</a> python implementation that calls socketcan The binding is something like this: ````channel = "can0" sock = socket socket(socket PF_CAN socket SOCK_RAW socket CAN_RAW) sock bind((channel )) ```` |
Where are most of the television stations' transmitters located? | Needham and Newton |
how to make a simple watch in wx python Could someone tell me please how to make a simple wxpython ```` import wx import time class MyFrame(wx Frame): """ We simply derive a new class of Frame """ def __init__(self parent title): wx Frame __init__(self parent title=title size=(200 100)) self control = wx TextCtrl(self style=wx TE_MULTILINE) self Show(True) app = wx App(False) frame = MyFrame(None 'Small editor') app MainLoop() ```` I am wondering how may I take datetime at the moment and show it in a frame | <blockquote> <them>"I am wondering how may I take datetime at the moment and show it in a frame"</them> </blockquote> use the `time` module to get the system time: ````import time time strftime('%d %m %Y') ```` As example how to access this module Now in the Python interpreter try `help(time)` and read up on the different timezone commands and formatting commands to get the value you need |
Override default _get_for_dict() for ndb Property i am having a hard time changing the default _get_for_dict() Method This is what my code looks at the moment: ````class ImageProperty(ndb BlobKeyProperty): def _get_for_dict(self entity): value = super(ImageProperty self) _get_for_dict(entity) if value: return images get_serving_url(value) else: return None ```` I am not that much into the concepts for overriding methods and having trouble with ndb iself Basically what I want to do: Store my Datastore Key as a BlobKeyProperty but when retrieving it as a dict I want to get the image serving url Thanks a lot | I have not tried this but I <them>think</them> that this would be better as a `_from_base_type` hook: ````class ImageProperty(ndb BlobKeyProperty): def _from_base_type(self value): return images get_serving_url(value) ```` If I understand the <a href="https://cloud google com/appengine/docs/python/ndb/subclassprop" rel="nofollow">documentation</a> correctly this API "stacks" so you do not need to call the `_from_base_type` on the super class (`BlobKeyProperty`) I guess `ndb` <a href="https://github com/GoogleCloudPlatform/datastore-ndb-python/blob/master/ndb/model py#L803" rel="nofollow">handles that for you</a> Personally I think this is a bit weird for an API when `super` seems like it would work just fine But that is how it is I guess |
Who consolidated their hold with the emergence of the Kingdom of Israel, and the kingdom of Judah? | Israelites |
Simple example of using wx TextCtrl and display data after button click in wxpython - new to wx I am learning python and trying out wxpython for UI development (do not have UI exp either) I have been able to create a frame with a panel a button and a text input box I would like to be able to enter text into the textbox and have the program do things to the text entered into the box after I click the button Can I get some guidence as to how to do this? for example let us say I want to display the text entered into the wx TextCtrl control on the panel How would i do that? ````import wx class ExamplePanel(wx Panel): def __init__(self parent): wx Panel __init__(self parent) self quote = wx StaticText(self label="Your quote :" pos=(20 30)) # A button self button =wx Button(self label="Save" pos=(200 325)) self lblname = wx StaticText(self label="Your name :" pos=(20 60)) self editname = wx TextCtrl(self value="Enter here your name" pos=(150 60) size=(140 -1)) app = wx App(False) frame = wx Frame(None) panel = ExamplePanel(frame) frame Show() app MainLoop() ```` | To do any GUI interactions you have to bind events to the widgets You basically tell the wxPython app which method (event handler) should be called when some event (button pressed) occurs I would also consider learning sizers and using them for your layouts I have changed your example a bit ````import wx class ExampleFrame(wx Frame): def __init__(self parent): wx Frame __init__(self parent) self panel = wx Panel(self) self quote = wx StaticText(self panel label="Your quote:") self result = wx StaticText(self panel label="") self result SetForegroundColour(wx RED) self button = wx Button(self panel label="Save") self lblname = wx StaticText(self panel label="Your name:") self editname = wx TextCtrl(self panel size=(140 -1)) # Set sizer for the frame so we can change frame size to match widgets self windowSizer = wx BoxSizer() self windowSizer Add(self panel 1 wx ALL | wx EXPAND) # Set sizer for the panel content self sizer = wx GridBagSizer(5 5) self sizer Add(self quote (0 0)) self sizer Add(self result (0 1)) self sizer Add(self lblname (1 0)) self sizer Add(self editname (1 1)) self sizer Add(self button (2 0) (1 2) flag=wx EXPAND) # Set simple sizer for a nice border self border = wx BoxSizer() self border Add(self sizer 1 wx ALL | wx EXPAND 5) # Use the sizers self panel SetSizerAndFit(self border) self SetSizerAndFit(self windowSizer) # Set event handlers self button Bind(wx EVT_BUTTON self OnButton) def OnButton(self e): self result SetLabel(self editname GetValue()) app = wx App(False) frame = ExampleFrame(None) frame Show() app MainLoop() ```` |
With what French city is Plymouth twinned? | Brest |
Get user group as string in django I need to know how to get the Django user model group as a string in Django. I mean: ````request.user.groups.get ```` returns me an object, but I need to get the exact string name of the group, for example "administrators" or "sales", in order to compare it inside a conditional. And what if I have many groups? How do I get the specified index? I tried: ````request.user.groups.get(0) ```` or ````request.user.group.get[0] ```` but it does not work. Thanks for your help!! | To get all groups of a user you could do: `groups = request.user.groups.all()` `groups` is a `ManyToManyField` to the `User` model and `.all()` is how to access all m2m objects. Please look at the <a href="https://docs.djangoproject.com/en/1.9/topics/db/examples/many_to_many/" rel="nofollow">django doc</a> for more m2m details. To get the group as a string you could either do `group.name` or `str(group)` |
Python: retrieving an enum variable from c++ using ctypes I have the following enum defined in C++ API: ````typedef enum RESULT_ENUM { SUCCESS ERR_INVALID_PORT_DEFINITION ERR_TOO_MANY_SAMPLES ERR_RECORDING_THREAD_ALREADY_RUNNING ERR_RECORDING_WITHOUT_APPLY_SETTINGS }RESULT; ```` I have a program in C++ that uses the API and creating: ````RESULT res; ```` Then it uses functions from the API to set values inside `res` for example: ````res = SetProfile(APP_PROFILE); res = SetDynamicImageFilter(filterType); res = StartCalibration(); ```` I want to create a Python program that does the same (literally) using ctypes How do I translate `RESULT res;` in a pythonic way? How do I make it contain the desired results from the functions? <strong>EDIT:</strong> Those functions return values that match the `RESULT` enumerators I want to get those enumerators in Python How can I do that? I am currently getting numbers corresponding to the enumerators values | The name to value mapping is not compiled into the binary All ctypes code that needs the value of a enum hard codes that value in the python If you wrap the C++ code in a python extension you can choose to expose the enum values as python symbols of your module If you control the C++ implementation you are calling you could add a helper fucntion to return the value of the enum you need |
Facebook-sdk python module has no attribute GraphAPI After installing the facebook-sdk module <a href="https://facebook-sdk readthedocs org/en/latest/install html" rel="nofollow">here</a> and looking at other solutions here and elsewhere I keep getting this error: ````Traceback (most recent call last): File "facebook py" line 1 in <module> import facebook File "/home/facebook/facebook py" line 3 in <module> graph = facebook GraphAPI(access_token='ACCESS TOKEN HERE') AttributeError: 'module' object has no attribute 'GraphAPI' ```` For this very simple python code to authenticate ````import facebook graph = facebook GraphAPI(access_token='ACCESS TOKEN HERE') print 'Workinnnn' ```` It says my module is installed and up to date and I have installed both within (as suggested) a virtualenv and outside and still get the error I also definitely HAVE the module in usr/local/lib/python etc dist packages and it contains the class GraphAPI Has anyone got a suggestion either: 1) What might be going wrong? 2) What to try to fix it? UNinstall something? 3) If there is another way other than pip to install the module I do not normally use pip (but definitely have it installed and installed facebook-sdk from it) so if there is another way then I would like to try Cheers :/ | Solution = do not name your script the same as your module I am an idiot sigh |
Multiple file upload and send in email - AppEngine & Blobstore Hi, I am new at this so please be gentle! I have an HTML web form that I need to allow for multiple file upload, using Python/Django/AppEngine. The files should be stored within the Blobstore when uploaded and then sent as attachments within an email which will be sent via the system. Could someone provide a simple code example of how to do this multiple upload and sending in an email, please, or even point me in the right direction? The sending of an email is easy, but it is the uploading and attaching where I am not so sure. Many thanks | - <a href="http://blog.notdot.net/2010/04/Implementing-a-dropbox-service-with-the-Blobstore-API-part-3-Multiple-upload-support" rel="nofollow">Uploading multiple files to the blobstore</a> - <a href="http://code.google.com/appengine/docs/python/mail/attachments.html" rel="nofollow">Sending mail with attachments</a> |
Concatenating and sorting thousands of CSV files I have thousands of csv files in disk Each of them with a size of approximately ~10MB (~10K columns) Most of these columns hold real (float) values I would like to create a dataframe by concatenating these files Once I have this dataframe I would like to sort its entries by the first two columns I currently have the following: ````my_dfs = list() for ix file in enumerate(p_files): my_dfs append( pd read_csv(p_files[ix] sep=':' dtype={'c1' : np object_ 'c2' : np object_})) print("Concatenating files ") df_merged= pd concat(my_dfs) print("Sorting the result by the first two columns ") df_merged = df_merged sort(['videoID' 'frameID'] ascending=[1 1]) print("Saving it to disk ") df_merged to_csv(p_output sep=':' index=False) ```` But this requires so much memory that my process is killed before getting the result (in the logs I see that the process is killed when its using around 10GB of memory) I am trying to figure out where exactly it fails but I am still unable to do it (although I hope to log the stdout soon) Is there a better way to do this in Pandas? | Loading them into a database is easy flexible for making changes later on and takes advantage of all the optimization work that goes into databases Once you have loaded it if you wanted to get an iterable of the data you could run the following query and be done: ````SELECT * FROM my_table ORDER BY column1 column2 ```` I am pretty sure there are more direct ways to load into sqlite3 within sqlite3 but if you do not want to do it directly in sqlite you can use python to do load in the data taking advantage of csv reader as an iterator so you only load a minimal amount into memory as in: ````import csv import sqlite3 conn = sqlite3 Connection(dbpath) c = conn cursor() for path in paths: with open(path) as f: reader = csv reader(f) c executemany("INSERT INTO mytable VALUES (? ? ?)""" reader) ```` that way you do not have to ever load too much into memory and you can take advantage of sqlite After that (if you wanted to again do it in Python) you could do: ````import csv import sqlite3 conn = sqlite3 Connection(dbpath) c = conn cursor() with open(outpath) as f: writer = csv writer writer writerows(c execute("SELECT * FROM mytable ORDER BY col1 col2")) ```` |
Insert html string into BeautifulSoup object I am trying to insert an html string into a BeautifulSoup object If I insert it directly bs4 sanitizes the html If take the html string and create a soup from it and insert that I have problems with using the `find` function <a href="http://stackoverflow com/questions/15455148/find-after-replacewith-does not-work-using-beautifulsoup">This post thread</a> on SO suggests that inserting BeautifulSoup objects can cause problems I am using the solution from that post and recreating the soup each time I do an insert But surely there is a better way to insert an html string into a soup <strong>EDIT: I will add some code as an example of what the problem is</strong> ````from bs4 import BeautifulSoup mainSoup = BeautifulSoup(""" <html> <div class='first'></div> <div class='second'></div> </html> """) extraSoup = BeautifulSoup('<span class="first-content"></span>') tag = mainSoup find(class_='first') tag insert(1 extraSoup) print mainSoup find(class_='second') # prints None ```` | Simplest way if you already have an html string is to insert another BeautifulSoup object ````from bs4 import BeautifulSoup doc = ''' <div> test1 </div> ''' soup = BeautifulSoup(doc 'html parser') soup div append(BeautifulSoup('<div>insert1</div>' 'html parser')) print soup prettify() ```` Output: ````<div> test1 <div> insert1 </div> </div> ```` <h1>Update 1</h1> How about this? Idea is to use BeautifulSoup to generate the right AST node (span tag) Looks like this avoids the "None" problem ````import bs4 from bs4 import BeautifulSoup mainSoup = BeautifulSoup(""" <html> <div class='first'></div> <div class='second'></div> </html> """ 'html parser') extraSoup = BeautifulSoup('<span class="first-content"></span>' 'html parser') tag = mainSoup find(class_='first') tag insert(1 extraSoup span) print mainSoup find(class_='second') ```` Output: ````<div class="second"></div> ```` |
What do Mexicans call Mexico City? | Chilangolandia |
Most concise way to create a python dictionary from local variables In Objective-C you can use the `NSDictionaryOfVariableBindings` macro to create a dictionary like this ````NSString *foo = @"bar" NSString *flip = @"rar" NSDictionary *d = NSDictionaryOfVariableBindings(foo flip) // d > { 'foo' => 'bar' 'flip' => 'rar' } ```` Is there something similar in python? I often find myself writing code like this ````d = {'foo': foo 'flip': flip} # or d = dict(foo=foo flip=flip) ```` Is there a shortcut to do something like this? ````d = dict(foo flip) # > {'foo': 'bar' 'flip': 'rar'} ```` | No this shortcut in python does not exist But perhaps this is what you need: ````>>> def test(): x = 42 y = 43 return locals() >>> test() {'y': 43 'x': 42} ```` Also python provides `globals()` and `vars()` build-in functions for such things See the <a href="http://docs python org/library/functions html#locals" rel="nofollow">doc</a> |
The card games included what genre? | null |
shutil move sometimes changes file properties on mac On Mac there is the `hide extension` property on files. You can hide/show the extension by right click > Get Info > Name & Extension > Hide Extension checkbox. I have a file whose file extension is shown; however, when I use `shutil.move()` to move it elsewhere, the resulting file has its extension hidden. How do I preserve the original file extension property? | Well, I have just looked into the code of shutil from 2.7.6 and there is nothing in it that would copy extended attributes. Also there is <a href="http://bugs.python.org/issue14082" rel="nofollow">http://bugs.python.org/issue14082</a> So I guess you should either use a fixed version or handle it manually. Likely `shutil` from Python 3.3.1 installed on localhost actually copies extended attributes if the `os` package contains the `listxattr` function |
django-tables2 - How to assign total item count with non-queryset data pagination? I am fetching data from an API with "requests" library and I want to show 10 item per page in html table So I am fetching 10 item from API with total object count (assume there are 1000 items) When I push the data to html table pagination not creating because I do not know how to assign total item count to table ````# tables py class CustomerTable(tables Table): id = tables Column() name = tables LinkColumn('customer:edit' kwargs={'id': A('id')}) class Meta: order_by = 'name' # views py # content of a view data = {'total_count': 1000 "objects": [{'id':1 'name': 'foo'} {'id':2 'name': 'bar'} {'id':3 'name': 'baz'}]} table = CustomerTable(data['objects']) table paginate(page=self request GET get('page' 1) per_page=1) self render_to_response({'table': table}) ```` Question: How to assign total item count(`data['total_count']`) to the table for pagination? | From the documentation <a href="http://django-tables2 readthedocs org/en/latest/#populating-a-table-with-data" rel="nofollow">here</a>: <blockquote> Tables are compatible with a range of input data structures If youâve seen the tutorial youâll have seen a queryset being used however any iterable that supports len() and contains items that expose key-based accessed to column values is fine </blockquote> So you can create your own wrapper class around your API calls which requests the length of your data when len() is called Something like this might work although you would probably want to optimize it to only access the API and return just the items needed not the entire data set as is suggested below ````class ApiDataset(object): def __init__(self api_addr): self http_api_addr = api_addr self data = None def cache_data(self): # Access API and cache returned data on object if self data is None: self data = get_data_from_api() def __iter__(self): self cache_results() for item in self data['objects']: yield item def __len__(self): self cache_results() return self data['total_count'] ```` Using this setup you would pass in an APIDataset instance to the django-tables2 Table constructor |
Quite pythonic but not convincing as pandas style I have a dataframe where each series if filled with 0 and 1 as follows: ````flagdf=pd DataFrame({'a':[1 0 1 0 1 0 0 1] 'b':[0 0 1 0 1 0 1 0]}) ```` Now depending on some analysis I have done I need to change some 0s to 1s So the final dataframe will be: ````final=pd DataFrame({'a':[1 1 1 0 1 1 1 1] 'b':[1 1 1 0 1 1 1 1]}) ```` The results of the analysis which shows which 0s have to be changed are stored in a second dataframe built with a multi-index: ```` first last a 1 1 1 5 5 6 b 0 0 1 5 5 5 7 7 7 ```` For each 'a' and 'b' I have the first and the last indexes of the 0s I need to change First question: The second index in the multi-index dataframe is equal to the series 'first' I was initially trying to use it directly but I found it easier to handle two series rather than an index and a series Am I missing something? Here is the code to do the job: ````def change_one_value_one_column(flagdf col_name event): flagdf[col_name] iloc[event]=1 def change_val_column(col_name tochange flagdf): col_tochange=tochange ix[col_name] tomod=col_tochange[['first' 'last']] values iter_tomod=[xrange(el[0] el[1]+1) for el in tomod] [change_one_value_one_column(flagdf col_name event) for iterel in iter_tomod for event in iterel] [change_val_colmun(col_name) for col_name in flagdf columns] ```` Second question: I genuinely think that a list comprehension is always good but in cases like that when I write a function specifically for a list comprehension I have some doubt Is it truly the best thing to do? Third question: I think that the code is quite pythonic but I am not proud of that because of the last list comprehension which is running over the series of the dataframe: using the method apply would look better to my eyes (but I am not sure how to do it) Nontheless is there any real reason (apart from elegance) I should work to do the changes? | To answer the part about exhausting an iterator I think you have a few pythonic choices (all of which I prefer over a list comprehension): ````# the easiest and most readable for col_name in flagdf columns: change_val_column(col_name) # consume/exhaust an iterator using built-in any (assuming each call returns None) any(change_val_colmun(col_name) for col_name in flagdf columns) # use itertools' consume recipe consume(change_val_colmun(col_name) for col_name in flagdf columns) ```` <them>See <a href="https://docs python org/2/library/itertools html#recipes" rel="nofollow">consume recipe from itertools</a> </them> However when doing this kind of thing in numpy/pandas you should be asking yourself "can I vertorize / use indexing here?" If you can your code will usually be both faster and more readable I <them>think</them> in this case you will be able to remove one level of loops by doing something like: ````def change_val_column(col_name tochange flagdf): col_tochange = tochange ix[col_name] # Note: you are accessing index not column here?? tomod = col_tochange[['first' 'last']] values for i j in tomod: flag_df loc[i:j col_name] = 1 ```` You <strong>may</strong> even be able to remove the for loop but it is not obvious how / what the intention is here |
A simple looping command In Python So I recently got into programming in python and decided to make a simple code which ran some simple maths e g calculating the missing angle in a triangle and other simple things like that After I made the program and a few others I thought that maybe other people I know could use this so I decided to try and make this as simple as possible The code can be found below: ````a = int(input("What is one of the angles?")) b = int(input("What is the other angle in the triangle?")) c = (a b) d = 180 f = int(180 - c) print(f) ```` The code itself does work but the only problem is that if you have more than 1 question it becomes tedious and a rather cumbersome task to constantly load up Python and hit F5 so my idea was to have it loop an infinite number of times until you decided to close down the program Every time I tried searching for a way to do this all of the while True: statements were for bigger and more complicated pieces of code and with this being maybe my fifth or tenth piece of code I could not understand a few of the coding for it I would appreciate any help or advice for this subject as it would make my day if anyone's willing to help | `while True` is good enough for this script why abandon it? ````while True: a = int(input("What is one of the angles?")) b = int(input("What is the other angle in the triangle?")) c = (a b) d = 180 f = int(180 - c) print(f) ```` |
How to Catch a 503 Error and retry request I am using `grequests` to make about 10,000 calls but some of these calls return as `503` This problem goes away if I do not queue all 10,000 calls at once. Breaking it into groups of 1000 seems to do the trick. However, I was wondering if there is a way to catch this `503` error and just retry the `request` This is how I am calling and combining the threads: ````import grequests rs = (grequests.get(u, headers=header) for u in urls) response = grequests.map(rs) ```` I know this is really vague but I do not even know if this is possible using `grequests` I naively tried ````import grequests rs = (grequests.get(u, headers=header) for u in urls) time.sleep(1) response = grequests.map(rs) ```` But this does nothing to slow it down | Maybe you can try using event hooks to catch the error and re-launch the requests <a href="http://docs.python-requests.org/en/master/user/advanced/#event-hooks" rel="nofollow">http://docs.python-requests.org/en/master/user/advanced/#event-hooks</a> ```` import grequests def response_handler(response): if response.status_code == 503: print('error 503') rs = (grequests.get(u, headers=header, hooks={'response': response_handler}) for u in urls) response = grequests.map(rs) ```` |
Keyboard Interrup not caught while using sniff function I am trying to empty the list whenever there is a keyboard interrupt but this does not work Below is my code ````while 1: try: n=0 while n<10: pkt=sniff(prn=lambda x:x sprintf("{Ether : %ether src% -> %ether dst%}") timeout=1 store=1) buffpkt+=pkt[:] ##Store packets in a list buffer n+=1 self _tempbuffstore(buffpkt) except KeyboardInterrupt: buffpkt=[] ##Flush the list asynchronously "THIS IS THE MAIN REQUIREMENT FOR INTERRUPT" raise ```` I tried doing all possible things but this interrupt handler is not working Any help? | It is looks like you call to 'sniff' which I cannot reproduce is blocking It is very common for network-related methods You should put your sniff method in a separate thread and then it is possible to handle your exception in the main loop (and you will also have to deal with stopping the thread on exceptions) |
Unexpectedly large array created with numpy ones when setting names I am trying to get a `10 x 8 array` using the code below with numpy ````import numpy as np columns = ["Port Wt", "Bench Wt", "Port Retn", "Bench Retn", "Attrib", "Select", "Inter", "Total"] a = np.ones([10, len(columns)], dtype={"names": columns, "formats": ["f8"]*len(columns)}) ```` I am new to numpy and I get unexpected behaviour - I am getting a `10 x 8 x 8 grid` instead. I have tried ````a.dtype.names = columns ```` and get a `ValueError: there are no fields defined` What am I doing wrong and how would I get a 10 x 8 grid as desired with the names? Thanks | Your code does produce a `10 x 8` array, i.e. `a.shape == (10, 8)` However, each element in the array has 8 fields, adding to a total of `10 x 8 x 8` fields. So what you probably want is an array with shape `(10,)` and 8 fields per element: ````a = np.ones((10,), dtype={"names": columns, "formats": ["f8"]*len(columns)}) ```` |
Which species did Darwin compare with the human struggle to survive? | plants |
Accessing elements on the next page in selenium python I am trying to write a program in Python3 5 using Selenium to automate downloading process in zbigz com using Firefox webdriver My code is as follows: ````import time from selenium import webdriver from selenium common exceptions import TimeoutException #magnet link for the purpose of testing mag = "magnet:?xt=urn:btih:86259d1c8d9dfbe15b6290268231e68d414fed23&dn=The Big Bang Theory S09E21 HDTV x264-LOL%5Bettv%5D&tr=udp%3A%2F%2Ftracker openbittorrent com%3A80&tr=udp%3A%2F%2Fopen demonii com%3A1337&tr=udp%3A%2F%2Ftracker coppersurfer tk%3A6969&tr=udp%3A%2F%2Fexodus desync com%3A6969" def startdriver(): #starting firefox driver and waiting for 100 seconds driver = webdriver Firefox() driver implicitly_wait(100) return driver def download(driver url mg): #opening up firefox at url = www zbigz com driver get(url) try: #accessing the required elements on the first page that opens up entry_box = driver find_element_by_xpath(' //*[@id=\'text-link-input\']') go_button = driver find_element_by_id('go-btn') #entering magnet link entry_box clear() entry_box send_keys(mg) #clicking on the 'Go' button go_button click() #accessing the free option free_button = driver find_element_by_id('cloud-free-btn') #clicking on the free option free_button click() #now comes the next page ('www zbigz com/myfiles') where everything goes wrong while driver find_elements_by_tag_name('html') is None: #waiting for the page to load continue #this button is what I need to click cloud_btn = driver find_elements_by_xpath(' //*[@id=\'86259d1c8d9dfbe15b6290268231e68d414fed23\']/div[1]') #allowing some time so that the download gets cached fully time sleep(60) #clicking cloud_btn click() except TimeoutException: print('Page could not be loaded Get a better connection!') if __name__=='__main__': #starting driver and downloading d = startdriver() download(d zbigz mag) time sleep(30) d quit() ```` However I cannot access the button on the next page When i run this code this is the error i get: <blockquote> Traceback (most recent call last): File "G:/Python/PyCharm Projects/TorrentDownloader py" line 88 in download(d zbigz mag) File "G:/Python/PyCharm Projects/TorrentDownloader py" line 80 in download cloud_btn click() AttributeError: 'list' object has no attribute 'click' </blockquote> I beleive that I am not able to access elements on teh next page And since the for submission method is POST I cannot use `driver get(zbigz+'myfiles')` So please suggest a way to access the elements on the page that follows | <a href="http://selenium-python readthedocs org/api html#selenium webdriver remote webdriver WebDriver find_elements_by_xpath" rel="nofollow">`WebDriver find_elements_by_xpath`</a> returns a list of elements If you want only one element use <a href="http://selenium-python readthedocs org/api html#selenium webdriver remote webdriver WebDriver find_element_by_xpath" rel="nofollow">`WebDriver find_element_by_xpath`</a> (no `s`) instead: ````cloud_btn = driver find_element_by_xpath(" //*[@id='86259d1c8d9dfbe15b6290268231e68d414fed23']/div[1]") ```` BY THE WAY using `" "` string literal you do not need to escape `'` inside |
What crops was the New Deal Program made for? | cotton and tobacco |
Pipeline to create Voronoi Meshes I would like to implement a Maya plugin (this question is independent from Maya) to create 3D Voronoi patterns Something like <a href="http://i stack imgur com/8RYgU jpg" rel="nofollow"><img src="http://i stack imgur com/8RYgU jpg" alt="enter image description here"></a> I just know that I have to start from point sampling (I implemented the adaptive poisson sampling algorithm described in <a href="http://link springer com/article/10 1007%2Fs11432-011-4322-8#/page-1" rel="nofollow">this paper</a>) I thought that from those points I should create the 3D wire of the mesh applying Voronoi but the result was something different from what I expected Here are a few example of what I get handling the result i get from scipy spatial Voronoi like this (as suggested <a href="http://stackoverflow com/questions/23658776/voronoi-diagram-edges-how-to-get-edges-in-the-form-point1-point2-from-a-scip">here</a>): ````vor = Voronoi(points) for vpair in vor ridge_vertices: for i in range(len(vpair) - 1): if all(x >= 0 for x in vpair): v0 = vor vertices[vpair[i]] v1 = vor vertices[vpair[i+1]] create_line(v0 tolist() v1 tolist()) ```` The grey vertices are the sampled points (the original shape was a simple sphere): <a href="http://i stack imgur com/gz8hD png" rel="nofollow"><img src="http://i stack imgur com/gz8hD png" alt="enter image description here"></a> Here is a more complex shape (an arm) <a href="http://i stack imgur com/mCkPm png" rel="nofollow"><img src="http://i stack imgur com/mCkPm png" alt="enter image description here"></a> I am missing something? Can anyone suggest the proper pipeline and algorithms I have to implement to create such patterns? | I saw your question since you posted it but didnât have a real answer for you however as I see you still didnât get any response Iâll at least write down some ideas from me Unfortunately itâs still not a full solution for your problem For me it seems youâre mixing few separate problems in this question so it would help to break it down to few pieces: <h1>Voronoi diagram:</h1> The diagram is by definition infinite so when you draw it directly you should expect a similar mess youâve got on your second image so this seems fine I donât know how the SciPy does that but the implementation Iâve used flagged some edge ends as âinfiniteâ and provided me the edges direction so I could clip it at some distance by myself Youâll need to check the exact data you get from SciPy In the 3D world youâll almost always want to remove such infinite areas to get any meaningful rendering or at least remove the area that contains your camera <h1>Points generation:</h1> The Poisson disc is fine as some sample data or for early R&D but itâs also the most boring one :) Youâll need more ways to generate input points I tried to imagine the input needed for your ball-like example and I came up with something like this: - Create two spheres of points with the same center but different radius When you create a Voronoi diagram out of it and remove infinite areas you should end up with something like a football ball If you created both spheres randomly youâll get very irregular boundaries of the âballâ but if you scale the points of one sphere to use for the 2nd one you should get a regular mesh similar to ball You can also use similar points but add some random offset to control the level of surface irregularity <old start="2"> - Get your computed diagram and for each edge create few points along this edge - this will give you small areas building up 
the edges of bigger areas Play with random offsets again Try to ignore edges that does not touch any infinite region to get result similar to your image - Get the points from both stages and compute the diagram once more <h1>Mesh generation:</h1> Up to now it didnât look like your target images In fact it may be really hard to do it with production quality (for a Maya plugin) but I see some tricks that may help What I would try first would be to get all my edges and extrude some circle along them You may modulate circle size to make it slightly bigger at the ends Then do Boolean âORâ between all those meshes and some Mesh Smooth at the end This way may give you similar results but youâll need to be careful at mesh intersections they can get ugly and need some special treatment |
socat reverse shell run python script I am new to using socat and was wondering if this was even possible. So my scenario is: I am working on my OSCP and have a machine that I can get a reverse shell back on, let's say port 8888. I am trying to keep from writing to disk on this Windows XP machine. Is there any way I can set up the socat connection so when the machine connects back to me it will be able to run my python enumeration script, collect the output and write the output back to my local disk? Thank you | Not sure about your scenario details, but socat has the EXEC address type: ```` EXEC:<command-line> Forks a sub process that establishes communication with its parent process and invokes the specified program with execvp(). <command-line> is a simple command with arguments separated by single spaces. If the program name contains a '/', the part after the last '/' is taken as ARGV[0]. If the program name is a relative path, the execvp() semantics for finding the program via $PATH apply. After successful program start, socat writes data to stdin of the process and reads from its stdout, using a UNIX domain socket generated by socketpair() per default. (example) Option groups: FD SOCKET EXEC FORK TERMIOS Useful options: path fdin fdout chroot su su-d nofork pty stderr ctty setsid pipes login sigint sigquit See also: SYSTEM ```` so in your case, if I understand it, you would want to do something like: <blockquote> socat TCP-LISTEN:8888 EXEC:/path/to/your/python/script </blockquote> |
Wildcards in column name for MySQL I am trying to select multiple columns but not all of the columns from the database All of the columns I want to select are going to start with "word" So in pseudocode I would like to do this: ````SELECT "word%" from searchterms where onstate = 1; ```` More or less I am not finding any documentation on how to do this - is it possible in MySQL? Basically I am trying to store a list of words in a single row with an identifier and I want to associate all of the words with that identifier when I pull the records All of the words are going to be joined as a string and passed to another function in an array/dictionary with their identifier I am trying to make as FEW database calls as possible to keep speedy code <strong>Ok here is another question for you guys:</strong> There are going to be a variable number of columns with the name "word" in them Would it be faster to do a separate database call for each row with a generated Python query per row or would it be faster to simply SELECT * and only use the columns I needed? Is it possible to say SELECT * NOT XYZ? | No you cannot dynamically produce the list of columns to be selected It will have to be hardcoded in your final query Your current query would produce a result set with one column and the value of that column would be the string `"word%"` in all rows that satisfy the condition |
PyQt - how to read documentation I need some advise about how to read PyQt's documentation Because on my own I can hardly figure anything out I am a programming newbie so sorry if my question is confusing I will try to explain the best I can :) This is an example of where I got stuck I was experimenting around with QListView Basically just trying to print out data of what I have selected in the view I got stuck until Justin a very patient Python tutor showed me this bit of code ````listView clicked connect(B) def B(index): record = sqlmodel record(index row()) ```` It connects a clicked signal from QListView to function B I was very surprised that right away the clicked event sends `index` to B by itself I tried to look through QListView's documentation but cannot find anything that explains this <a href="http://www riverbankcomputing co uk/static/Docs/PyQt4/html/qlistview html" rel="nofollow">http://www riverbankcomputing co uk/static/Docs/PyQt4/html/qlistview html</a> In this case where in the docs should I look at to learn about this clicked event and the index it sends out? Would really appreciate any advise :) | The following code connects QTableView's clicked signal to your function QTableView emits clicked whenever someone clicks an item which means that your function will be called automatically since it is connected to that signal ````listView clicked connect(viewItemClicked) ```` Or am I missing something in your question? Read up on signal-slots in Qt if the above is unclear (PyQt allows any function (i e python callable) to be connected to a signal not just a slot (as it is in C++) |
scrap text by HTML class using BeautifulSoup return null I am trying to get all the comments information from this page(<a href="http://rads stackoverflow com/amzn/click/1449319246" rel="nofollow">http://www amazon com/Learning-Java-Patrick-Niemeyer/dp/1449319246%3FSubscriptionId%3DAKIAIZJQKUHUCXRLH6MQ%26tag%3Dyuplayit-20%26linkCode%3Dxm2%26camp%3D2025%26creative%3D165953%26creativeASIN%3D1449319246</a>) the text inside the tag`<div class=âdrkgryâ> </div>`but it always shows returns `[]` I donât know whatâs happening python: ````import bs4 from BeautifulSoup data = open("example_1 html") read() soup = BeautifulSoup(data) soup find_all("div" class="drkgry") ```` Iâve also tried `soup findall("div" class="drkgry") soup find_all('div' attrs ={'class':'drkgry'}) `but they just does not work The data source I want to scrap: ````</div> <div class="txtsmall mt4 fvavp"><span class="inlineblock formatVariation"><span class="gr3 gry formatKey">Format:</span><span class="formatValue">Paperback</span></span></div> <div class="mt9 reviewText"> <div class="drkgry"> Learning Java (Fourth Edition) is book for Java practitioner as reference book This covers lot of topics <br><br>This is an excellent book for someone who knows basics of programming This book is not beginners This book lacks examples and exercises which may disappoint few people <br><br>Book has 24 chapters covering almost all of basic Java The chapter one talks about historical aspects Second chapter is brief introduction of java but it assumes that reader is aware of programming OOP threading etc which is difficult for any beginner </div> </div> <div class="clearboth txtsmall gt9 vtStripe"> <div class="fl cmt"> ```` Does anyone help me solve the problem? | Use: ````class_="drkgry" ```` Instead of: ````class = "drkgry" ```` That is all i think |
Get the order of parameters for python function? I have the following function: ````def foo(a b c): print "Hello" ```` Let us say I know it exists and I know it takes three parameters named a b and c but I do not know in which order I want to be able to call foo given a dictionary like: ````args = { "a": 1 "b" : 17 "c": 23 } ```` Is there a way to find out in which order the parameters are to be passed? | You do not need to; let Python figure that out for you: ````foo(**args) ```` This applies your dictionary as keyword arguments which is perfectly legal You can use the arguments to `foo()` in any order when you use keyword arguments: ````>>> def foo(a b c): print a b c >>> foo(c=3 a=5 b=42) 5 42 3 >>> args = {'a': 1 'b' : 17 'c': 23} >>> foo(**args) 1 17 23 ```` You can still figure out the exact order using the <a href="http://docs python org/2/library/inspect html#inspect getargspec">`inspect getargspec()` function</a>: ````>>> import inspect >>> inspect getargspec(foo) ArgSpec(args=['a' 'b' 'c'] varargs=None keywords=None defaults=None) ```` But why have a dog and bark yourself? |
What made identifying House leadership difficult? | Internal party disunity |
Pandas sum rows by specific classes Here is my quesition: Take this dataframe(clipped from this <a href="http://stackoverflow com/questions/29414337/pandas-sum-up-rows-by-group">question</a>) for example: ```` date type 0 1 2 3 2003-01-01 unemp 1 733275e+09 2 067889e+09 3 279421e+09 3 223396e+09 2005-01-01 unemp 1 413758e+09 2 004171e+09 2 383106e+09 2 540857e+09 2007-01-01 unemp 1 287548e+09 1 462072e+09 2 831217e+09 3 528558e+09 2009-01-01 unemp 2 651480e+09 2 846055e+09 5 882084e+09 5 247459e+09 2011-01-01 unemp 2 257016e+09 4 121532e+09 4 961291e+09 5 330930e+09 2013-01-01 unemp 7 156784e+08 1 182770e+09 1 704251e+09 2 587171e+09 2003-01-01 emp 6 012397e+09 9 692455e+09 2 288822e+10 3 215460e+10 2005-01-01 emp 5 647393e+09 9 597211e+09 2 121828e+10 3 107219e+10 2007-01-01 emp 4 617047e+09 8 030113e+09 2 005203e+10 2 755665e+10 ```` My target: summarize the rows which has different type(unemp/emp) and make a new dataframe like this: <img src="http://i12 tietuku com/49ae1e801254f460 png" alt=""> | Use a <a href="http://pandas pydata org/pandas-docs/stable/groupby html" rel="nofollow">`groupby`-`sum`</a> : ```` >>> df groupby('type') sum() reset_index() type 0 1 2 3 0 emp 16276837000 27319779000 64158530000 90783440000 1 unemp 10058755400 13684489000 21041370000 22458371000 ```` |
Python: Searching a specific columns of a table for not matching items ````with open("test txt" "r") as test: reader = csv reader(test delimiter="\t") writer = csv writer(table delimiter="\t") for row in reader: for field in row: if field not in keywords: writer writerow(row) break ```` It seems that this code writes out every row multiple times I guess that it looks up every single field in each column How can I specify a single column? so this is the code I am using right now and it seems that it misses a few rows where the keyword is not present in any column ````table = open("table txt" "w") with open("test txt" "r") as test: reader = csv reader(test delimiter="\t") writer = csv writer(table delimiter="\t") for row in reader: if all(field not in keywords for field in row): writer writerow(row) ```` | You can use `zip` to get your columns then You can use a generator expression within `all` function for checking that all the elements mett the condition : ````with open("test txt" "r") as Spenn open("test txt" "r") as table: reader = zip(*csv reader(Spenn delimiter="\t")) writer = csv writer(table delimiter="\t") for row in reader: if all(field not in keywords for field in row): writer writerow(row) ```` But if you just want to write the rows that meet the condition you can use the following code : ````with open("test txt" "r") as Spenn open("test txt" "r") as table: reader = csv reader(Spenn delimiter="\t") writer = csv writer(table delimiter="\t") for row in reader: if all(field not in keywords for field in row): writer writerow(row) ```` |
Sort a PyTables table on two columns I want to sort a PyTables table Sorting on a single column is easy: I can just create a cs_index for the column I want to sort on and then use the Table itersorted() to get the rows in sorted order which I insert into a new table row by row) The problem is that I want to sort a table on <them>two</them> columns The table is in the following form: ````chr start end ------------------ chr1 1000 2000 chr1 1500 3000 chr2 1000 5000 chr2 1200 2000 ```` In this example the order is correct i e first it is sorted on 'chr' then on 'start' Is it possible to achieve this two-column sorting in an elegant way? P S I know that I can sort by extracting the columns and then sort the numpy arrays in-memory by using numpy lexsort but the data I am working may sometimes be too large (possibly billions of rows) | I do not think what you are looking for is implemented in Pytables So you probably have to do it yourself My advice would be: put a csi_index on both columns and when you need to treat the data in a sorted manner you need to implement the iteration yourself Pull small enough bits sorted by one column into memory and then work on that (sorting with respect to the other column and treating the data) Hope this helps |
Python signal: reading return from signal handler function Simple question: how do you read the return value of a function that is called as a signal handler? ````import signal def SigHand(sig, frm): return 'SomeValue' signal.signal(signal.SIGCHLD, SigHand) signal.pause() ```` Is there a way to read the return value `'SomeValue'` other than setting it as a global? | You could create a simple class with a return value attribute ````>>> import signal >>> class SignalHandler(object): def __init__(self): self.retval = None def handle(self, sig, frm): self.retval = sig >>> s = SignalHandler() >>> s.retval >>> signal.signal(signal.SIGALRM, s.handle) 0 >>> signal.alarm(1) 0 >>> s.retval 14 ```` |
csv module does not work in Google App Engine We are using Google App Engine with Python for our application. I already wrote code that exports data to CSV using the csv module. But when I try to read from the CSV: ````import csv users_csv_file = self.request.get("users_csv_file") csv_reader = csv.reader(users_csv_file) ```` I get this exception: ````AttributeError: 'module' object has no attribute 'reader' ```` What is the problem and why can't I import csv? | You have to pass in a file object, not a file name ````import csv with open('eggs.csv', 'rb') as csvfile: spamreader = csv.reader(csvfile, delimiter=' ', quotechar='|') for row in spamreader: print ', '.join(row) ```` <blockquote> Spam, Spam, Spam, Spam, Spam, Baked Beans Spam, Lovely Spam, Wonderful Spam </blockquote> |
SOM based recommendation engine My friend and I have decided to do a project on a recommendation engine in Python. Initially we decided to do our project using SVM, but soon found it difficult since that is supervised learning, and now we are planning to use a self-organizing map and possibly couple it with collaborative filtering (not sure if that is possible) to build the engine. Would anyone suggest a good reference for self-organizing maps? Also, are there any alternate options apart from just using collaborative filtering? Thanks a lot | I am not sure that a self-organizing map is actually the best fit for your application. It may preserve the topological properties of your input space, but it does not really fit well with a sparse data set, which is a constant problem in recommendation engines. I am not going to say that an SVM is any better; in fact it is probably a lot further from what you actually want to do, but a SOM will only be marginally better. That said, if you want to learn how to build a SOM, the following resources, in order of usefulness, are worth looking at. Also worth mentioning that a SOM is actually very close in theory to a convolutional neural net, so any resources for those should carry over pretty well.
````
http://en.wikipedia.org/wiki/Self-organizing_map
http://ftp.it.murdoch.edu.au/units/ICT219/Papers%20for%20transfer/papers%20on%20Clustering/Clustering%20SOM.pdf
http://www.eicstes.org/EICSTES_PDF/PAPERS/The%20Self-Organizing%20Map%20%28Kohonen%29.pdf
http://www.cs.bham.ac.uk/~jxb/NN/l16.pdf
http://www.willamette.edu/~gorr/classes/cs449/Unsupervised/SOM.html
````
As far as approaches that would probably make more sense for your particular application, I would suggest a Restricted Boltzmann Machine. The idea with an RBM is that you would attempt to create a "recommendation profile" for each user based on various statistics about them, defining a feature vector for the user. This basic prediction would happen in a manner closely resembling a deep neural net. Once your net is trained in one direction, the real brilliance of an RBM is that you then run it backward: you try to generate user profiles from recommendation profiles, which works exceedingly well for applications like these. For information on RBMs you can visit these links:
````
http://deeplearning.net/tutorial/rbm.html
http://www.cs.toronto.edu/~hinton/absps/guideTR.pdf
http://www.cs.toronto.edu/~hinton/absps/netflix.pdf
````
Hinton is basically the authority on these and is also a total BAMF of data science. The last link in the RBM list would actually be able to totally build your recommendation engine by itself, but just in case you want to use more pre-built libraries or leverage other parts of data science, I would highly suggest using some kind of dimensionality reduction mechanism before you try any collaborative filtering. The biggest problem with collaborative filtering is that you usually have a very sparse matrix that does not quite give you the information you want and ends up holding onto a lot of stuff that is not really useful to you. For that reason there is a series of algorithms in the field of topic modelling that will get you a lower dimensionality for your data, which will then make collaborative filtering trivial, or that could be leveraged in any of the other approaches above to get more meaningful data with less intensity. <a href="http://radimrehurek.com/gensim/" rel="nofollow">gensim</a> is a Python package that has a lot of topic modelling done for you and will also build out tfidf vectors for you, utilizing numpy and scipy. It is also very well documented. The examples are, however, targeted towards more direct NLP. Just keep in mind that the fact that their individual items happen to be words has no effect on the underlying algorithms, and you can use it for less well-constrained systems. If you want to go for gold in the topic modelling section, you should really look into Pachinko Allocation (PA), which is a new algorithm in topic modelling that has more promise than most other topic modellers but does not come bundled in packages.
````
http://www.bradblock.com/Pachinko_Allocation_DAG_Structured_Mixture_Models_of_Topic_Correlations.pdf
````
I wish you luck in your data science exploits! Let me know if you have any more questions and I can try to answer them
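Since the answer points at gensim for the dimensionality-reduction step, here is a minimal sketch of building TF-IDF vectors and a small latent space from a toy corpus; the "documents" below are invented (in a recommender they could be per-user lists of item ids):
````
from gensim import corpora, models

texts = [["item1", "item2", "item3"],
         ["item2", "item4"],
         ["item1", "item3", "item4"]]

dictionary = corpora.Dictionary(texts)                  # token <-> id mapping
corpus = [dictionary.doc2bow(text) for text in texts]   # sparse bag-of-words vectors

tfidf = models.TfidfModel(corpus)                       # reweight counts by TF-IDF
corpus_tfidf = tfidf[corpus]

# Project into a low-dimensional latent space (2 dimensions only because the data is tiny)
lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=2)
for doc in lsi[corpus_tfidf]:
    print(doc)
````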
No such column error message I read the tutorial from the django site and now I hit the wall It seems very simple but it is giving me a problem Everytime I run populate_rango py script I get an error I have modified my script and my models py file based on the example from Tango with Django site but still getting this not creating one of the column error : ```` Starting Rango population script Traceback (most recent call last): File "populate_rango py" line 67 in <module> populate() File "populate_rango py" line 8 in populate url="http://www google com") File "populate_rango py" line 55 in add_page p = Page objects get_or_create(category=cat title=title url=url likes=likes views=views)[0] File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/site- packages/django/db/models/manager py" line 154 in get_or_create return self get_queryset() get_or_create(**kwargs) File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/site-packages/django/db/models/query py" line 376 in get_or_create return self get(**lookup) False File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/site-packages/django/db/models/query py" line 304 in get num = len(clone) File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/site-packages/django/db/models/query py" line 77 in __len__ self _fetch_all() File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/site-packages/django/db/models/query py" line 857 in _fetch_all self _result_cache = list(self iterator()) File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/site-packages/django/db/models/query py" line 220 in iterator for row in compiler results_iter(): File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/site-packages/django/db/models/sql/compiler py" line 713 in results_iter for rows in self execute_sql(MULTI): File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/site-packages/django/db/models/sql/compiler py" line 786 in execute_sql cursor execute(sql params) File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/site- packages/django/db/backends/util py" line 69 in execute return super(CursorDebugWrapper self) execute(sql params) File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/site-packages/django/db/backends/util py" line 53 in execute return self cursor execute(sql params) File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/site-packages/django/db/utils py" line 99 in __exit__ six reraise(dj_exc_type dj_exc_value traceback) File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/site- packages/django/db/backends/util py" line 53 in execute return self cursor execute(sql params) File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/site- packages/django/db/backends/sqlite3/base py" line 451 in execute return Database Cursor execute(self query params) django db utils OperationalError: no such column: rango_page likes ```` populate_rango py ````import os def populate(): search_cat = add_cat('Search Engins') add_page(cat=search_cat title="Google" url="http://www google com") add_page(cat=search_cat title="Yahoo !" 
url="http://www yahoo com") add_page(cat=search_cat title="Bing" url="http://www bing com") social_cat = add_cat("Social Media") add_page(cat=social_cat title="Facebook" url="http://www facebook com") add_page(cat=social_cat title="LinkedIn" url="http://www linkedin com") add_page(cat=social_cat title="Twitter" url="http://www twitter com/") news_cat = add_cat("News Sites") add_page(cat=news_cat title="CNN" url="http://www cnn com/") comme_cat = add_cat("Commerce") add_page(cat=comme_cat title="Amazon" url="http://www amazon com") add_page(cat=comme_cat title="eBay" url="http://www ebay com") for c in Category objects all(): for p in Page objects filter(category=c): print "- {0} - {1}" format(str(c) str(p)) def add_page(cat title url views=0 likes=0): p = Page objects get_or_create(category=cat title=title url=url likes=likes views=views)[0] return p def add_cat(name): c = Category objects get_or_create(name=name)[0] return c if __name__ == '__main__': print "Starting Rango population script " os environ setdefault('DJANGO_SETTINGS_MODULE' 'tangorango settings') from rango models import Category Page populate() ```` modes py: ````from django db import models class Category(models Model): name = models CharField(max_length=128 unique=True) def __unicode__(self): return self name class Page(models Model): category = models ForeignKey(Category) title = models CharField(max_length=128) url = models URLField() likes = models IntegerField(default=0) views = models IntegerField(default=0) def __unicode__(self): return self title ```` | run this command : ````python manage py sqlall <your_app> ```` see the result then if there was not `likes` table among them run this command and add it manually : ````python manage py dbshell ```` you can add it with this code: ````ALTER TABLE "Page" ADD COLUMN "likes" IntegerField(0) ```` <strong>but note that sync db does not make integrate schema changes once the tables are created You have to delete the database manually and do syncdb again </strong> for more information read <a href="https://docs djangoproject com/en/dev/ref/django-admin/#syncdb" rel="nofollow">THIS</a> recipe of `syncdb` in django site ! |
What is the minimum amount of people that can be involved in a bribe? | two |
Make a Tkinter Toplevel active I am trying to make a `Toplevel` widget that is the active window on the screen (I want it so that if you press <kbd>Enter</kbd> it exits the window). I already have the key bound to the widget, but I cannot seem to get the window to be the main window on my computer. I am running my program using Notepad++ (I have a shortcut for this specific program since I will be using it a lot). Here is my code:
````
def main():
    root = Tk(className=' Module Opener')
    app = GetFileName(root)
    root.rowconfigure(0, weight=1)
    root.columnconfigure(0, weight=1)
    root.bind('<Return>', (lambda e, b=app.goButton: b.invoke()))
    root.mainloop()
    f, pythonType = app.fileName, app.pythonType
    if f[-3:] != '.py':
        f = '.py'
    moduleFile = getFilePath(f, pythonType)
    if not moduleFile is None:
        subprocess.call([r"C:\Program Files\Notepad++\notepad++.exe", moduleFile])
    else:
        root.withdraw()
        finalRoot = Toplevel(root)
        finalRoot.grab_set()  # I thought this would make it active
        finalApp = FileNotExist(finalRoot, f)
        finalRoot.rowconfigure(0, weight=1)
        finalRoot.columnconfigure(0, weight=1)
        finalRoot.bind('<Return>', (lambda e, b=finalApp.okButton: b.invoke()))
        finalRoot.mainloop()
````
I want it so that when it opens, if I press <kbd>Enter</kbd> it does my command; however, I have to click in the window first so that it becomes active, and then it works. I tried various things such as `finalRoot.lift()`, `finalRoot.focus_set()`, `finalRoot.grab_set()`/`finalRoot.grab_set_global()` (I saw these methods in <a href="http://stackoverflow.com/questions/15944533/how-to-keep-the-window-focus-on-new-toplevel-window-in-tkinter">another question</a>) and `finalRoot.focus()`. The first window, `Tk()`, is active when the program starts. However, the `Toplevel()` is not. I also tried making two `Tk()`'s (destroying `root` and then creating `finalRoot` as a new `Tk()` instance), but that did not work either. How can I do this? Thanks! | I had the same problem and tried everything I could find. Unfortunately the answer is that it depends on your OS. My window is automatically focused on my old Mac but not on OSX Lion
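For reference, the combination of calls that most often manages to grab focus when plain `focus_set()` does not is `focus_force()` scheduled once the window exists, optionally with a temporary '-topmost' hint; whether it actually works still depends on the platform and window manager, as the answer says. A standalone sketch:
````
import Tkinter as tk   # "tkinter" on Python 3

root = tk.Tk()
root.withdraw()

top = tk.Toplevel(root)
top.bind('<Return>', lambda e: top.destroy())

top.lift()
top.attributes('-topmost', True)                     # raise above other windows
top.after_idle(top.attributes, '-topmost', False)    # ...but do not stay pinned
top.after_idle(top.focus_force)                      # ask for keyboard focus

root.mainloop()
````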
What was a uniting issue in the party? | null |
"object of type 'NoneType' has no len()" when using pusher with Django I have implemented a very basic and by-the-tutorial Pusher triggers in my django app At first it worked great but after a few commits I ran into the following exception: ````"object of type 'NoneType' has no len()" ```` Out of the second of the following lines: ````p = pusher Pusher() p[page_key] trigger('page_update' {'msgid' : message id}) ```` Different or an empty dict produced same result same as changing page_key to string instead of unicode - nothing Also note that type() of p and p trigger gives logical results and they are definitely not None This line WORKS IN SOME CASES (which I have no clue what so special about them) and has worked in the past as I have mentioned yet I cannot figure out what am I doing wrong It seems that none of the last commits have anything to do with Pusher so I am helpless Searching the web for this exception was not fruitful at all and in general there is not enough documentation regarding to django Pusher It is probably something else that I am doing wrong but I have no clue where to start looking Any help will be most appreciated ````Traceback: 79 p[page_key] trigger('page_update' {'msgid' : message id}) File "/usr/local/lib/python2 7/dist-packages/pusher/__init__ py" in trigger 41 status = self send_request(self signed_query(event json_data socket_id) json_data) File "/usr/local/lib/python2 7/dist-packages/pusher/__init__ py" in signed_query 54 signature = hmac new(self pusher secret string_to_sign hashlib sha256) hexdigest() File "/usr/lib/python2 7/hmac py" in new 133 return HMAC(key message digestmod) File "/usr/lib/python2 7/hmac py" in __init__ 68 if len(key) > blocksize: Exception Type: TypeError at /sphere/comment Exception Value: object of type 'NoneType' has no len() ```` | self pusher secret is None there instead of a string Are your settings for pusher not getting picked up? |
Error in converting a user input into a float number
````
magnitude = input("Enter the value of magnitude: ")

for mag in magnitude:
    if float(mag) <= 1.9:
        print(mag1)
    elif float(mag) <= 2.9:
        print(mag2)
    elif float(mag) <= 3.9:
        print(mag3)
````
If I do any input with a decimal I get this error:
````
Traceback (most recent call last):
  line 36, in effects
    if float(mag) <= 1.9:
ValueError: could not convert string to float: '.'
````
I have tried converting the original input statement to a float before my loop, but it just gives me a different error | The function `input` always gives you a string. In this case you are then iterating over that string character by character, so when the loop reaches the decimal point it tries `float('.')`, which fails. If you change this value to a float before the loop, you are going to get a similar problem, because a float cannot be iterated over at all.
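A sketch of the usual fix: convert the whole input once instead of looping over its characters (the `mag1`/`mag2`/`mag3` names are kept from the question and assumed to be defined elsewhere):
````
magnitude = float(input("Enter the value of magnitude: "))

if magnitude <= 1.9:
    print(mag1)
elif magnitude <= 2.9:
    print(mag2)
elif magnitude <= 3.9:
    print(mag3)
````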
Convert scientific notation to decimals I have numbers in a file (so as strings) in scientific notation, like:
````
8.99284722486562e-02
````
but I want to convert them to:
````
0.08992847
````
Is there any built-in function or any other way to do it? | I am pretty sure you can do this with:
````
float("8.99284722486562e-02")

# and now with 'rounding'
"{:.8f}".format(float("8.99284722486562e-02"))
````
How to make a model having two foreign key relations with different models I have made an app in which I have made three models as below:
````
from django.db import models

class Class_Mgmt(models.Model):
    class_name = models.CharField(max_length=8, null=False, blank=False)

    def __str__(self):
        return self.class_name

class Section_Mgmt(models.Model):
    stu_class = models.ForeignKey(Class_Mgmt)
    section_name = models.CharField(max_length=1, null=False, blank=False)

    def __str__(self):
        return self.section_name

class Teacher_Mgmt(models.Model):
    teacher_name = models.CharField(max_length=50, null=False, blank=False)
    tea_class = models.ForeignKey(Class_Mgmt)
    tea_section = models.ForeignKey(Section_Mgmt)

    def __str__(self):
        return self.teacher_name
````
Here `Section_Mgmt` has a foreign key relation with `Class_Mgmt`, which means that when I run the project and add a new `section_name`, the `class_name` is selected from a drop-down list of all the existing classes. It is working well in the project. But in the `Teacher_Mgmt` model I want this behaviour: when I enter a new teacher in my form and select an existing class from the dropdown, it should only show the sections available in the selected class, because the `Section_Mgmt` model also has the foreign key relation with the `Class_Mgmt` model. At present, when I run the project, enter a new teacher and select a class from the dropdown of all existing classes, it shows all the sections instead of only those sections available in that class | Django does not have this functionality built in, but you can use <a href="https://github.com/digi604/django-smart-selects" rel="nofollow">django-smart-selects</a>. Just install this library and change Teacher_Mgmt to the following code (note that `chained_model_field` must name the foreign key on `Section_Mgmt`, i.e. `stu_class`):
````
from smart_selects.db_fields import ChainedForeignKey

class Teacher_Mgmt(models.Model):
    teacher_name = models.CharField(max_length=50, null=False, blank=False)
    tea_class = models.ForeignKey(Class_Mgmt)
    tea_section = ChainedForeignKey(
        Section_Mgmt,
        chained_field="tea_class",
        chained_model_field="stu_class",
        show_all=False,
        auto_choose=True
    )

    def __str__(self):
        return self.teacher_name
````
Along with the surface of the Earth, what objects absorb radiation from the sun? | clouds |
What was the name of the law passed by Arnold Schwarzenegger? | "Donda West Law" |
Add python shell to program I am trying to make a game in Python using pygame. It should be a learning game, so I want to ask whether it is possible to run a Python shell in the game window, or do I have to program my own parser? | `subprocess.Popen('python')` will open a Python interpreter in a new window, similar to typing 'python' on a command line. But I suspect that this is not what you meant by 'in a game window'. `cmd.Cmd` creates "A simple framework for writing line-oriented command interpreters" (that use verb-object syntax). It could be used for writing a text adventure game ('go east', 'open box', 'look room', etc.). `code.InteractiveInterpreter` is specifically for Python interpreters. It uses `compile` and `exec`. IDLE's shell is based on a subclass thereof, defined in idlelib/PyShell.py; it simulates the Python interactive interpreter in a tkinter window. There is no need to re-write the parser included in `compile`; one can tell compile to stop with the abstract syntax tree that is the output of the parser
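A minimal sketch of the `code` module route: an `InteractiveConsole` handles parsing, multi-line statements and error reporting for you, so the game only has to feed it lines. Here the lines come from `input()` for brevity; wiring them to a pygame text box instead is the part left to the game code, and the `score` variable is just an example of exposing game state to the player:
````
import code

console = code.InteractiveConsole(locals={"score": 0})

more = False
while True:
    try:
        line = input("... " if more else ">>> ")   # raw_input on Python 2
    except EOFError:
        break
    # push() returns True while the statement is incomplete (e.g. an open for-loop)
    more = console.push(line)
````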
In Python access variable defined in main from another module I am writing a Python program in which the main script initializes several variables with common data from the command line, a config file, etc. I want the rest of the program to be able to access them as globals:
````
def abc():
    global xyz
    _someOptionValue = xyz['someOptionName']
````
It does not work. I think that is because the global is defined in `main.py` (for example) and the function that references it is defined in `abc.py` (for example). Consequently, the function's global namespace is different from the main script's namespace. I think I should be able to get at the global variable by qualifying its name with the name of the main script's namespace, but what is that namespace's name? <em><strong>Context:</strong> I currently pass these variables as parameters, and I know that many programmers consider that a cleaner solution. In this case I do not, because the variables are set once and then read throughout the whole program. In my judgement this is the strongest possible case (and the only justified case) for using global variables. If you think I should not do it, I respect your opinion, but I do not share it; please respect mine!</em> | Create a `globals` module that contains all the shared information, and have every module (including main) import that module
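A sketch of that shared-module pattern; the module name `cfg` and the option name are illustrative:
````
# cfg.py -- one place for the set-once, read-everywhere values
xyz = {}
````
````
# main.py
import cfg
cfg.xyz['someOptionName'] = 'value parsed from the command line'
````
````
# abc.py
import cfg

def abc():
    _someOptionValue = cfg.xyz['someOptionName']
    return _someOptionValue
````
Importing `cfg` everywhere gives each module the same module object, so the values written by `main.py` are the ones read by `abc.py`.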
Change resolution of imshow in ipython I am using ipython with code that looks like this:
````
image = zeros((MAX_X, MAX_Y))

# do something complicated to get the pixel values
# pixel values are now in [0, 1]

imshow(image)
````
However, the resulting image always has the same resolution, around (250x250). I thought that the image's dimensions would be (MAX_X x MAX_Y), but that is seemingly not the case. How can I make ipython give me an image with greater resolution? | The height and width of the displayed image on the screen are controlled by the <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.figure" rel="nofollow">figure</a> size and the <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.axes" rel="nofollow">axes</a> size
````
figure(figsize=(10, 10))  # creates a figure 10 inches by 10 inches
````
Axes
````
axes([0, 0, 0.7, 0.6])  # add an axes with the position and size specified by
                        # [left, bottom, width, height] in normalized units
````
Larger arrays of data will be displayed at the same size as smaller arrays, but the number of individual elements will be greater, so in that sense they do have higher resolution. The resolution in dots per inch of a saved figure can be controlled with the dpi argument to <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.savefig" rel="nofollow">savefig</a>. Here is an example that might make it clearer:
````
import matplotlib.pyplot as plt
import numpy as np

fig1 = plt.figure()  # create a figure with the default size

im1 = np.random.rand(5, 5)
ax1 = fig1.add_subplot(2, 2, 1)
ax1.imshow(im1, interpolation='none')
ax1.set_title('5 X 5')

im2 = np.random.rand(100, 100)
ax2 = fig1.add_subplot(2, 2, 2)
ax2.imshow(im2, interpolation='none')
ax2.set_title('100 X 100')

fig1.savefig('example.png', dpi=1000)  # change the resolution of the saved image
````
<img src="http://i.stack.imgur.com/Yk92Z.png" alt="images of different sized arrays">
````
# change the figure size
fig2 = plt.figure(figsize=(5, 5))  # create a 5 x 5 figure
ax3 = fig2.add_subplot(111)
ax3.imshow(im1, interpolation='none')
ax3.set_title('larger figure')
plt.show()
````
<img src="http://i.stack.imgur.com/317q3.png" alt="Larger sized figure"> The size of the axes within a figure can be controlled in several ways. I used <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.subplot" rel="nofollow">subplot</a> above. You can also directly add an axes with <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.axes" rel="nofollow">axes</a> or with <a href="http://matplotlib.org/users/gridspec.html" rel="nofollow">gridspec</a>
opening a file without an extension python I would like to open a file called "summary" and read information out of it to write into an output file; however, I cannot open "summary". I have tried to verify that the path exists - which works:
````
import os.path
print os.path.exists('/Users/alli/Documents/Summer2016/sfit4_trial/summary')
````
This prints out True. However, when I try to do
````
import os
import glob

path = '/Users/alli/Documents/Summer2016/sfit4_trial/summary'
for infile in glob.glob(os.path.join(path, '*')):
    file = open(infile, 'r').read()
    print file
````
nothing happens. I have looked through similar questions on SO and tried them all, but I am not having any luck. All suggestions welcome. Thanks | Have you tried?
````
path = '/Users/alli/Documents/Summer2016/sfit4_trial'
for infile in glob.glob(os.path.join(path, 'summary*')):
````
disable writing of character on qtextedit by overriding keypress event in pyqt4 python Is it possible to disable writing of characters on a QTextEdit by overriding the key press event in PyQt4 (Python)? | Derive from `QTextEdit` and override the corresponding event handler methods:
````
class MyTextEdit(QTextEdit):
    def keyPressEvent(self, event):
        event.ignore()
````
executing an R script from python I have an R script that makes a couple of plots. I would like to be able to execute this script from Python. I first tried:
````
import subprocess
subprocess.call(".../plottingfile.R", shell=True)
````
This gives me the following error:
````
/bin/sh: .../plottingfile.R: Permission denied
126
````
I do not know what the number 126 means. All my files are on the Desktop and thus I do not think that any special permissions would be needed. I thought that this error may have had something to do with cwd=None, but I changed this and I still had an error. Next I tried:
````
subprocess.Popen(["R --vanilla --args <.../plottingfile.R>"], shell=True)
````
But this too gave me an error:
````
/bin/sh: Syntax error: end of file unexpected
````
Most recently I tried:
````
subprocess.Popen("konsole | .../plottingfile.R", shell=True)
````
This opened a new konsole window, but no R script was run. Also I received the following error:
````
/bin/sh: .../plottingfile.R: Permission denied
````
Thanks | Have you tried `chmod u+x /pathTo/Rscript.R`?
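If fixing the execute bit is not enough (or not wanted), a common alternative is to invoke the R interpreter explicitly so the script itself does not need to be executable; a sketch, with the path being a placeholder:
````
import subprocess

# Run the script through Rscript...
subprocess.call(["Rscript", "--vanilla", "/path/to/plottingfile.R"])

# ...or through R CMD BATCH, which also writes a plottingfile.Rout log file
subprocess.call(["R", "CMD", "BATCH", "--vanilla", "/path/to/plottingfile.R"])
````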
Python 2.7 problems with sub-classing dict and mirroring to json So I am trying to create a class that behaves like a dict but also copies itself to a json file whenever a change is made to the dict. I have got it working for the most part, but where I have trouble is when I append something to a list inside the dict; it updates the dict but not the json file associated with the dict. I am sorry for the lengthy code block; I tried to condense as much as possible but it still turned out fairly lengthy.
````
import json
import os.path


class JDict(dict):
    def __init__(self, filepath, *args, **kwargs):
        if str(filepath).split('.')[-1] == 'json':
            self.filepath = str(filepath)
        else:
            self.filepath = str('{}.json'.format(filepath))
        if os.path.isfile(self.filepath):
            super(JDict, self).__init__(self.read())
        else:
            super(JDict, self).__init__(*args, **kwargs)
            self.write()

    def __setitem__(self, key, value):
        dict.__setitem__(self, key, value)
        self.write()

    def write(self):
        with open(self.filepath, 'w') as outfile:
            json.dump(self, outfile, sort_keys=True, indent=4, ensure_ascii=False)

    def read(self):
        with open(self.filepath, 'r') as infile:
            jsonData = json.load(infile)
        self = jsonData
        return self


def parseJson(filepath):
    with open(filepath, 'r') as infile:
        jsonData = json.load(infile)
    return jsonData


test = JDict("test.json", {
    "TestList": [
        "element1"
    ]
})
test["TestList"].append("element2")

try:
    if test["TestList"][1] == parseJson("test.json")["TestList"][1]:
        print 'Success'
except IndexError:
    print 'Failure'
````
| So the reason I was having trouble was that `__setitem__` is not called on the dict (or even on a list, for that matter) when you append an element to a member list. Soooo, if anyone else is having this issue: I ended up sub-classing both the list and dict datatypes and making them sort of helper classes to a new class called QJson; all lists and dicts under the helper class QJson get converted to JDicts and JLists respectively, and QJson is itself a dict. The code is REALLY long and monolithic at this point, so here is a link to my <a href="https://github.com/deeredman1991/QJson/blob/master/QJson.py" rel="nofollow">github</a>, enjoy :)
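A minimal sketch of the helper-class idea described above (not the code from the linked repository): a list subclass that keeps a reference to its owning JDict and triggers a write after the mutations it knows about. A fuller version would wrap `extend`, `insert`, `__setitem__` and so on in the same way:
````
import json


class JList(list):
    """List that tells its owner to rewrite the file after an append."""

    def __init__(self, owner, *args):
        super(JList, self).__init__(*args)
        self._owner = owner

    def append(self, value):
        super(JList, self).append(value)
        self._owner.write()


class JDict(dict):
    def __init__(self, filepath, *args, **kwargs):
        self.filepath = filepath
        super(JDict, self).__init__(*args, **kwargs)
        # Wrap list values so mutations on them also hit the file
        for key, value in self.items():
            if isinstance(value, list):
                dict.__setitem__(self, key, JList(self, value))
        self.write()

    def __setitem__(self, key, value):
        if isinstance(value, list):
            value = JList(self, value)
        dict.__setitem__(self, key, value)
        self.write()

    def write(self):
        with open(self.filepath, 'w') as outfile:
            json.dump(self, outfile, sort_keys=True, indent=4)
````
With this, creating `JDict("test.json", {"TestList": ["element1"]})` and then appending to its "TestList" entry rewrites test.json, so the comparison in the question's test prints Success.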
What is the fastest way to convert from a unixtime to a numpy datetime64? I suppose that the key here is to have the least number of intermediate conversions, but I am not able to find a simple way in the new Numpy 2.0 dev | Actually, `numpy.datetime64` objects are basically unix times internally (with 6 extra significant digits to account for microsecond precision). You just need to multiply by `1e6`. As an example:
````
import numpy as np

# Generate a few unix time stamps near today
x = np.arange(1326706251, 1326706260)

# Convert to datetimes
x *= 1e6
x = x.view(np.datetime64)

print x
````
This yields:
````
[2012-01-16 09:30:51 2012-01-16 09:30:52 2012-01-16 09:30:53
 2012-01-16 09:30:54 2012-01-16 09:30:55 2012-01-16 09:30:56
 2012-01-16 09:30:57 2012-01-16 09:30:58 2012-01-16 09:30:59]
````
Raspberry Pi Python Program First Line Error/Python Library Error The first two lines of the program are:
````
from i2clibraries import i2c_lcd
from ABElectronics_ADCPi import ADCPi
````
No matter which line is first, the Pi returns an error when I attempt to run it under Python or Python 3. All the libraries are installed and registered. Using the shell commands, the checks saying the exports worked correctly all show up correctly. However, whatever line is line 1 will return a missing module error, and the i2clibraries will always return a missing module error. By keeping that as the first line I get the fewest errors when running, but the program still returns this:
````
pi@raspberrypi ~ $ sudo python file.py
Traceback (most recent call last):
  File "file.py", line 1, in <module>
    from i2clibraries import i2c_lcd
  File "/home/pi/i2clibraries/i2c_lcd.py", line 1, in <module>
    from i2clibraries import i2c
  File "/home/pi/i2clibraries/i2c.py", line 1, in <module>
    from quick2wire.i2c import I2CMaster, writing_bytes, reading
ImportError: No module named quick2wire.i2c
````
Given the error, what possible solutions are there to stop the first line from being unable to find its module? | <h3>Problem</h3> The error message is telling you that when you try to import the `i2clibraries` module, the imports that it requires (its dependencies) cannot be found while it is itself being imported. This is specifically in the first line of the `i2c.py` file, where the line
````
from quick2wire.i2c import I2CMaster, writing_bytes, reading
````
is failing. The problem is almost certainly that your modules are not on the <a href="https://docs.python.org/2/tutorial/modules.html#the-module-search-path" rel="nofollow">Python module search path</a>. Further info on this is given at the end of this answer should you need it. <h3>Solution</h3> There are a number of ways to resolve this. The one <a href="https://github.com/quick2wire/quick2wire-python-api#installation" rel="nofollow">recommended by the developers of the module</a> is <blockquote> To use the library without installation, add the full path of the source tree to the PYTHONPATH environment variable. For example:
````
export QUICK2WIRE_API_HOME=[the directory cloned from Git or unpacked from the source archive]
export PYTHONPATH=$PYTHONPATH:$QUICK2WIRE_API_HOME
````
</blockquote> So you need to know where your `quick2wire` libraries are installed - from your error message I would hazard a guess that they are in `/home/pi/i2clibraries/`, so `QUICK2WIRE_API_HOME=/home/pi/i2clibraries/` should be the first line of the above pair. <h3>Further info</h3> You can read more generally about how to install modules on Python 2.x <a href="https://docs.python.org/2/install/" rel="nofollow">on the Python website</a>. You can look at which paths make up the module search path by going to an interactive Python prompt (i.e. typing `python`) and then doing
````
>>> import sys
>>> sys.path
````
This will output a list containing strings representing all the paths that will be searched for modules
How to effectively (in place) multiply two views of memmapped numpy arrays of different sizes Imagine I have
````
a = np.memmap(...)
b = np.memmap(...)
````
I would like to get the element-wise result, with `a` updated:
````
a = a[0:size1:2] * b[1:size1:3]
````
| Assuming `a[0:size1:2]` and `b[1:size1:3]` have the same shape (or are at least broadcastable), you can use the fact that slices of numpy arrays share memory:
````
temp_a = a[0:size1:2]
temp_a *= b[1:size1:3]
````
Position in Python Lists I am trying to write code for an operation between each element and all the other elements of a list, but not with itself. Below is the code
````
list = [10, 20, 30, 30, 40, 50, 50, 50, 60, 70];
for i in list:
    sum=i;
    for j in list:
        if list.index(i) != list.index(j):
            s=(50-((j-i)/2))*0.13;
            sum+=s;
    print("score of %d is %f"%(i, sum));
````
But still the code is not working: it is not satisfying the `if` condition on positions | You can use `enumerate` to get the index. This may do what you are up to:
````
lis = [10, 20, 30, 30, 40, 50, 50, 50, 60, 70]
for idx1, el1 in enumerate(lis):
    sum_ = el1
    for idx2, el2 in enumerate(lis):
        if idx1 != idx2:
            sum_ += (50 - (el2 - el1)/2) * 0.13
    print("score of %d is %f" % (idx1, sum_))
````
Execute python script from Assembly I am planning on making an OS from scratch using <strong>Python</strong>. However, I only know how to make it by writing in <strong>Assembly</strong>. <hr> Is it possible for me to still <strong>write the kernel in Assembly</strong>, convert it to a binary, and during boot execute the Python script? <em>I hope this made sense</em> | I think you would be interested in this project: <a href="https://github.com/Maratyszcza/PeachPy" rel="nofollow">https://github.com/Maratyszcza/PeachPy</a> A comment from LtU: <blockquote> PeachPy is a Python framework for writing high-performance assembly kernels. PeachPy aims to simplify writing optimized assembly kernels while preserving all optimization opportunities of traditional assembly. You can use the same code to generate assembly for Windows, Unix and Golang assembly. The library handles the various ABIs automatically. I have not seen this cool project before. Among the cool features is the ability to invoke the generated assembly as regular Python functions. Nice </blockquote>
Python - Using the csv and xlrd modules to write a multi-row Excel file to a one-row CSV file I have a basic script that takes a source Excel (.xlsx) file and writes the data to a matching CSV file in Python. My ultimate goal is to take all of the data in a sheet and write it as one long comma-separated row, and I am not sure how to accomplish that based on what I have so far.
````
def csv_from_excel():
    import csv
    import xlrd
    wb1 = raw_input('What is the path and file name of your Workbook? ')
    sh = raw_input('What is the name of the sheet being transformed? ')
    csv_file1 = raw_input('What is the file path and name of the output? ')
    print wb1
    print sh
    print csv_file1
    wb = xlrd.open_workbook(wb1)
    sh1 = wb.sheet_by_name(sh)
    csv_file = open(csv_file1, 'wb')
    wr = csv.writer(csv_file, quoting=csv.QUOTE_MINIMAL)
    for rownum in xrange(sh1.nrows):
        wr.writerow(sh1.row_values(rownum))
    csv_file.close()
    print "Completed converting %s and %s to csv" % (wb1, sh)

csv_from_excel()
````
| If I understand correctly, you want to take a multi-row XLS and output it as a single-row CSV. If so, this is what is causing you to output multiple CSV rows:
````
for rownum in xrange(sh1.nrows):
    wr.writerow(sh1.row_values(rownum))
````
That code steps through each row in your XLS and creates a corresponding row in your CSV. Since you only want one CSV row, you should accumulate the XLS rows into a single collection before writing it all out in a single step:
````
output = list()
for rownum in xrange(sh1.nrows):
    output.extend(sh1.row_values(rownum))
wr.writerow(output)
````
efficient collection lookup and index creation in mongodb I have a very high volume of data coming in to my database, up to 1,000 documents per second. When I receive each one, I am trying to efficiently check whether there is already a collection for that type of document (based on a field in the document) and, if not, to create a few indexes on that collection. Basically I am wondering how long it takes to create an index on an empty collection, and whether there is any faster way to check if a collection with a specified name exists.
````
def insert_new_doc(json_doc):
    collection_name = json_doc["collection_name"]
    coll = tlm_db_connection[collection_name]
    # create indexes in background if the collection does not exist
    if tlm_db_connection.system.namespaces.find({ name : collection_name }) == None:
        coll.ensure_index([('time_stamp', pymongo.DESCENDING)], background=True)
        coll.ensure_index([('raw_value', pymongo.DESCENDING)], background=True)
        coll.ensure_index([('time_stamp', pymongo.DESCENDING), ('raw_value', pymongo.DESCENDING)], background=True)
    coll.insert(json_doc)
````
So that is the function I have. If I have ensure_index set to background=True, any idea how long it blocks the thread calling that function? | Creating a new index on an empty collection is quite fast; you can measure it by running the following test
````
function howLong(){
    var t0 = new Date().valueOf();
    db.myCollection.ensureIndex({name: 1});
    var t1 = new Date().valueOf();
    return t1 - t0;
}
````
ensureIndex will block until the index is created. On my old laptop it says 0 :) The same technique might be used to get an approximate "background" creation time of an index in the mongo shell <blockquote> Background indexing operations run in the background so that other database operations can run while creating the index. However, the <strong>mongo shell session or connection</strong> where you are creating the index <strong>will block</strong> until the index build is complete </blockquote> <a href="http://docs.mongodb.org/manual/core/index-creation/#behavior" rel="nofollow">http://docs.mongodb.org/manual/core/index-creation/#behavior</a> If you call ensureIndex early enough it will be quick, i.e. indexing 100,000 items on my machine (indexing by name on a users collection) takes approx. 350ms. Subsequent calls to ensureIndex (after the index has been created) will exit straightaway (with an appropriate message), but I would not do it if I could avoid it (i.e. if the database is controlled by me and not shared with others); I would use a dedicated thread for index creation. Since your collection will grow quite fast and you will create an index, make sure it fits into RAM (<a href="http://docs.mongodb.org/manual/tutorial/ensure-indexes-fit-ram/" rel="nofollow">see here</a>), so it might be worth pre-aggregating the data while inserting. Regarding checking the existence of a collection: assuming your application is the only one writing to the db, you can list all the collections at start-up and keep this information in memory. There is an interesting project from 10gen Labs that seems to address similar problems (Java code though); might be worth having a look: <a href="https://github.com/10gen-labs/hvdf" rel="nofollow">High Volume Data Feed</a>
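A sketch of the "list the collections once at start-up" idea with the pymongo API of that era (`collection_names()`, `ensure_index()`, `insert()`; newer drivers spell these `list_collection_names()`, `create_index()` and `insert_one()`), assuming this process is the only writer and that the connection and database names below are placeholders:
````
import pymongo

client = pymongo.MongoClient()            # connection details assumed
db = client.tlm_db                        # database name assumed

known_collections = set(db.collection_names())   # one round trip at start-up

def insert_new_doc(json_doc):
    name = json_doc["collection_name"]
    coll = db[name]
    if name not in known_collections:
        coll.ensure_index([('time_stamp', pymongo.DESCENDING)], background=True)
        coll.ensure_index([('raw_value', pymongo.DESCENDING)], background=True)
        coll.ensure_index([('time_stamp', pymongo.DESCENDING),
                           ('raw_value', pymongo.DESCENDING)], background=True)
        known_collections.add(name)
    coll.insert(json_doc)
````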
Python-Wikipedia Automated Downloader [Using Python 3.1] Does anyone have any idea how to make a Python 3 application that allows the user to write a text file with multiple words separated by commas? The program should read the file and download the Wikipedia page of each requested item, e.g. if they typed hello, python-3, chicken it would go to Wikipedia and download <a href="http://www.wikipedia.com/wiki/hello" rel="nofollow">http://www.wikipedia.com/wiki/hello</a>, <a href="http://www.wikip" rel="nofollow">http://www.wikip</a>... Anyone think they can do this? When I say "download" I mean download the text; it does not matter about images | You described exactly how to make such a program. So what is the question? You read the file, split on commas, and download each URL. Done!
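A sketch of exactly that for Python 3; the words file name is invented, article URLs are assumed to live under en.wikipedia.org/wiki/, and Wikipedia may insist on a descriptive User-Agent for automated requests:
````
import urllib.parse
import urllib.request

# words.txt contains e.g.: hello,python-3,chicken
with open("words.txt", encoding="utf-8") as f:
    words = [w.strip() for w in f.read().split(",") if w.strip()]

for word in words:
    url = "https://en.wikipedia.org/wiki/" + urllib.parse.quote(word)
    req = urllib.request.Request(url, headers={"User-Agent": "wiki-downloader-example"})
    response = urllib.request.urlopen(req)
    text = response.read().decode("utf-8")
    response.close()
    with open(word + ".html", "w", encoding="utf-8") as out:
        out.write(text)
````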
Celery error : result get times out I have installed Celery and I am trying to test it with the <a href="http://docs celeryproject org/en/latest/getting-started/first-steps-with-celery html" rel="nofollow">Celery First Steps Doc</a> I tried using both Redis and RabbitMQ as brokers and backends but I cannot get the result with : ````result get(timeout = 10) ```` Each time I get this error : ```` Traceback (most recent call last): File "<input>" line 11 in <module> File "/home/mehdi/ virtualenvs/python3/lib/python3 4/site-packages/celery/result py" line 169 in get no_ack=no_ack File "/home/mehdi/ virtualenvs/python3/lib/python3 4/site-packages/celery/backends/base py" line 225 in wait_for raise TimeoutError('The operation timed out ') celery exceptions TimeoutError: The operation timed out ```` The broker part seems to work just fine : when I run this code ````from celery import Celery app = Celery('tasks' backend='redis://localhost/' broker='amqp://') @app task def add(x y): return x y result = add delay(4 4) ```` I get (as expected) <blockquote> [2015-08-04 12:05:44 910: INFO/MainProcess] Received task: tasks add[741160b8-cb7b-4e63-93c3-f5e43f8f8a02] [2015-08-04 12:05:44 911: INFO/MainProcess] Task tasks add[741160b8-cb7b-4e63-93c3-f5e43f8f8a02] succeeded in 0 0004287530000510742s: 8 </blockquote> P S : I am using Xubuntu 64bit EDIT : My app conf ````{'CELERY_RESULT_DB_TABLENAMES': None 'BROKER_TRANSPORT_OPTIONS': {} 'BROKER_USE_SSL': False 'CELERY_BROADCAST_QUEUE': 'celeryctl' 'EMAIL_USE_TLS': False 'CELERY_STORE_ERRORS_EVEN_IF_IGNORED': False 'CELERY_CREATE_MISSING_QUEUES': True 'CELERY_DEFAULT_QUEUE': 'celery' 'CELERY_SEND_TASK_SENT_EVENT': False 'CELERYD_TASK_TIME_LIMIT': None 'BROKER_URL': 'amqp://' 'CELERY_EVENT_QUEUE_EXPIRES': None 'CELERY_DEFAULT_EXCHANGE_TYPE': 'direct' 'CELERYBEAT_SCHEDULER': 'celery beat:PersistentScheduler' 'CELERY_MAX_CACHED_RESULTS': 100 'CELERY_RESULT_PERSISTENT': None 'CELERYD_POOL': 'prefork' 'CELERYD_AGENT': None 'EMAIL_HOST': 'localhost' 'CELERY_CACHE_BACKEND_OPTIONS': {} 'BROKER_HEARTBEAT': None 'CELERY_RESULT_ENGINE_OPTIONS': None 'CELERY_RESULT_SERIALIZER': 'pickle' 'CELERYBEAT_SCHEDULE_FILENAME': 'celerybeat-schedule' 'CELERY_REDIRECT_STDOUTS_LEVEL': 'WARNING' 'CELERY_IMPORTS': () 'SERVER_EMAIL': 'celery@localhost' 'CELERYD_TASK_LOG_FORMAT': '[%(asctime)s: %(levelname)s/%(processName)s] %(task_name)s[%(task_id)s]: %(message)s' 'CELERY_SECURITY_CERTIFICATE': None 'CELERYD_LOG_COLOR': None 'CELERY_RESULT_EXCHANGE': 'celeryresults' 'CELERY_TRACK_STARTED': False 'CELERY_REDIS_PASSWORD': None 'BROKER_USER': None 'CELERY_COUCHBASE_BACKEND_SETTINGS': None 'CELERY_RESULT_EXCHANGE_TYPE': 'direct' 'CELERY_REDIS_DB': None 'CELERYD_TIMER_PRECISION': 1 0 'CELERY_REDIS_PORT': None 'BROKER_TRANSPORT': None 'CELERYMON_LOG_FILE': None 'CELERYD_CONCURRENCY': 0 'CELERYD_HIJACK_ROOT_LOGGER': True 'BROKER_VHOST': None 'CELERY_DEFAULT_EXCHANGE': 'celery' 'CELERY_DEFAULT_ROUTING_KEY': 'celery' 'CELERY_ALWAYS_EAGER': False 'EMAIL_TIMEOUT': 2 'CELERYD_TASK_SOFT_TIME_LIMIT': None 'CELERY_WORKER_DIRECT': False 'CELERY_REDIS_HOST': None 'CELERY_QUEUE_HA_POLICY': None 'BROKER_PORT': None 'CELERYD_AUTORELOADER': 'celery worker autoreload:Autoreloader' 'BROKER_CONNECTION_TIMEOUT': 4 'CELERY_ENABLE_REMOTE_CONTROL': True 'CELERY_RESULT_DB_SHORT_LIVED_SESSIONS': False 'CELERY_EVENT_SERIALIZER': 'json' 'CASSANDRA_DETAILED_MODE': False 'CELERY_REDIS_MAX_CONNECTIONS': None 'CELERY_CACHE_BACKEND': None 'CELERYD_PREFETCH_MULTIPLIER': 4 'BROKER_PASSWORD': None 
'CELERY_BROADCAST_EXCHANGE_TYPE': 'fanout' 'CELERY_EAGER_PROPAGATES_EXCEPTIONS': False 'CELERY_IGNORE_RESULT': False 'CASSANDRA_KEYSPACE': None 'EMAIL_HOST_PASSWORD': None 'CELERYMON_LOG_LEVEL': 'INFO' 'CELERY_DISABLE_RATE_LIMITS': False 'CELERY_TASK_PUBLISH_RETRY_POLICY': {'interval_start': 0 'interval_max': 1 'max_retries': 3 'interval_step': 0 2} 'CELERY_SECURITY_KEY': None 'CELERY_MONGODB_BACKEND_SETTINGS': None 'CELERY_DEFAULT_RATE_LIMIT': None 'CELERYBEAT_SYNC_EVERY': 0 'CELERY_EVENT_QUEUE_TTL': None 'CELERYD_POOL_PUTLOCKS': True 'CELERY_TASK_SERIALIZER': 'pickle' 'CELERYD_WORKER_LOST_WAIT': 10 0 'CASSANDRA_SERVERS': None 'CELERYD_POOL_RESTARTS': False 'CELERY_TASK_PUBLISH_RETRY': True 'CELERY_ENABLE_UTC': True 'CELERY_SEND_EVENTS': False 'BROKER_CONNECTION_MAX_RETRIES': 100 'CELERYD_LOG_FILE': None 'CELERYD_FORCE_EXECV': False 'CELERY_CHORD_PROPAGATES': True 'CELERYD_AUTOSCALER': 'celery worker autoscale:Autoscaler' 'CELERYD_STATE_DB': None 'CELERY_ROUTES': None 'CELERYD_TIMER': None 'ADMINS': () 'BROKER_HEARTBEAT_CHECKRATE': 3 0 'CELERY_ACCEPT_CONTENT': ['json' 'pickle' 'msgpack' 'yaml'] 'BROKER_LOGIN_METHOD': None 'BROKER_CONNECTION_RETRY': True 'CELERY_TIMEZONE': None 'CASSANDRA_WRITE_CONSISTENCY': None 'CELERYBEAT_MAX_LOOP_INTERVAL': 0 'CELERYD_LOG_LEVEL': 'WARN' 'CELERY_REDIRECT_STDOUTS': True 'BROKER_POOL_LIMIT': 10 'CELERY_SECURITY_CERT_STORE': None 'CELERYD_CONSUMER': 'celery worker consumer:Consumer' 'CELERY_INCLUDE': () 'CELERYD_MAX_TASKS_PER_CHILD': None 'CELERYD_LOG_FORMAT': '[%(asctime)s: %(levelname)s/%(processName)s] %(message)s' 'CELERY_ANNOTATIONS': None 'CELERY_MESSAGE_COMPRESSION': None 'CASSANDRA_READ_CONSISTENCY': None 'EMAIL_USE_SSL': False 'CELERY_SEND_TASK_ERROR_EMAILS': False 'CELERY_QUEUES': None 'CELERY_ACKS_LATE': False 'CELERYMON_LOG_FORMAT': '[%(asctime)s: %(levelname)s] %(message)s' 'CELERY_TASK_RESULT_EXPIRES': datetime timedelta(1) 'BROKER_HOST': None 'EMAIL_PORT': 25 'BROKER_FAILOVER_STRATEGY': None 'CELERY_RESULT_BACKEND': 'rpc://' 'CELERY_BROADCAST_EXCHANGE': 'celeryctl' 'CELERYBEAT_LOG_FILE': None 'CELERYBEAT_SCHEDULE': {} 'CELERY_RESULT_DBURI': None 'CELERY_DEFAULT_DELIVERY_MODE': 2 'CELERYBEAT_LOG_LEVEL': 'INFO' 'CASSANDRA_COLUMN_FAMILY': None 'EMAIL_HOST_USER': None} ```` | Using `app backend get_result(result id)` to instead of `AsyncResult get()` since `AsyncResult get()` will block until the task status become ready however the task has already run completed |
What is NOT protected? | The idea itself |
scipy sparse csc_matrix format for mlpy I was wondering if there is any way to use a `scipy.sparse.csc_matrix` with `mlpy` in Python. I have worked with mlpy before and have always dealt with non-sparse matrices. For instance, if I have 5 features and 1 label (0 or 1) for each row, I would have something like this:
````
2 3 4 5 6 0
1 2 3 4 5 1
````
Now for my next project I have a huge number of features, around 20,000, so creating a sparse matrix in this case would be much easier. I looked at the mlpy documentation for k-means clustering (since all I have to do now is cluster data) and it <a href="http://mlpy.sourceforge.net/docs/3.5/cluster.html#mlpy.kmeans" rel="nofollow">says</a>:
````
Parameters :
    x : 2d array_like object (N, P)
        data
    k : int (1 < k < N)
        number of clusters
    plus : bool
        k-means++ algorithm for initialization
    seed : int
        random seed for initialization

Returns :
    clusters, means, steps : 1d array, 2d array, int
        cluster membership in 0, ..., K-1, means (K, P), number of steps
````
I think by this they mean that mlpy accepts only non-sparse matrices. If I am reading something wrong, please let me know. Any help would be highly appreciated. Thanks! | I think that the answer is simply that the kmeans in mlpy does not work with sparse inputs. It is non-trivial to code an algorithm to work on sparse inputs. The <a href="http://scikit-learn.org/stable/modules/clustering.html#mini-batch-k-means" rel="nofollow">MiniBatchKMeans</a> of <a href="http://scikit-learn.org/stable/index.html" rel="nofollow">scikit-learn</a> works on sparse input (disclaimer: I am a scikit-learn developer)
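A sketch of the scikit-learn route suggested above, with a random sparse matrix standing in for the real 20,000-feature data:
````
import scipy.sparse as sp
from sklearn.cluster import MiniBatchKMeans

# Fake data: 1000 samples, 20000 features, ~0.1% of the entries non-zero
X = sp.rand(1000, 20000, density=0.001, format='csr')

km = MiniBatchKMeans(n_clusters=10, batch_size=100, random_state=0)
labels = km.fit_predict(X)        # cluster membership for each row
print(labels[:20])
````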
Pygame responds incorrectly to button clicks I am having an issue with pygame I have set up a window that randomly places circles across the screen very quickly just for testing purposes There are also three buttons: play/pause (switches back and forth stops circles from appearing) and an increase speed and decrease speed button I am not very experienced with python or pygame but I have come up with this function to create a clickable button on the screen: ````def makeButton(rect color hovercolor text textsize textcolor): clicked = False for event in pygame event get(): if event type == pygame MOUSEBUTTONDOWN: clicked = True mouse = pygame mouse get_pos() rect = pygame Rect(rect) displaycolor = color if rect collidepoint(mouse): displaycolor = hovercolor buttonSurface = pygame draw rect(gameDisplay displaycolor rect 0) font = pygame font Font('freesansbold ttf' textsize) TextSurf = font render(text True textcolor) TextRect = TextSurf get_rect() TextRect center = rect center gameDisplay blit(TextSurf TextRect) if clicked: return True else: return False ```` This function can definitely be shortened and simplified but it has worked for me up until now I took out a big chunk of code that I realized was useless (having a completely different block of code to render the button when hovered instead of just changing the display color) Now whenever I click any of the three previously-mentioned buttons it seems to pick a random one and return True messing up the rest of the program For example the play button will increase the speed one time pressing decrease speed will pause etc Sometimes it does do what it is supposed to but it seems to be random Some extra info if it is useful: -This function is called three times every tick It is inside a loop and if it returns true its corresponding actions are supposed to be performed (pause or play the game increase/decrease speed) -The play/pause button is one button that toggles between green with an 'play' arrow and red with a pause symbol They are two separate buttons and functions and only one of them is executed at a time -I have almost zero experience with classes so they may be way better at handling this situation -The only explanation I can think of for this problem is that the returned booleans are getting mixed up between the different places this function is used I am pretty sure the problem is within this chunk of code but ask me and I will post the places it is called too | "pygame event get()" takes one event at a time and *clears it** from the list of events that need to be processed So more specifically pygame event get() returns each event only <strong>once</strong> Take a look at the following code: ````clicked = False for event in pygame event get(): if event type == pygame MOUSEBUTTONDOWN: clicked = True ```` After this is called all of the events are removed Here is an analysis of the code Assume that there are currently two events that have not been processed the first being a key pressed down and the other being a mouse button that is been pressed down - The first event event KEYDOWN is put into the variable <them>"event"</them> - The program checks whether "event" (currently equal to event KEYDOWN) is equal to event MOUSEBUTTONDOWN They are obviously not the same thing so the next line is skipped - The second event event MOUSEBUTTONDOWN is put into variable <them>"event"</them> <strong>This removes what was previously in the variable "event" removing the first event from existence </strong> - The program checks whether "event" 
(currently equal to event MOUSEBUTTONDOWN) is equal to event MOUSEBUTTONDOWN It is so it proceeds to the next line - "clicked" is set to True and the for loop exits because there are no event remaining You should now have a better understanding of how Pygame processes events There are also many problems with the function you gave (makeButton) You should find a python tutorial to learn the rest I suggest a book called "Hello World" by Carter and Warren Sande The book is kind of out of date (teaches Python 2 5) but its code still works with Python 2 7 and it is one of the few decent Python books I have been able to find I have included the code to do what you are trying to do I do not use Rect objects but if you want them you can change the code to include them I also did not include the text because I am short on time Instead of placing random circles this prints text (to the she will) when buttons are clicked ````import pygame sys pygame init() screen = pygame display set_mode([640 480]) clock = pygame time Clock() buttons = [] #buttons = [[rect color hovercolor hovering clicked message]] def makeButton(rect color hovercolor text): global buttons buttons append([rect color hovercolor False False text]) makeButton([0 0 50 50] [0 127 0] [0 255 0] "Clicked Green") makeButton([50 0 50 50] [190 190 0] [255 255 0] "Clicked Yellow") makeButton([100 0 50 50] [0 0 127] [0 0 255] "Clicked Blue") while 1: clock tick(60) for event in pygame event get(): if event type == pygame MOUSEMOTION: mousepos = event pos for a in range(len(buttons)): if mousepos[0] >= buttons[a][0][0] and mousepos[0] <= buttons[a][0][0]+buttons[a][0][2] and mousepos[1] >= buttons[a][0][1] and mousepos[1] <= buttons[a][0][1]+buttons[a][0][3]: buttons[3] = True else: buttons[3] = False if event type == pygame MOUSEBUTTONDOWN: mousepos = event pos for a in range(len(buttons)): if mousepos[0] >= buttons[a][0][0] and mousepos[0] <= buttons[a][0][0]+buttons[a][0][2] and mousepos[1] >= buttons[a][0][1] and mousepos[1] <= buttons[a][0][1]+buttons[a][0][3]: buttons[4] = True else: buttons[4] = False for a in range(len(buttons)): if buttons[3] == 0: pygame draw rect(screen buttons[1] buttons[0]) else: pygame draw rect(screen buttons[2] buttons[0]) if buttons[4] == 1: buttons[4] = 0 print buttons[5] pygame display flip() ```` I have not had the opportunity to test out the code I just typed (using school computer) but it should work If there are any problems with the code just leave a comment and I will fix it Also leave a comment if you do not understand something Do not give up you can do it! |
Who succeeded Antimachus II? | Menander I |
How many aircrafts were operated commercially that were in compliance with FAA safety rules? | null |
R CMD BATCH script ARE execution via fabric run() never exits I have written a fabric script with boto to install a R application on AWS instance Fedora 23 All the commands using run & sudo function go as expected except this one: ````@parallel def install_DvD(): # with settings(hide('warnings' 'running' 'stdout' 'stderr') warn_only=True): cmd0 = 'R CMD BATCH %s/DvDdependencies R' % (DvDpackage_location) run(cmd0) ```` As you would noticed I tried using 'warn_only=true' and that did not help The installation completes successfully with out errors I check that manually by logging into the instance and eyeballing <them>DvDdependencies Rout</them> file I think for reasons unkonwn to me the R CMD BATCH command does not return the execution back to fabric The traceback output from Ctrl^c the fabric process on my local system is: ````[ec2-54-172-154-181 compute-1 amazonaws com] run: R CMD BATCH ~/DvDdependencies ARE [ec2-54-165-109-62 compute-1 amazonaws com] run: R CMD BATCH ~/DvDdependencies ARE ^C Stopped !!! Parallel execution exception under host you'ec2-54-165-109-62 compute-1 amazonaws com': !!! Parallel execution exception under host you'ec2-54-172-154-181 compute-1 amazonaws com': Process ec2-54-172-154-181 compute-1 amazonaws com: Traceback (most recent call last): File "/usr/lib64/python2 7/multiprocessing/process py" line 258 in _bootstrap self run() File "/usr/lib64/python2 7/multiprocessing/process py" line 114 in run Process ec2-54-165-109-62 compute-1 amazonaws com: self _target(*self _args **self _kwargs) File "/home/eyebell/local_bin/healX/DvD-installation/py2fabvirt/lib/python2 7/site-packages/fabric/tasks py" line 242 in inner Traceback (most recent call last): File "/usr/lib64/python2 7/multiprocessing/process py" line 258 in _bootstrap submit(task run(*args **kwargs)) File "/home/eyebell/local_bin/healX/DvD-installation/py2fabvirt/lib/python2 7/site-packages/fabric/tasks py" line 174 in run return self wrapped(*args **kwargs) File "/home/eyebell/local_bin/healX/DvD-installation/py2fabvirt/lib/python2 7/site-packages/fabric/decorators py" line 181 in inner self run() File "/usr/lib64/python2 7/multiprocessing/process py" line 114 in run self _target(*self _args **self _kwargs) File "/home/eyebell/local_bin/healX/DvD-installation/py2fabvirt/lib/python2 7/site-packages/fabric/tasks py" line 242 in inner return func(*args **kwargs) File "/home/eyebell/local_bin/healX/DvD-installation/fabfile py" line 70 in install_DvD run(cmd0) File "/home/eyebell/local_bin/healX/DvD-installation/py2fabvirt/lib/python2 7/site-packages/fabric/network py" line 649 in host_prompting_wrapper submit(task run(*args **kwargs)) File "/home/eyebell/local_bin/healX/DvD-installation/py2fabvirt/lib/python2 7/site-packages/fabric/tasks py" line 174 in run return self wrapped(*args **kwargs) File "/home/eyebell/local_bin/healX/DvD-installation/py2fabvirt/lib/python2 7/site-packages/fabric/decorators py" line 181 in inner return func(*args **kwargs) return func(*args **kwargs) File "/home/eyebell/local_bin/healX/DvD-installation/py2fabvirt/lib/python2 7/site-packages/fabric/operations py" line 1056 in run File "/home/eyebell/local_bin/healX/DvD-installation/fabfile py" line 70 in install_DvD run(cmd0) File "/home/eyebell/local_bin/healX/DvD-installation/py2fabvirt/lib/python2 7/site-packages/fabric/network py" line 649 in host_prompting_wrapper return func(*args **kwargs) File "/home/eyebell/local_bin/healX/DvD-installation/py2fabvirt/lib/python2 7/site-packages/fabric/operations py" line 1056 in run 
shell_escape=shell_escape) File "/home/eyebell/local_bin/healX/DvD-installation/py2fabvirt/lib/python2 7/site-packages/fabric/operations py" line 925 in _run_command stderr=stderr timeout=timeout) File "/home/eyebell/local_bin/healX/DvD-installation/py2fabvirt/lib/python2 7/site-packages/fabric/operations py" line 811 in _execute time sleep(ssh io_sleep) KeyboardInterrupt shell_escape=shell_escape) File "/home/eyebell/local_bin/healX/DvD-installation/py2fabvirt/lib/python2 7/site-packages/fabric/operations py" line 925 in _run_command stderr=stderr timeout=timeout) File "/home/eyebell/local_bin/healX/DvD-installation/py2fabvirt/lib/python2 7/site-packages/fabric/operations py" line 811 in _execute time sleep(ssh io_sleep) KeyboardInterrupt ```` The complete script is here in my <a href="https://github com/gprakhar/DvD-installation" rel="nofollow">github repo</a> the location of <a href="http://www ebi ac uk/saezrodriguez/DVD/data/DvDdependencies R" rel="nofollow">DvDdependencies R</a> (the script that I want to install) Any comments help or pointers in right direction are appreciated | I am closing this as the issue is with this script only Also the installation method for DvD tool has been updated hence this is not of interest anymore |
What is the dict class used for? Can someone explain what the `dict` class is used for? This snippet is from <em>Dive Into Python</em>: ````class FileInfo(dict): "store file metadata" def __init__(self, filename=None): self["name"] = filename ```` I understand the assignment of key=value pairs with `self['name'] = filename`, but what does inheriting the `dict` class have to do with this? Please help me understand | If you are not familiar with the inheritance concept of object-oriented programming, have a look at least at <a href="http://en.wikipedia.org/wiki/Inheritance_%28computer_science%29" rel="nofollow">this</a> wiki article (though it is only an introduction and may not be the best one). In Python we use this syntax to define class `A` as a subclass of class `B`: ````class A(B): pass # empty class ```` In your example, since the `FileInfo` class inherits from the standard <a href="http://docs.python.org/library/stdtypes.html#dict" rel="nofollow">`dict`</a> type, you can use instances of that class as dictionaries (they have all the methods that a regular `dict` object has). Among other things, that allows you to assign values by key like this (`dict` provides the method for handling this operation): ````self['name'] = filename ```` Is that the explanation you wanted, or is there something else you do not understand?
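A rough illustration of the answer above, reusing the `FileInfo` class from the question; the filename and the extra key are made up for the example:
````
# FileInfo inherits everything a normal dict can do; __init__ only
# pre-populates one key.
class FileInfo(dict):
    "store file metadata"
    def __init__(self, filename=None):
        self["name"] = filename

info = FileInfo("song.mp3")
info["size"] = 1024                # ordinary dict item assignment still works
print(info["name"])                # song.mp3
print(sorted(info.keys()))         # ['name', 'size'] -- inherited dict method
print(isinstance(info, dict))      # True
````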
How to add a bootstrap script to tox's virtualenv? I need psycopg2 and lxml for my tests, but when I try to install them in a virtualenv through tox it fails due to the missing pg_conf or other dependencies. I found this explanation of bootstrap scripts: <a href="http://www.virtualenv.org/en/latest/index.html#bootstrap-example" rel="nofollow">http://www.virtualenv.org/en/latest/index.html#bootstrap-example</a> How can I add a bootstrap script to tox's virtualenv? Do you know any good examples for my concerns (lxml and psycopg2)? | I don't think you can use bootstrap scripts (as described in the virtualenv docs) with tox. However, you can configure your `tox.ini` file to install Python dependencies that are not specified in `setup.py` and run arbitrary commands before running tests. From the tox home page: ````# content of: tox.ini, put in same dir as setup.py [tox] envlist = py26,py27 [testenv] deps=pytest # install pytest in the venvs commands=py.test # or 'nosetests' or ... ```` `deps` and `commands` are actually lists: ````deps= lxml psycopg2 pytest commands= /some_other_script.sh py.test ```` But forget about bootstrap scripts and take a step back. What is the original problem with pg_conf?
requests exceptions SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl c:590) I have a script that is made in python as below ````#!/bin/env python2 7 # Run around 1059 as early as 1055 # Polling times vary pick something nice # Ghost checkout timer can be changed by # adjusting for loop range near bottom # Fill out personal data in checkout payload dict import sys json time requests urllib2 from datetime import datetime qty='1' def UTCtoEST(): current=datetime now() return str(current) ' EST' print poll=raw_input("Polling interval? ") poll=int(poll) keyword=raw_input("Product name? ") title() # hardwire here by declaring keyword as a string color=raw_input("Color? ") title() # hardwire here by declaring keyword as a string sz=raw_input("Size? ") title() # hardwire here by declaring keyword as a string print print UTCtoEST() ':: Parsing page ' def main(): global ID global variant global cw req = urllib2 Request('http://www supremenewyork com/mobile_stock json') req add_header('User-Agent' "User-Agent' 'Mozilla/5 0 (iPhone; CPU iPhone OS 6_1_4 like Mac OS X) AppleWebKit/536 26 (KHTML like Gecko) Version/6 0 Mobile/10B350 Safari/8536 25") resp = urllib2 urlopen(req) data = json loads(resp read()) ID=0 for i in range(len(data[you'products_and_categories'] values())): for j in range(len(data[you'products_and_categories'] values()[i])): item=data[you'products_and_categories'] values()[i][j] name=str(item[you'name'] encode('ascii' 'ignore')) # SEARCH WORDS HERE # if string1 in name or string2 in name: if keyword in name: # match/(es) detected! # can return multiple matches but you are # probably buying for resell so it does not matter myproduct=name ID=str(item[you'id']) print UTCtoEST() '::' name ID 'found ( MATCHING ITEM DETECTED )' if (ID == 0): # variant flag unchanged - nothing found - rerun time sleep(poll) print UTCtoEST() ':: Reloading and reparsing page ' main() else: print UTCtoEST() ':: Selecting' str(myproduct) '(' str(ID) ')' jsonurl = 'http://www supremenewyork com/shop/'+str(ID)+' json' req = urllib2 Request(jsonurl) req add_header('User-Agent' "User-Agent' 'Mozilla/5 0 (iPhone; CPU iPhone OS 6_1_4 like Mac OS X) AppleWebKit/536 26 (KHTML like Gecko) Version/6 0 Mobile/10B350 Safari/8536 25") resp = urllib2 urlopen(req) data = json loads(resp read()) found=0 for numCW in data['styles']: # COLORWAY TERMS HERE # if string1 in numCW['name'] or string2 in numCW['name']: if color in numCW['name'] title(): for sizes in numCW['sizes']: # SIZE TERMS HERE if str(sizes['name'] title()) == sz: # Medium found=1; variant=str(sizes['id']) cw=numCW['name'] print UTCtoEST() ':: Selecting size:' sizes['name'] '(' numCW['name'] ')' '(' str(sizes['id']) ')' if found ==0: # DEFAULT CASE NEEDED HERE - EITHER COLORWAY NOT FOUND OR SIZE NOT IN RUN OF PRODUCT # PICKING FIRST COLORWAY AND LAST SIZE OPTION print UTCtoEST() ':: Selecting default colorway:' data['styles'][0]['name'] sizeName=str(data['styles'][0]['sizes'][len(data['styles'][0]['sizes'])-1]['name']) variant=str(data['styles'][0]['sizes'][len(data['styles'][0]['sizes'])-1]['id']) cw=data['styles'][0]['name'] print UTCtoEST() ':: Selecting default size:' sizeName '(' variant ')' main() session=requests Session() addUrl='http://www supremenewyork com/shop/'+str(ID)+'/add json' addHeaders={ 'Host': 'www supremenewyork com' 'Accept': 'application/json' 'Proxy-Connection': 'keep-alive' 'X-Requested-With': 'XMLHttpRequest' 'Accept-Encoding': 'gzip deflate' 'Accept-Language': 'en-us' 'Content-Type': 
'application/x-www-form-urlencoded' 'Origin': 'http://www supremenewyork com' 'Connection': 'keep-alive' 'User-Agent': 'Mozilla/5 0 (iPhone; CPU iPhone OS 7_1_2 like Mac OS X) AppleWebKit/537 51 2 (KHTML like Gecko) Mobile/11D257' 'Referer': 'http://www supremenewyork com/mobile' } addPayload={ 'size': str(variant) 'qty': '1' } print UTCtoEST() ' :: Adding product to cart ' addResp=session post(addUrl data=addPayload headers=addHeaders) print UTCtoEST() ' :: Checking status code of response ' if addResp status_code!=200: print UTCtoEST() ' ::' addResp status_code 'Error \nExiting ' print sys exit() else: if addResp json()==[]: print UTCtoEST() ' :: Response Empty! - Problem Adding to Cart\nExiting ' print sys exit() print UTCtoEST() ' :: '+str(cw)+' - '+addResp json()[0]['name']+' - '+ addResp json()[0]['size_name']+' added to cart!' checkoutUrl='https://www supremenewyork com/checkout json' checkoutHeaders={ 'host': 'www supremenewyork com' 'If-None-Match': '"*"' 'Accept': 'application/json' 'Proxy-Connection': 'keep-alive' 'Accept-Encoding': 'gzip deflate' 'Accept-Language': 'en-us' 'Content-Type': 'application/x-www-form-urlencoded' 'Origin': 'http://www supremenewyork com' 'Connection': 'keep-alive' 'User-Agent': 'Mozilla/5 0 (iPhone; CPU iPhone OS 7_1_2 like Mac OS X) AppleWebKit/537 51 2 (KHTML like Gecko) Mobile/11D257' 'Referer': 'http://www supremenewyork com/mobile' } ################################# # FILL OUT THESE FIELDS AS NEEDED ################################# checkoutPayload={ 'store_credit_id': '' 'from_mobile': '1' 'cookie-sub': '%7B%22'+str(variant)+'%22%3A1%7D' # cookie-sub: eg {"VARIANT":1} urlencoded 'same_as_billing_address': '1' 'order[billing_name]': 'anon mous' # FirstName LastName 'order[email]': 'anon@mailinator com' # email@domain com 'order[tel]': '999-999-9999' # phone-number-here 'order[billing_address]': '123 Seurat lane' # your address 'order[billing_address_2]': '' 'order[billing_zip]': '90210' # zip code 'order[billing_city]': 'Beverly Hills' # city 'order[billing_state]': 'CA' # state 'order[billing_country]': 'USA' # country 'store_address': '1' 'credit_card[type]': 'visa' # master or visa 'credit_card[cnb]': '9999 9999 9999 9999' # credit card number 'credit_card[month]': '01' # expiration month 'credit_card[year]': '2026' # expiration year 'credit_card[vval]': '123' # cvc/cvv 'order[terms]': '0' 'order[terms]': '1' } # GHOST CHECKOUT PREVENTION WITH ROLLING PRINT for i in range(5): sys stdout write("\r" UTCtoEST()+ ' :: Sleeping for '+str(5-i)+' seconds to avoid ghost checkout ') sys stdout flush() time sleep(1) print print UTCtoEST()+ ' :: Firing checkout request!' checkoutResp=session post(checkoutUrl data=checkoutPayload headers=checkoutHeaders) try: print UTCtoEST()+ ' :: Checkout' checkoutResp json()['status'] title()+'!' except: print UTCtoEST()+':: Error reading status key of response!' print checkoutResp json() print print checkoutResp json() if checkoutResp json()['status']=='failed': print print '!!!ERROR!!! 
::' checkoutResp json()['errors'] print ```` When I want to run it everything goes correctly but at the end it is giving me this error: ````Traceback (most recent call last): File "/Users/"USERNAME"/Desktop/supreme py" line 167 in <module> checkoutResp=session post(checkoutUrl data=checkoutPayload headers=checkoutHeaders) File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/site-packages/requests/sessions py" line 522 in post return self request('POST' url data=data json=json **kwargs) File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/site-packages/requests/sessions py" line 475 in request resp = self send(prep **send_kwargs) File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/site-packages/requests/sessions py" line 596 in send are = adapter send(request **kwargs) File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/site-packages/requests/adapters py" line 497 in send raise SSLError(e request=request) SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl c:590) ```` | As can be seen from <a href="https://www ssllabs com/ssltest/analyze html?d=www supremenewyork com" rel="nofollow">the SSLLabs report</a> the server supports only TLS 1 2 and supports only very few ciphers Given a path like `/Users/ ` in your error output I assume that you are using Mac OS X The version of OpenSSL shipped with Mac OS X and probably used by your Python is 0 9 8 which is too old to support TLS 1 2 This means that the server will not accept the SSL 3 0 or TLS 1 0 request from your client For information on how to fix this problem by updating your OpenSSL version see <a href="http://stackoverflow com/questions/18752409/updating-openssl-in-python-2-7">Updating openssl in python 2 7</a> |
Keeping a count of words in a list without using any count method in python? I need to keep a count of words in the list that appear once in a list and one list for words that appear twice without using any count method I tried using a set but it removes only the duplicate not the original Is there any way to keep the words appearing once in one list and words that appear twice in another list? the sample file is `text = ['Andy Fennimore Cooper\n' 'Peter Paul and Mary\n' 'Andy Gosling\n']` so technically Andy and Andy would be in one list and the rest in the other Using dictionaries is not allowed :/ ````for word in text: clean = clean_up(word) for words in clean split(): clean2 = clean_up(words) l = clean_list append(clean2) if clean2 not in clean_list: clean_list append(clean2) print(clean_list) ```` | If you can use a set (I would not use it either if you are not allowed to use dictionaries) then you can use the set to keep track of what words you have 'seen' and another one for the words that appear more than once Eg: ````seen = set() duplicate = set() ```` Then each time you get a word test if it is on `seen` If it is not add it to `seen` If it is in `seen` add it to `duplicate` At the end you would have a set of `seen` words containing <them>all</them> the words and a `duplicate` set with all those that appear more than once Then you only need to substract `duplicate` from `seen` and the result is the words that have no duplicates (ie the ones that appear only once) This can also be implemented using only lists (which would be more honest to your homework if a bit more laborious) |
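A minimal sketch of the list-only variant mentioned at the end of the answer; the `clean_up` function from the question is approximated here by a plain `strip`, which is an assumption:
````
text = ['Andy Fennimore Cooper\n', 'Peter Paul and Mary\n', 'Andy Gosling\n']

seen = []        # every distinct word met so far
duplicates = []  # words met more than once

for line in text:
    for word in line.split():
        word = word.strip(".,'")   # stand-in for the question's clean_up()
        if word in seen:
            if word not in duplicates:
                duplicates.append(word)
        else:
            seen.append(word)

once = [w for w in seen if w not in duplicates]
print(duplicates)   # ['Andy']
print(once)         # the words that appear exactly once
````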
Python: DBSCAN in 3 dimensional space I have been searching around for an implementation of DBSCAN for 3 dimensional points without much luck Does anyone know I library that handles this or has any experience with doing this? I am assuming that the DBSCAN algorithm can handle 3 dimensions by having the e value be a radius metric and the distance between points measured by euclidean separation If anyone has tried implementing this and would like to share that would also be greatly appreciated thanks | So this is what I came up with I know it is not the most efficient implementation but it works; for example the region query which is the main time eater of the algorithm computes the distance between two points more than once instead of just storing it for use later ````class DBSCAN(object): def __init__(self eps=0 min_points=2): self eps = eps self min_points = min_points self visited = [] self noise = [] self clusters = [] self dp = [] def cluster(self data_points): self visited = [] self dp = data_points c = 0 for point in data_points: if point not in self visited: self visited append(point) neighbours = self region_query(point) if len(neighbours) < self min_points: self noise append(point) else: c = 1 self expand_cluster(c neighbours) def expand_cluster(self cluster_number p_neighbours): cluster = ("Cluster: %d" % cluster_number []) self clusters append(cluster) new_points = p_neighbours while new_points: new_points = self pool(cluster new_points) def region_query(self p): result = [] for d in self dp: distance = (((d[0] - p[0])**2 (d[1] - p[1])**2 (d[2] - p[2])**2)**0 5) if distance <= self eps: result append(d) return result def pool(self cluster p_neighbours): new_neighbours = [] for n in p_neighbours: if n not in self visited: self visited append(n) n_neighbours = self region_query(n) if len(n_neighbours) >= self min_points: new_neighbours = self unexplored(p_neighbours n_neighbours) for c in self clusters: if n not in c[1] and n not in cluster[1]: cluster[1] append(n) return new_neighbours @staticmethod def unexplored(x y): z = [] for p in y: if p not in x: z append(p) return z ```` |
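A hypothetical call to the class defined in the answer above (with its dotted attribute access restored); the points, `eps` and `min_points` values are invented for illustration:
````
# Two small dense regions plus one isolated point.
points = [
    (0.0, 0.0, 0.0), (0.1, 0.0, 0.2), (0.0, 0.2, 0.1),
    (5.0, 5.0, 5.0), (5.1, 4.9, 5.0), (5.0, 5.2, 5.1),
    (20.0, 20.0, 20.0),
]

scanner = DBSCAN(eps=0.5, min_points=3)
scanner.cluster(points)

for name, members in scanner.clusters:
    print(name, len(members), "points")
print("noise:", scanner.noise)
````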
Custom Python Encryption algorithm Hey, I have been working on this for a while, and I can remember my brother stepped me through this very same algorithm. Basically it just adds the ASCII values of both the characters from the key and the phrase. I can encrypt it with this: ````def encrypt(key, string): encoded = '' for i in range(len(string)): key_c = ord(key[i % len(key)]) string_c = ord(string[i % len(string)]) encoded += chr((key_c + string_c) % 127) return encoded ```` But I cannot seem to remember what we did as far as decrypting. It's difficult to reverse a mod :P Any ideas? | Decryption is the same, except with minus instead of plus.
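Based on that answer, a sketch of the matching decrypt — the same loop with the key byte subtracted instead of added:
````
def decrypt(key, string):
    decoded = ''
    for i in range(len(string)):
        key_c = ord(key[i % len(key)])
        enc_c = ord(string[i])
        # undo the addition; % 127 maps back to the original character
        # because the plaintext characters were all below 127
        decoded += chr((enc_c - key_c) % 127)
    return decoded

# round trip with a made-up key/message:
# decrypt('secret', encrypt('secret', 'hello world')) == 'hello world'
````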
How to extract images from a document using pypandoc into a different folder in the media repository of a project in Django? I am currently trying to extract the images from a document that the user is uploading into the media repository of my Django app. The code that currently works for me is: ```` html = pypandoc.convert( tmp_loc, 'html5', extra_args=['--extract-media='] ) ```` This correctly extracts the images into the media directory as image01.jpg. In HTML the img src is: ````<img src="/media/image01.jpg" /> ```` Now the problem is that when the user uploads another docx which also has an image, it replaces the previous image, as it is also saved under the name image01.jpg. To solve this problem I thought we could just create a new folder in the media repository, where the name of the new folder would be the doc-name. So now the code looks like this: ````html = pypandoc.convert( tmp_loc, 'html5', extra_args=['--extract-media=/media/<some_doc_name>'] ) ```` But the moment I run this I get the following error: ````Pandoc died with exitcode "1" during conversion: b'pandoc: /media/docs: createDirectory: permission denied (Permission denied)\n' ```` Could someone guide me on what is going wrong? How do I fix this? Any alternative methods of solving this problem would also be appreciated!! I am using the pypandoc module in Python | The error you are getting clearly says that you do not have permission to create a directory under /media/docs. There may be multiple reasons why such a thing happens: - you do not have permission to create subdirectories under "/media/docs" - just change the permissions; - you have permissions, but you are running your application under another user name that does not have them - create a group and change the permissions for that group; - you want to extract to the "media" directory under your application, not the system root "/media" - your path is wrongly specified and should have more before it, e.g. "/home/user/program/media/docs" or "media/docs" (without the leading "/"); - you are trying to extract data to a non-existing subdirectory and pandoc cannot create the "parent" directories - so make sure that the directory is created first. Last thing - if you are uploading documents, do not assume that they have unique names; use something unique (like the primary key of the created record), or check uniqueness by validating that the directory does not exist, and if it is there, create a new one with some additional number or random text at the end.
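A sketch of the per-document folder idea, untested against the project above: the extract path is built under Django's `MEDIA_ROOT` (not the filesystem root `/media`) and created before pandoc runs; `convert_docx` and `doc_name` are hypothetical names:
````
import errno
import os

import pypandoc
from django.conf import settings

def convert_docx(tmp_loc, doc_name):
    # one sub-folder of MEDIA_ROOT per uploaded document
    media_dir = os.path.join(settings.MEDIA_ROOT, doc_name)
    try:
        os.makedirs(media_dir)
    except OSError as exc:
        if exc.errno != errno.EEXIST:   # an existing folder is fine
            raise
    return pypandoc.convert(
        tmp_loc,
        'html5',
        extra_args=['--extract-media=' + media_dir],
    )
````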
multiple conditions to a if loop python Hello I am having some problems with some code I am writing for a school and its not working It will not run through the whole loop but all the code looks right to me Does anyone have any ideas? ```` User_Input = input ( "Please enter a message that is no longer than 7 characters " ) User_Input_Length = len ( User_Input ) if { User_Input_Length >= 7 and User_Input_Length == 1 }: print ( "Your message is odd " ) elif { User_Input_Length >= 7 and User_Input_Length == 3 }: print ( "Your message is odd " ) elif { User_Input_Legnth >= 7 and User_Input_Length == 5 }: print ( "Your message is odd " ) elif { User_Input_Length >= 7 and User_Input_Length == 7 }: print ( "Your message is odd" ) elif { User_Input_Length >= 7 and User_Input_Length == 2 }: print ( "Your message is even" ) elif { User_Input_Length >= 7 and User_Input_Length == 4 }: print ( "Your message is even" ) elif { User_Input_Length >= 7 and User_Input_Length == 6 }: print ( "Your string is even" ) else: print ( "The string is too long " ) print ( "Goodbye" ) ```` | You are not testing what you think you are Your expressions look like: ````{ User_Input_Length >= 7 and User_Input_Length == 1 } ```` In Python `{}` encloses a `set` or `dict` (or comprehension of either) This is thus a set containing one `bool` value Per <a href="https://docs python org/3/library/stdtypes html#truth-value-testing" rel="nofollow">truth value testing</a> any set that contains members is regarded as True so your first test will be the only branch taken Secondly the inner (unchecked) condition tests `User_Input_Length` for simultaneously being 7 or greater <strong>and</strong> some other value; only the one with 7 could ever be true |
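One possible corrected version, assuming the intent is to accept messages of at most 7 characters and report whether the length is odd or even; the braces are gone and the parity checks collapse into a single modulo test:
````
user_input = input("Please enter a message that is no longer than 7 characters ")
length = len(user_input)

if length > 7:
    print("The string is too long.")
elif length % 2 == 1:
    print("Your message is odd.")
else:
    print("Your message is even.")

print("Goodbye")
````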
PyQt QRC resource icons missing This is my `qrc_resources.qrc` file ````<!DOCTYPE RCC><RCC version="1.0"> <qresource> <file alias="fault.png">images/fault.png</file> <file alias="schedule.png">images/schedule.png</file> <file alias="inventory.png">images/inventory.png</file> <file alias="setting.png">images/setting.png</file> <file alias="test.png">images/test.png</file> <file alias="te.png">images/fault.png</file> <file alias="local.png">images/local.png</file> <file alias="newfault.png">images/newfault.png</file> <file alias="newschedule.png">images/newschedule.png</file> <file alias="viewschedule.png">images/viewschedule.png</file> <file alias="lock.png">images/lock.png</file> </qresource> </RCC> ```` and this is my `pyqt code` ````newfault = QtGui.QPushButton(QtGui.QIcon(":\newfault.png"), "Add \nNew Fault") newfault.setIconSize(QtCore.QSize(30, 30)) viewfault = QtGui.QPushButton(QtGui.QIcon(":\viewfault.png"), "View Faults") viewfault.setIconSize(QtCore.QSize(30, 30)) ```` but the images do not show up in the QPushButton. What is causing the problem? FYI, I did import the qrc_resources file, and not all the png files are missing. If I use some of the images in the resource file (for example settings.png) they show up on the button. If I use `fault.png, test.png` nothing shows up. <img src="http://i.stack.imgur.com/SgRyb.png" alt="enter image description here"> | My bad! I was using `:\` to import from qrc_resources. In fact I have to use `:/`
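A sketch of the corrected code from that answer: resource paths start with `:/`, not `:\`. This assumes PyQt4 and that `qrc_resources.py` was generated with pyrcc4; the second button is pointed at the `viewschedule.png` alias here because `viewfault.png` is not declared in the .qrc shown above.
````
from PyQt4 import QtCore, QtGui

import qrc_resources  # importing registers the compiled resources

newfault = QtGui.QPushButton(QtGui.QIcon(":/newfault.png"), "Add \nNew Fault")
newfault.setIconSize(QtCore.QSize(30, 30))

viewfault = QtGui.QPushButton(QtGui.QIcon(":/viewschedule.png"), "View Faults")
viewfault.setIconSize(QtCore.QSize(30, 30))
````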
Python client error 'Connection reset by peer' I have written a very small python client to access confluence restful api I am using https protocol to connect with the confluence I am running into `Connection reset by peer` error Here is the full stack trace ````/Users/rakesh kumar/ virtualenvs/wpToConfluence py/lib/python2 7/site-packages/requests/packages/urllib3/util/ssl_ py:318: SNIMissingWarning: An HTTPS request has been made but the SNI (Subject Name Indication) extension to TLS is not available on this platform This may cause the server to present an incorrect TLS certificate which can cause validation failures You can upgrade to a newer version of Python to solve this For more information see https://urllib3 readthedocs org/en/latest/security html#snimissingwarning SNIMissingWarning /Users/rakesh kumar/ virtualenvs/wpToConfluence py/lib/python2 7/site-packages/requests/packages/urllib3/util/ssl_ py:122: InsecurePlatformWarning: A true SSLContext object is not available This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail You can upgrade to a newer version of Python to solve this For more information see https://urllib3 readthedocs org/en/latest/security html#insecureplatformwarning InsecurePlatformWarning Traceback (most recent call last): File "wpToConfluence py" line 15 in <module> main() File "wpToConfluence py" line 11 in main headers={'content-type': 'application/json'}) File "/Users/rakesh kumar/ virtualenvs/wpToConfluence py/lib/python2 7/site-packages/requests/api py" line 71 in get return request('get' url params=params **kwargs) File "/Users/rakesh kumar/ virtualenvs/wpToConfluence py/lib/python2 7/site-packages/requests/api py" line 57 in request return session request(method=method url=url **kwargs) File "/Users/rakesh kumar/ virtualenvs/wpToConfluence py/lib/python2 7/site-packages/requests/sessions py" line 475 in request resp = self send(prep **send_kwargs) File "/Users/rakesh kumar/ virtualenvs/wpToConfluence py/lib/python2 7/site-packages/requests/sessions py" line 585 in send are = adapter send(request **kwargs) File "/Users/rakesh kumar/ virtualenvs/wpToConfluence py/lib/python2 7/site-packages/requests/adapters py" line 453 in send raise ConnectionError(err request=request) requests exceptions ConnectionError: ('Connection aborted ' error(54 'Connection reset by peer')) ```` Here is my client code: ````import requests def main(): auth = open('/tmp/confluence' 'r') readline() strip() username = 'rakesh kumar' response = requests get("https://<HOST-NAME>/rest/api/content/" auth=(username auth) headers={'content-type': 'application/json'}) print response if __name__ == "__main__": main() ```` I am running this script in a virtual environment and following packages are installed on that environment: ````(wpToConfluence py)â Python pip list You are using pip version 6 1 1 however version 8 1 2 is available You should consider upgrading via the 'pip install --upgrade pip' command appnope (0 1 0) backports shutil-get-terminal-size (1 0 0) decorator (4 0 10) ipdb (0 10 1) ipython (5 0 0) ipython-genutils (0 1 0) pathlib2 (2 1 0) pexpect (4 2 0) pickleshare (0 7 3) pip (6 1 1) prompt-toolkit (1 0 5) ptyprocess (0 5 1) Pygments (2 1 3) requests (2 10 0) setuptools (25 1 6) simplegeneric (0 8 1) six (1 10 0) traitlets (4 2 2) urllib3 (1 16) wcwidth (0 1 7) ```` It does complain about the python version number but I am not sure how to update my Mac/Virtual environment python I have tried to curl command and Postman both of 
them work fine for the given parameters | While installing `requests` library it skips few of <a href="https://github com/kennethreitz/requests/blob/5a799dd0f505e6c6c2ff67e227f6a3d25c086342/setup py#L71" rel="nofollow">optional security packages</a> ('pyOpenSSL' 'ndg-httpsclient' and 'pyasn1') which are required for the SSL/Https connection You can fix it by either running this command ````pip install "requests[security]" ```` or ````pip install pyopenssl ndg-httpsclient pyasn1 ```` |
Python __init__ return failure to create First off I know that the `__init__()` function of a class in Python cannot return a value so sadly this option is unavailable Due to the structure of my code it makes sense to have data assertions (and prompts for the user to give information) inside the `__init__` function of the class However this means that the creation of the object can fail and I would like to be able to gracefully recover from this I was wondering what the best way to continue with this is I have considered setting a global boolean as a 'valid construction' flag but I would prefer not to Any other ideas (besides restructuring so assertions can happen outside of the initialization and values are passed in as arguments)? I am basically looking for a way to have return 0 on success and return -1 on failure during initialization (Like most C system calls) | Have you considered raising an exception? That is the usual way to signal a failure |
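A minimal sketch of the exception approach, with an invented class and check: the failed validation raises inside `__init__`, and the caller recovers with try/except instead of testing a C-style return code.
````
class Job(object):
    def __init__(self, path):
        # stand-in for the real assertions / user prompts
        if not path.endswith('.csv'):
            raise ValueError("not a CSV file: %s" % path)
        self.path = path

try:
    job = Job("/tmp/data.txt")
except ValueError as exc:
    print("could not create Job: %s" % exc)
    job = None          # graceful recovery, the object was never created
````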
Who sent Mahayana missionaries to India? | null |
What is the best way to search for a large number of words in a large number of files? I have around 5000 files and I need to find words in each of them from a list of 10000 words My current code uses a (very) long regex to do it but it is very slow ````wordlist = [ list of around 10000 english words ] filelist = [ list of around 5000 filenames ] wordlistre = re compile('|' join(wordlist) re IGNORECASE) discovered = [] for x in filelist: with open(x 'r') as f: found = wordlistre findall(f read()) if found: discovered = [x found] ```` This checks files at a rate of around 5 files per second which is a lot faster than doing it manually however it is still very slow Is there a better way to do this? | If you have access to `grep` on a command line you can try the following: ````grep -i -f wordlist txt -r DIRECTORY_OF_FILES ```` You will need to create a file `wordlist txt` of all the words (one word per line) Any lines in any of your files that match any of your words will be printed to STDOUT in the following format: ````<path/to/file>:<matching line> ```` |
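If the word list only exists inside the Python script, here is a hedged sketch of driving that grep command from Python (the directory path is made up; `-F` and `-w` ask for fixed-string, whole-word matches and can be dropped to mimic the original regex behaviour):
````
import subprocess

wordlist = ['apple', 'banana', 'cherry']     # your ~10000 words
directory_of_files = '/path/to/files'        # made-up path

with open('wordlist.txt', 'w') as f:
    f.write('\n'.join(wordlist) + '\n')

try:
    output = subprocess.check_output(
        ['grep', '-i', '-F', '-w', '-r', '-f', 'wordlist.txt', directory_of_files])
except subprocess.CalledProcessError:
    output = b''   # grep exits with status 1 when nothing matches
print(output.decode())
````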
Find out if lat/lon coordinates fall inside a US State Using Python is there a function library or other way to check to see if a certain lat/lon coordinate pair fall within a defined geographical boundary such as a US state? Example: Does 32 781065 -96 797117 fall within Texas? | I would use the <a href="http://docs python-requests org/en/latest/" rel="nofollow">requests library</a> to send a request to the <a href="https://developers google com/maps/documentation/geocoding/#reverse-example" rel="nofollow">Google geocoding API</a> |
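A rough sketch of that idea; the endpoint and response fields below follow my understanding of the Google reverse-geocoding API and should be checked against its current documentation (an API key is required):
````
import requests

def state_of(lat, lon, api_key):
    resp = requests.get(
        'https://maps.googleapis.com/maps/api/geocode/json',
        params={'latlng': '%s,%s' % (lat, lon), 'key': api_key})
    for result in resp.json().get('results', []):
        for comp in result.get('address_components', []):
            if 'administrative_area_level_1' in comp.get('types', []):
                return comp.get('short_name')   # e.g. 'TX'
    return None

# state_of(32.781065, -96.797117, 'YOUR_KEY') should return 'TX'
````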
Extracting a word plus 20 more from a section (python) Jep still playing around with Python I decided to try out Gensim a tool to find out topics for a choosen word & context So I wondered how to find a word in a section of text and extract 20 words together with it (as in 10 words before that spectic word and 10 words after that specific word) then to save it together with other such extractions so Gensim could be run on it What seems to be hard for me is to find a way to extract the 10 before and after words when the choosen word is found I played with nltk before and by just tokenizing the text into words or sentences it was easy to get hold of the sentences Still getting those words or the sentences before and after that specific sentence seems hard for me to figure out how to do For those who are confused (it is 1am here so I may be confusing) I will show it with an example: <blockquote> As soon as it had finished all her blood rushed to her heart for she was so angry to hear that Snow-White was yet living "But now " thought she to herself "will I make something which shall destroy her completely " Thus saying she made a poisoned comb by arts which she understood and then disguising herself she took the form of an old widow She went over the seven hills to the house of the seven Dwarfs and[15] knocking at the door called out "Good wares to sell to-day!" </blockquote> If we say the word is Snow-White then I would want to get this part extracted: <blockquote> her heart for she was so angry to hear that Snow-White was yet living "But now " thought she to herself "will </blockquote> 10 word before and after Snow-White It is also cool enough to instead get the sentence before and after the sentence Snow-White appeared in if this can be done in nltk and is easier I mean whatever works best I shall be happy with one of the two solutions if someone could help me If this can be done with Gensim too and that is easier then I shall be happy with that too So any of the 3 ways will be fine I just want to try and see how this can be done because atm my head is blank | ````strs=""" As soon as it had finished all her blood rushed to her heart for she was so angry to hear that Snow-White was yet living "But now " thought she to herself "will I make something which shall destroy her completely " Thus saying she made a poisoned comb by arts which she understood and then disguising herself she took the form of an old widow She went over the seven hills to the house of the seven Dwarfs and[15] knocking at the door called out "Good wares to sell to-day!" """ spl=strs split() def ans(word): for ind x in enumerate(spl): if x strip(" ' !")==word or x strip(' " !')==word: break print(" " join(spl[ind-10:ind]+spl[ind:ind+11])) ans('Snow-White') her heart for she was so angry to hear that Snow-White was yet living "But now " thought she to herself "will ```` |
What does Article 370 say? | special provisions for the state of Jammu and Kashmir as per its Instrument of Accession |
In South East Asia, what term is commonly used to refer to Min-nan dialects? | Hokkien |
Remove Safely Reference in rdflib If I have a `rdflib Uriref` that point to a resource that I do not need any more How can i remove it safely using rdflib If for example I just remove all the triples that refer to it may be a could broke something like a Bnode that is a list | <blockquote> If I have a rdflib Uriref that point to a resource that I do not need any more How can i remove it safely using rdflib </blockquote> An RDF graph is just a collection of triples It does not contain any resources or nodes independent of those triples <blockquote> If for example I just remove all the triples that refer to it may be a could broke something like a Bnode that is a list </blockquote> Removing all the triples that use a URI resource is the correct way to "remove it from the graph" There is no way for this to "break" the graph Whether it invalidates any structure in the graph is another question but one that you would have to answer based on the structure that you are putting in the graph You would need to check in advance whether the resource appears in any triples that should not be removed |
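A small sketch of what that removal looks like in rdflib, where `None` acts as a wildcard in the triple pattern passed to `Graph.remove`; the file name and URI are made up:
````
from rdflib import Graph, URIRef

g = Graph()
g.parse("data.ttl", format="turtle")

doomed = URIRef("http://example.org/resource/42")

g.remove((doomed, None, None))   # triples where it is the subject
g.remove((None, doomed, None))   # ... the predicate
g.remove((None, None, doomed))   # ... the object
````
If the node happened to be an element of an RDF collection, the surrounding rdf:first/rdf:rest links would still need to be re-joined by hand, which is the structural check the answer refers to.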
What is the success rate for male Aeta hunters? | 17% |
Combine multiple separate lists into a list of lists Below are three lists which I would like to combine into a single nested list: ````List_1=[1 2 3] List_2=[4 5 6] List_3=[7 8 9] ```` My attempt: ````List_x=[] List_x append(List_1) List_x append(List_2) List_x append(List_3) print List_x ```` Result: ````[[1 2 3] [4 5 6] [7 8 9]] ```` Desired result: Same as the result I got but the approach is extremely slow given the size of my actual data | If you need a nested list (list of list) Concat them like this: ````>>> l1 = [1 2 3] >>> l2 = [4 5 6] >>> l3 = [7 8 9] >>> l123 = [l1 l2 l3] >>> l123 [[1 2 3] [4 5 6] [7 8 9]] ```` If you want a flattened combined list use <a href="http://docs python org/2/library/itertools html#itertools chain" rel="nofollow">`itertools chain`</a>: ````>>> l1 = [1 2 3] >>> l2 = [4 5 6] >>> l3 = [7 8 9] >>> from itertools import chain >>> list(chain(*[l1 l2 l3])) [1 2 3 4 5 6 7 8 9] ```` If memory space is a problem you can use `append`: ````>>> l1 = [[1 2 3]] >>> l1 append([4 5 6]) >>> l1 append([7 8 9]) >>> l1 [[1 2 3] [4 5 6] [7 8 9]] ```` If you want a flattened list and memory is a problem use `extend`: ````>>> l1 = [1 2 3] >>> l1 extend([4 5 6]) >>> l1 extend([7 8 9]) >>> l1 [1 2 3 4 5 6 7 8 9] ```` |
How to create a custom admin configuration panel in Django? I would like to create a configuration panel for the homepage of the web-app I am designing with Django This configuration panel should let me choose some basic options like highlighting some news setting a showcase banner and so on Basically I do not need an app with different rows but just a panel page with some configuration options The automatically generated administration area created by Django does not seem to handle this feature as far as I can see so I am asking you for some directions Any hint is highly appreciated Thank you in advance Matteo | The automatically generated administration area created by Django is for data maintenance It provides forms to edit data in your models If it does not "handle this feature" then it sounds like your "configuration panel" (configuration panel should let me choose some basic options like highlighting some news setting a showcase banner and so on) does not have any data model If you define a model with basic options like highlighting some news setting a showcase banner and so on then the Django admin will update rows in the model You can then use the model data to configure your application If -- for some reason -- you do not want to put this in the database then there will never be an automatically generated administration area created by Django |
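A sketch of the model-backed approach the answer describes; every field name here is invented, and a real project would probably also enforce that only one row of this table exists:
````
# models.py
from django.db import models

class HomepageConfig(models.Model):
    highlight_news = models.BooleanField(default=False)
    highlighted_news_title = models.CharField(max_length=200, blank=True)
    showcase_banner = models.ImageField(upload_to='banners/', blank=True)

# admin.py -- the stock Django admin then acts as the configuration panel
from django.contrib import admin
from myapp.models import HomepageConfig   # adjust to your app's path

admin.site.register(HomepageConfig)
````
The homepage view can then load the single row (for example with `HomepageConfig.objects.first()`) and hand it to the template.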