Dataset columns: qid (int64), question (string), date (string), metadata (sequence of 3 strings), response_j (string), response_k (string), __index_level_0__ (int64).
14,129,983
I need a script that updates my copy of a repository. When I type "svn up" I am usually forced to enter a password; how do I automate the password entry? What I've tried: ``` import pexpect, sys, re pexpect.run("svn cleanup") child = pexpect.spawn('svn up') child.logfile = sys.stdout child.expect("Enter passphrase for key \'/home/rcompton/.ssh/id_rsa\':") child.sendline("majorSecurityBreach") matchanything = re.compile('.*', re.DOTALL) child.expect(matchanything) child.close() ``` But it does not seem to be updating. **edit:** If it matters, I can get my repository to update with child.interact() ``` import pexpect, sys, re pexpect.run("svn cleanup") child = pexpect.spawn('svn up') child.logfile = sys.stdout i = child.expect("Enter passphrase for key \'/home/rcompton/.ssh/id_rsa\':") child.interact() ``` allows me to enter my password and starts updating. However, I end up with an error anyway. ``` -bash-3.2$ python2.7 exRepUpdate.py Enter passphrase for key '/home/rcompton/.ssh/id_rsa': At revision 4386. At revision 4386. Traceback (most recent call last): File "exRepUpdate.py", line 13, in <module> child.interact() File "build/bdist.linux-x86_64/egg/pexpect.py", line 1497, in interact File "build/bdist.linux-x86_64/egg/pexpect.py", line 1525, in __interact_copy File "build/bdist.linux-x86_64/egg/pexpect.py", line 1515, in __interact_read OSError: [Errno 5] Input/output error ``` **edit:** Alright, I found a way around plaintext password entry. An important detail I left out (which, honestly, I didn't think I'd need since this seemed like it would be an easy problem) is that I had to send a public key to our IT dept. when I first got access to the repo. Avoiding the password entry within the ssh+svn setup that I'm dealing with can be done with ssh-agent. This link: <http://mah.everybody.org/docs/ssh> gives an easy overview. The solution from Joseph M. Reagle, by way of Daniel Starin, only requires that I enter my password once, on login, allowing me to execute my script each night despite the passphrase prompt.
2013/01/02
[ "https://Stackoverflow.com/questions/14129983", "https://Stackoverflow.com", "https://Stackoverflow.com/users/424631/" ]
If you don't want to type password many times, but still have a secure solution you can use **ssh-agent** to keep your key passphrases for a while. If you use your default private key simply type `ssh-add` and give your passphrase when asked. More details on `ssh-add` command usage are here: [linux.die.net/man/1/ssh-add](http://linux.die.net/man/1/ssh-add)
You should really just use ssh with public keys. In the absence of that, you can simply create a new file in `~/.subversion/auth/svn.simple/` with the contents: ``` K 8 passtype V 6 simple K 999 password V 7 password_goes_here K 15 svn:realmstring V 999 <url> real_identifier K 8 username V 999 username_goes_here END ``` The 999 numbers are the length of the next line (minus `\n`). The filename should be the MD5 sum of the realm string.
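The "MD5 sum of the realm string" detail can be computed with Python's standard library; a small sketch (the realm string below is a made-up example, use the one svn prints for your repository):

```python
import hashlib

# hypothetical realm string; svn shows yours when it prompts for credentials
realm = "<svn://svn.example.com:3690> Example Realm"

# the cached-credentials filename is the MD5 hex digest of the realm string
filename = hashlib.md5(realm.encode("utf-8")).hexdigest()
print(filename)
```

The resulting 32-character hex name is what the file in `~/.subversion/auth/svn.simple/` should be called.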
2,186
23,390,397
So I've been at this one for a little while and can't seem to get it. I'm trying to execute a Python script via the terminal and want to pass a string value with it. That way, when the script starts, it can check that value and act accordingly. Like this: ``` sudo python myscript.py mystring ``` How can I go about doing this? I know there's a way to start and stop a script using bash, but that's not really what I'm looking for. Any and all help accepted!
2014/04/30
[ "https://Stackoverflow.com/questions/23390397", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1661607/" ]
Try the following inside your script: ``` import sys arg1 = sys.argv[1] print(arg1) ``` Note that command-line arguments arrive as strings already, so wrapping `sys.argv[1]` in `str()` is redundant.
Quote the argument only if it contains spaces or other shell metacharacters: ``` sudo python myscript.py 'my string' ``` A single word like `mystring` can be passed without quotes. Also, you shouldn't have to run it with `sudo` unless the script itself needs root privileges.
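For anything beyond a single positional value, the standard library's `argparse` handles quoting, help text, and error messages; a minimal sketch (the script and argument names are illustrative):

```python
import argparse

parser = argparse.ArgumentParser(description="Accept one string from the command line")
parser.add_argument("mystring", help="the value the script should act on")

def get_value(argv=None):
    # argv=None makes argparse read sys.argv[1:], as it would in a real script;
    # passing an explicit list makes the function easy to exercise in tests
    return parser.parse_args(argv).mystring

# simulates: python myscript.py "hello world"
print(get_value(["hello world"]))
```

Inside the script you would simply call `get_value()` with no arguments.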
2,187
57,809,780
I'm trying to convert a .tif image in python using the module skimage. It's not working properly. ``` from skimage import io img = io.imread('/content/IMG_0007_4.tif') io.imsave('/content/img.jpg', img) ``` Here is the error: ``` /usr/local/lib/python3.6/dist-packages/imageio/core/functions.py in get_writer(uri, format, mode, **kwargs) if format is None: raise ValueError( "Could not find a format to write the specified file " "in mode %r" % mode) ValueError: Could not find a format to write the specified file in mode 'i' ``` EDIT 1: A method I found to do this was to open using skimage, convert it to 8bits and then save it as png. Anyway I can't save it as .jpg ``` img = io.imread('/content/IMG_0007_4.tif',as_gray=True) img8 = (img/256).astype('uint8') matplotlib.image.imsave('/content/name.png', img8) ```
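The 8-bit down-conversion from the edit can be checked with NumPy alone; a sketch using a tiny stand-in array (no real TIFF required):

```python
import numpy as np

# stand-in for a 16-bit grayscale TIFF frame (values 0..65535)
img16 = np.array([[0, 256, 65535]], dtype=np.uint16)

# scale the 0..65535 range down to 0..255, as in the question's edit
img8 = (img16 / 256).astype('uint8')
print(img8.dtype, img8.tolist())
```

Once the array is `uint8`, savers that reject 16-bit data (such as JPEG writers) generally accept it.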
2019/09/05
[ "https://Stackoverflow.com/questions/57809780", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8229169/" ]
1. I don't think HAVING will work without GROUP. 2. I would move the having clause outside the include section and use the AS aliases. So, roughly: `group: ['id'], // and whatever else you need having : { 'documents.total_balance_due' : {$eq : 0 }}` (Making some guesses vis the aliases)
> > To filter the date from joined table which uses groupby as well, you can make use of HAVING Property, which is accepted by Sequelize. > > > So with respect to your question, I am providing the answer. You can make use of this code: ``` const Sequelize = require('sequelize'); let searchQuery = { attributes: { // include everything from business table and total_due_balance as well include: [[Sequelize.fn('SUM', Sequelize.col('documents.due_balance')), 'total_due_balance']] }, include: [ { model: Documents, // table, which you require from your defined model as: 'documents', // alias through which it is defined to join in hasMany or belongsTo Associations required: true, // make inner join attributes: [] // select nothing from Documents table, if you want to select you can pass column name as a string to array } ], group: ['business.id'], // Business is a table having: '' }; if (params.contactability === 'with_balance') { searchQuery.having = Sequelize.literal(`total_due_balance > 0`); } else if (params.contactability === 'without_balance') { searchQuery.having = Sequelize.literal(`total_due_balance = 0`); } Business // table, which you require from your defined model .findAll(searchQuery) .then(result => { console.log(result); }) .catch(err => { console.log(err); }); ``` Note : Change model name or attributes according to your requirement. Hope this will help you or somebody else!
2,188
26,290,871
How can I build a python distribution RPM that is only dependent on an *earlier* version of python? **Why?** I'm trying to build distribution RPMs for RHEL6/CentOS 6, which only includes Python 2.6, but I usually build on machines with Python 2.7. This is an open source project, and I have already ensured that it shouldn't be including any libraries/APIs that are not in 2.6. I am building the RPMs with: ``` python setup.py bdist_rpm ``` --- **setup.py file:** ``` from distutils.core import setup setup(name='pyresttest', version='0.1', description='Text', maintainer='Not listing here', maintainer_email='no,just no', url='project url here', keywords='rest web http testing', packages=['pyresttest'], license='Apache License, Version 2.0', requires=['yaml','pycurl'] ) ``` (Specifics removed for the url, maintainer, email and description). The RPM appears to be valid, but when I try to install on RHEL6, I get this error: python(abi) = 2.7 is needed by pyresttest-0.1-1.noarch There should be some way to get it to override the default python version to require, or supply a custom SPEC file, but after several hours of fiddling with it, I'm stuck. Ideas? --- EDIT: I suppose I should clarify why I'm doing an RPM for python code, instead of just using setuptools or pip: this will hopefully go to production at work, where all deployments are RPM-based and most VMs are still RHEL6. Asking them to adopt another packaging tool is likely to be a non-starter, since our company is closely tied to the RPM format.
2014/10/10
[ "https://Stackoverflow.com/questions/26290871", "https://Stackoverflow.com", "https://Stackoverflow.com/users/95122/" ]
Re-organized the answer. Actually, there's no such thing as "the rpm-package". There are rpm-packages for RHEL6, rpm-packages for Fedora NN, rpm-packages for OpenSUSE X.Y and so on. And besides there are Debian, Ubuntu, Arch and Gentoo :) You have the following possibilities with your Python package: 1. You may completely avoid rpm-, deb- and other "native Linux packaging systems", and may opt to use a "python-native" packaging system like [PIP](https://pip.pypa.io/en/1.5.X/index.html). Thus you completely avoid the complexity and incompatibility between packaging systems across the various versions and flavours of Linux. And for a package which doesn't "infiltrate" deeply into the "core system", this could be the best solution. 2. You may continue to use RPM as an archive format for your package but completely turn off automatic dependency calculation. This can be done with the `AutoReqProv: no` directive in the spec. To work with a customized spec one may use the `--spec-only` and `--spec-file` [distutils options](https://docs.python.org/2.0/dist/creating-rpms.html). But remember that a package built this way is even worse than a zip from p.1: without proper dependencies it contains less of the necessary metainformation and thus "defames" the whole idea behind Linux packaging systems, which were invented to build consistent systems, to avoid problems like "DLL hell", and to be suitable for automatic maintenance and updates. Actually you may add dependency information manually, via the `Requires: <something>` tag, but this may become even harder and more boring if you target several Linux platforms at once. 3. In order to take into account all those complex and boring details and nuances of a particular packaging system you may create "build sandboxes" with appropriate versions of the necessary Linux flavours. 
My preferred way to create such sandboxes is to use pre-created ["OpenVZ templates"](http://wiki.openvz.org/Download/template/precreated), but without OpenVZ per se: simply unpack a given archive into a subdirectory (being `root` to preserve permissions), then `chroot` into the subdirectory, and voila! You've got Debian, RHEL, etc... The Fedora people have created [Mock](http://fedoraproject.org/wiki/Projects/Mock) for the same purpose, and `Mock` is likely a more elaborate solution. As @BobMcGee suggests in the comments, one may also consider the [Jenkins Docker plugin](https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin). Once you have a build sandbox with the Python distribution, distutils etc. specific to that system, you may automate the build process with simple scripting, bash or Python. That's it.
I do not do very much python work but have done some RPM packaging. You probably need to somehow do what one would normally do in the RPM's spec file and specify and require a particular release of your python package like so ... ``` # this would be in your spec file requires: python <= 2.6 ``` Take a look here for more info: <http://ftp.rpm.org/max-rpm/s1-rpm-depend-manual-dependencies.html>
2,189
31,910,680
I installed the networking module **Scapy**. When I import scapy (`import scapy`) everything works fine. When I import all from scapy (`from scapy.all import *`), it brings up this error: ``` Traceback (most recent call last): File "/Users/***/Downloads/test.py", line 5, in <module> from scapy.all import * File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/all.py", line 16, in <module> from .arch import * File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/__init__.py", line 75, in <module> from .bsd import * File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/bsd.py", line 12, in <module> from .unix import * File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/unix.py", line 22, in <module> from .pcapdnet import * File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/pcapdnet.py", line 22, in <module> from .cdnet import * File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/cdnet.py", line 17, in <module> raise OSError("Cannot find libdnet.so") OSError: Cannot find libdnet.so ``` I found out on another post that we might have to download additional modules in order to make scapy fully work. What should be done exactly? I tried using (port \*\* install) which didn't work because port is not supported anymore? If you have any idea how to make it work in python3, I will be active. Here is some additional information: ``` python 3.4.3 mac os 10.10.4 scapy-python3==0.14 ``` EDIT: Another interesting thing is: On all OS except Linux libpcap should be installed for sending and receiving packets (not python modules - just C libraries). libdnet is recommended for sending packets; without libdnet, packets will be sent by libpcap, which is limited. 
Also, netifaces module can be used for alternative and possibly cleaner way to determine local addresses. Source: <https://pypi.python.org/pypi/scapy-python3/0.11> Dnet seems to only work with version 2.7 : <https://pypi.python.org/pypi/dnet/1.12>
2015/08/10
[ "https://Stackoverflow.com/questions/31910680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4844191/" ]
**You can now install this easily** with [Homebrew](http://brew.sh) by using the command: ``` brew install libdnet ``` after you've installed Homebrew.
**Up-to-date edit: this issue has been fixed on recent versions of scapy, simply update your scapy version using `pip install scapy>=2.4.0`** You have to install libdnet. Not the python library (which does not work on python3 as you mentioned), but the library itself. There has to be a library file libdnet.so somewhere in your system where python searches for libraries. Downloading the libdnet source, then compiling and installing it, should make it work: ``` wget http://libdnet.googlecode.com/files/libdnet-1.12.tgz tar xfz libdnet-1.12.tgz cd libdnet-1.12 ./configure make sudo make install ``` Also, there is a possibility to use libpcap for sending packets and not to use libdnet, but I recommend trying to make libdnet work first.
2,190
73,920,457
How do I get the "rest of the list" after the current element for an iterator in a loop? I have a list: `[ "a", "b", "c", "d" ]` They are not actually letters, they are words, but the letters are there for illustration, and there is no reason to expect the list to be small. For each member of the list, I need to: ``` def f(depth, list): for i in list: print(f"{depth} {i}") f(depth+1, rest_of_the_list_after_i) f(0,[ "a", "b", "c", "d" ]) ``` The desired output (with spaces for clarity) would be: ``` 0 a 1 b 2 c 3 d 2 d 1 c 2 d 1 d 0 b 1 c 2 d 1 d 0 c 1 d 0 d ``` I explored `enumerate` with little luck. The reality of the situation is that there is a `yield` terminating condition. But that's another matter. I am using (and learning with) Python 3.10. This is not homework. I'm 48 :) You could also look at it like: ``` 0 a 1 b 2 c 3 d 2 d 1 c 2 d 1 d 0 b 1 c 2 d 1 d 0 c 1 d 0 d ``` That illustrates the stream nature of the thing.
2022/10/01
[ "https://Stackoverflow.com/questions/73920457", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1783593/" ]
Seems like there are plenty of answers here, but another way to solve your given problem: ```py def f(depth, l): for idx, item in enumerate(l): step = f"{depth * ' '} {depth} {item}" print(step) f(depth + 1, l[idx + 1:]) f(0,[ "a", "b", "c", "d" ]) ```
``` def f(depth, alist): if not alist: return # base case: stop on an empty list print(f"{depth} {alist[0]}") next_depth = depth + 1 rest_list = alist[1:] f(next_depth,rest_list) ``` this doesn't seem like a very useful method though ``` def f(depth, alist): # if you actually want to iterate it for i,item in enumerate(alist): print(f"{depth} {item}") next_depth = depth + 1 rest_list = alist[i+1:] f(next_depth,rest_list) ```
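For reference, a self-contained sketch that produces exactly the depth-prefixed sequence the question asks for, by slicing off everything up to and including the current element before each recursive call:

```python
def tails_walk(depth, items, out=None):
    """Print each item at its depth, then recurse on the rest of the list."""
    if out is None:
        out = []
    for i, item in enumerate(items):
        print(f"{depth} {item}")
        out.append((depth, item))
        # recurse on everything *after* the current element
        tails_walk(depth + 1, items[i + 1:], out)
    return out

result = tails_walk(0, ["a", "b", "c", "d"])
```

The `out` list is only there to make the traversal order easy to inspect; a generator with `yield` fits the same shape if you need lazy evaluation.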
2,193
48,535,962
My data has a feature called level, and the data may have levels (-1, 0, 1, 2, 3), but my data currently has only 2 levels: 0 and -1. I'm using Python for binary classification. How do I do one-hot encoding with all levels? What is the right approach to deal with this problem? Can I include all levels, as I may expect them in test data? Or should I use only 2 levels?
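One common approach is to fix the full category list up front, so the encoding width does not depend on which levels happen to appear in the training split (scikit-learn's `OneHotEncoder` supports the same idea through its `categories` parameter); a dependency-free sketch:

```python
LEVELS = [-1, 0, 1, 2, 3]  # every level the feature may take, not just those seen in training

def one_hot(value, levels=LEVELS):
    # one indicator column per known level; genuinely unseen values fail loudly
    if value not in levels:
        raise ValueError(f"unknown level: {value!r}")
    return [1 if value == lv else 0 for lv in levels]

print(one_hot(0))
print(one_hot(-1))
```

Encoding all five levels keeps train and test matrices the same shape even when some levels only show up at prediction time.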
2018/01/31
[ "https://Stackoverflow.com/questions/48535962", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9186358/" ]
Currently it is assigning the last value, as all the parameters have the same name. You can use `[]` after the variable name; it will create a newcoach array with all the values in it. ``` $test = "newcoach[]=6&newcoach[]=11&newcoach[]=12&newcoach[]=13&newcoach[]=14"; echo '<pre>'; parse_str($test,$result); print_r($result); ``` Output: ``` Array ( [newcoach] => Array ( [0] => 6 [1] => 11 [2] => 12 [3] => 13 [4] => 14 ) ) ```
Use this function ``` function proper_parse_str($str) { # result array $arr = array(); # split on outer delimiter $pairs = explode('&', $str); # loop through each pair foreach ($pairs as $i) { # split into name and value list($name,$value) = explode('=', $i, 2); # if name already exists if( isset($arr[$name]) ) { # stick multiple values into an array if( is_array($arr[$name]) ) { $arr[$name][] = $value; } else { $arr[$name] = array($arr[$name], $value); } } # otherwise, simply stick it in a scalar else { $arr[$name] = $value; } } # return result array return $arr; } $parsed_array = proper_parse_str($newcoach); ```
2,195
39,303,710
I am new to Python and machine learning and I am trying to work out how to fix this issue with datetime. `next_unix` is 13148730, because that is how many seconds are in five months, which is the time in between my dates. I have searched and I can't seem to find anything that works. ``` last_date = df.iloc[1,0] last_unix = pd.to_datetime('2015-01-31 00:00:00') +pd.Timedelta(13148730) five_months = 13148730 next_unix = last_unix + five_months for i in forecast_set: next_date = Timestamp('2015-06-30 00:00:00') next_unix += 13148730 df.loc[next_date] = [np.nan for _ in range(len(df.columns)-1)]+[i] ``` Error: ``` Traceback (most recent call last): File "<ipython-input-23-18adaa6b781f>", line 1, in <module> runfile('C:/Users/HP/Documents/machine learning.py', wdir='C:/Users/HP/Documents') File "C:\Users\HP\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 714, in runfile execfile(filename, namespace) File "C:\Users\HP\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 89, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/Users/HP/Documents/machine learning.py", line 74, in <module> next_unix = last_unix + five_months File "pandas\tslib.pyx", line 1025, in pandas.tslib._Timestamp.__add__ (pandas\tslib.c:20118) ValueError: Cannot add integral value to Timestamp without offset. ```
2016/09/03
[ "https://Stackoverflow.com/questions/39303710", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2770803/" ]
If my understanding is correct then you can get desired result with the following: ``` SELECT i.*, CASE WHEN prop1.PROPERTY_ID = 1 THEN prop1.VALUE ELSE '' END AS PROPERTY_ONE, CASE WHEN prop1.PROPERTY_ID = 2 THEN prop1.VALUE ELSE '' END AS PROPERTY_TWO FROM ITEM i LEFT JOIN ITEM_PROPERTY prop1 on i.ITEM_ID = prop1.ITEM_D AND prop1.PROPERTY_ID IN (1, 2) ```
``` Select i.*, GROUP_CONCAT(prop.VALUE) as PROPERTY_VALUE From ITEM i Left Join ITEM_PROPERTY prop on i.ITEM_ID = prop.ITEM_D ```
2,198
50,693,966
I have a directory containing many images (\*.jpg). Each image has a name. In the same directory I have a file containing Python code (below). ``` import numpy as np import pandas as pd import glob fd = open('melanoma.csv', 'a') for img in glob.glob('*.jpg'): dataFrame = pd.read_csv('allcsv.csv') name = dataFrame['name'] for i in name: #print(i) if(i+'.jpg' == img): print(i) ``` In the same directory I have another file (allcsv.csv) containing a large amount of CSV data for all images in the directory and many other images as well. The above code compares the names of the images with the name column in the allcsv.csv file and prints the names. I need to modify this code so that the full data row of each matched image is written to a file named 'melanoma.csv'. eg: **allcsv.csv** ``` name,age,sex ISIC_001,85,female ISIC_002,40,female ISIC_003,30,male ISIC_004,70,female ``` *if the folder has the images only for ISIC\_002 and ISIC\_003* **melanoma.csv** ``` name,age,sex ISIC_002,40,female ISIC_003,30,male ```
2018/06/05
[ "https://Stackoverflow.com/questions/50693966", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6612871/" ]
First, your code reads the .csv file once for every image. Second, you have a nested `for`-loop. Neither is ideal. I recommend the following approach: **Step 1 - Create list of image file names** ``` import glob image_names = [f.replace('.jpg', '') for f in glob.glob("*.jpg")] ``` **Step 2 - Create dataframe with patient names** ``` import pandas as pd df_patients = pd.read_csv('allcsv.csv') ``` **Step 3 - Filter matched patients and dump to csv** ``` df_sick = df_patients[df_patients['name'].isin(image_names)] df_sick.to_csv('melanoma.csv', index = False) ``` **Step 4 - Print names of sick patients** ``` for _, row in df_sick.iterrows(): print(row['name'], 'has cancer') ```
This is just a solution for storing the matched values to a new file melanoma.csv. Your code can be further improved and optimized. ``` import pandas as pd import glob # read the csv once instead of once per image dataFrame = pd.read_csv('allcsv.csv') matched_rows = [] for img in glob.glob('*.jpg'): for _, row in dataFrame.iterrows(): if row['name'] + '.jpg' == img: matched_rows.append(row) # convert the collected rows to a dataframe df = pd.DataFrame(matched_rows) # save dataframe to csv df.to_csv('--file path--/melanoma.csv', index=False) ```
2,202
39,771,366
I am a beginner in python. However, I have some problems when I try to use the readline() method. ``` f=raw_input("filename> ") a=open(f) print a.read() print a.readline() print a.readline() print a.readline() ``` and my txt file is ``` aaaaaaaaa bbbbbbbbb ccccccccc ``` However, when I tried to run it on a Mac terminal, I got this: ``` aaaaaaaaa bbbbbbbbb ccccccccc ``` It seems that readline() is not working at all. But when I disable print a.read(), the readline() gets back to work. This confuses me a lot. Is there any solution where I can use read() and readline() at the same time?
2016/09/29
[ "https://Stackoverflow.com/questions/39771366", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6899656/" ]
When you open a file you get a pointer to some place of the file (by default: the begining). Now whenever you run `.read()` or `.readline()` this pointer moves: 1. `.read()` reads until the end of the file and moves the pointer to the end (thus further calls to any reading gives nothing) 2. `.readline()` reads until newline is seen and sets the pointer after it 3. `.read(X)` reads X bytes and sets the pointer at `CURRENT_LOCATION + X` (or the end) If you wish you can manually move that pointer by issuing `a.seek(X)` call where `X` is a place in file (seen as an array of bytes). For example this should give you the desired output: ``` print a.read() a.seek(0) print a.readline() print a.readline() print a.readline() ```
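The pointer behaviour described above is easy to see with an in-memory file object, which supports the same `read`/`readline`/`seek` interface as a file opened from disk:

```python
import io

# stand-in for the question's three-line text file
f = io.StringIO("aaaaaaaaa\nbbbbbbbbb\nccccccccc\n")

whole = f.read()       # consumes the entire file; the pointer is now at the end
empty = f.readline()   # nothing left, so this returns ''
f.seek(0)              # move the pointer back to the start
first = f.readline()   # line-by-line reading works again
print(repr(first))
```

Swapping `io.StringIO(...)` for `open(filename)` gives identical behaviour on a real file.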
You need to understand the concept of file pointers. When you read the file, it is fully consumed, and the pointer is at the end of the file. > > It seems that the readline() is not working at all. > > > It is working as expected. There are no lines to read. > > when I disable print a.read(), the readline() gets back to work. > > > Because the pointer is at the beginning of the file, and the lines can be read > > Is there any solution that I can use read() and readline() at the same time? > > > Sure. Flip the ordering of reading a few lines, then the remainder of the file, or seek the file pointer back to a position that you would like. Also, don't forget to close the file when you are finished reading it
2,203
883,313
On a django site, I want to generate an excel file based on some data in the database. I'm thinking of using [xlwt](http://pypi.python.org/pypi/xlwt), but it only has a method to save the data to a file. How can get the file to the HttpResponse object? Or maybe do you know a better library? I've also found this [snippet](http://www.djangosnippets.org/snippets/1151/) but it doesn't do what I need. All I want is a way to get the stream from the xlwt object to the response object (without writing to a temporary file)
2009/05/19
[ "https://Stackoverflow.com/questions/883313", "https://Stackoverflow.com", "https://Stackoverflow.com/users/92763/" ]
You can save your XLS file to a [StringIO](http://docs.python.org/library/stringio.html) object, which is file-like. You can return the StringIO object's `getvalue()` in the response. Be sure to add headers to mark it as a downloadable spreadsheet.
If your data result doesn't need formulas or exact presentation styles, you can always use CSV. Any spreadsheet program can read it directly. I've even seen some webapps that generate CSV but name it .xls just to be sure that Excel opens it.
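The in-memory pattern from the previous answer works for the CSV route too, using only the standard library (the Django response wiring and headers are omitted here):

```python
import csv
import io

# build the spreadsheet-readable payload entirely in memory
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["name", "age"])
writer.writerow(["Alice", 30])

# this string is what you would place in the HTTP response body
payload = buf.getvalue()
print(payload.splitlines())
```

In a Django view you would return this string with a `text/csv` content type and a `Content-Disposition: attachment` header.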
2,206
14,484,386
I'm interrogating a nested dictionary using the dict.get('keyword') method. Currently my syntax is... ``` M = cursor_object_results_of_db_query for m in M: X = m.get("gparents").get("parent").get("child") for x in X: y = x.get("key") ``` However, sometimes one of the "parent" or "child" tags doesn't exist, and my script fails. I know using `get()` I can include a default in case the key doesn't exist, of the form... ``` get("parent", '') or get("parent", 'orphan') ``` But if I include any `Null`, `''`, or empty default I can think of, the chained `.get("child")` fails when called on `''.get("child")`, since `""` has no method `.get()`. The way I'm solving this now is by using a bunch of sequential `try-except` blocks around each `.get("")` call, but that seems foolish and unpythonic. Is there a way to default return `"skip"` or `"pass"` or something that would still support chaining and fail intelligently, rather than deep-diving into keys that don't exist? Ideally, I'd like this to be a list comprehension of the form: ``` [m.get("gparents").get("parent").get("child") for m in M] ``` but this is currently impossible when an absent parent causes the `.get("child")` call to terminate my program.
2013/01/23
[ "https://Stackoverflow.com/questions/14484386", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1052117/" ]
Since these are all python `dict`s and you are calling the `dict.get()` method on them, you can use an empty `dict` to chain: ``` [m.get("gparents", {}).get("parent", {}).get("child") for m in M] ``` By leaving off the default for the last `.get()` you fall back to `None`. Now, if any of the intermediary keys is not found, the rest of the chain will use empty dictionaries to look things up, terminating in `.get('child')` returning `None`.
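A quick check of the chained-defaults idea above, including records where the intermediate keys are missing:

```python
# three records: a full chain, a missing "parent", and a missing "child"
M = [
    {"gparents": {"parent": {"child": "found"}}},
    {"gparents": {}},
    {"gparents": {"parent": {}}},
]

# missing intermediate keys fall back to {} so the chain never raises
children = [m.get("gparents", {}).get("parent", {}).get("child") for m in M]
print(children)
```

Records with incomplete nesting simply yield `None` instead of raising `AttributeError`.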
Another approach is to recognize that if the key isn't found, `dict.get` returns `None`. However, `None` doesn't have an attribute `.get`, so it will throw an `AttributeError`: ``` for m in M: try: X = m.get("gparents").get("parent").get("child") except AttributeError: continue for x in X: y = x.get("key") #do something with `y` probably??? ``` Just like Martijn's answer, this doesn't guarantee that `X` is iterable (non-`None`). Although, you could fix that by making the last `get` in the chain default to returning an empty list: ``` try: X = m.get("gparents").get("parent").get("child",[]) except AttributeError: continue ``` --- Finally, I think that probably the best solution to this problem is to use `reduce` (in Python 3, import it with `from functools import reduce`): ``` try: X = reduce(dict.__getitem__,["gparents","parent","child"],m) except (KeyError,TypeError): pass else: for x in X: #do something with x ``` The advantage here is that you know if any of the `get`s failed based on the type of exception that was raised. If an intermediate value has the wrong type, you get a `TypeError`. If the dictionary doesn't have the key, it raises a `KeyError`. You can handle those separately or together. Whatever works best for your use case.
2,216
20,375,954
I have a large collection of images which I'm trying to sort according to quality by crowd-sourcing. Images can be assigned 1, 2, 3, 4, or 5 stars according to how much the user likes them. A 5-star image would be very visually appealing; a 1-star image might be blurry and out of focus. At first I created a page showing an image with the option to rate it directly by choosing 1-5 stars. But it was too time-consuming to do this. I'd like to try to create an interface where the user is presented with 2 images side by side and asked to click the image s/he likes more. Using this comparison data of one image compared to another, is there then some way to convert it over to a score of 1-5? What kind of algorithm would allow me to globally rank images by comparing them only to each other, and how could I implement it in python?
2013/12/04
[ "https://Stackoverflow.com/questions/20375954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/216605/" ]
Sounds like you need a ranking algorithm similar to what is used in sport to rank players. Think of the comparison of two images as a match and the one the user selects as the better one is the winner of the match. After some time, many players have played many matches and sometimes against the same person. They win some they lose some eh? How do you rank which is the best overall? You can look at the [Elo Rating System](http://en.wikipedia.org/wiki/Elo_rating_system). which is used in chess to rank chess players. There is an algorithm specified so it should be a matter of implementing in your language of choice.
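A minimal sketch of the Elo update described above, treating each side-by-side click as one match (the K-factor and starting rating are conventional choices, not requirements); ratings can later be bucketed into the 1-5 star scale:

```python
K = 32  # how strongly a single comparison moves the ratings

def expected(r_a, r_b):
    # probability that A beats B under the Elo model
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(winner, loser):
    # shift rating toward the winner by K times the "surprise" of the result
    e = expected(winner, loser)
    return winner + K * (1 - e), loser - K * (1 - e)

a, b = 1500.0, 1500.0   # both images start unrated
a, b = update(a, b)     # the user clicked image A
print(a, b)
```

After many comparisons, sorting images by rating and splitting the range into five quantiles gives the star buckets.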
Let each image start with a ranking of 3 (the mean of 1 … 5), then for each comparison (which wasn't equal) lower the rank of the loser image and increase the rank of the winner image. I propose to simply *count* the +1s and the -1s, so that you have a number of wins and a number of losses for each image. Then the value 1 … 5 could be calculated as: ``` import math def rank(wins, losses): return 3 + 4 * math.atan(wins - losses) / math.pi ``` This will rank images higher and higher with each win, but it will lead to the silly situation that (+1010 / -1000) will be ranked alike with (+10 / -0) which is not desirable. One can remedy this flaw by using a mean of the values: ``` def rank(wins, losses): return (3 + 4 * math.atan((wins - losses) / (wins + losses) * 10) / math.pi if wins + losses > 0 else 3) ``` Both curves will never *quite* reach 1 or 5, but they will come ever closer if an image always wins or always loses.
2,219
51,865,923
I have been trying out DroneKit Python and have been working with some of the examples provided. Having gained some working knowledge of DroneKit, I have started writing some Python code to perform a single mission. My only problem is that the start location for my missions is always defaulting to `Lat = -35.3632605, Lon = 149.1652287` - even though I have set the home location to the following: ``` start_location = LocationGlobal(51.945102, -2.074558, 10) vehicle.home_location = start_location ``` Is there something else in the API I need to do in order to set the start location of the drone in the simulation environment?
2018/08/15
[ "https://Stackoverflow.com/questions/51865923", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10231182/" ]
If you really don't want the items to wrap, you can use `@media` queries to change the `flex-direction` of your `.quizlist` class to `column`. ```css input[type="radio"] { display: none; } input[type="radio"]:checked+.quizlabel { border: 2px solid #0052e7; transition: .1s; background-color: #0052e7; box-shadow: 2px 2px 3px #c8c8c8; color: #fff; } input[type="radio"]:hover+.quizlabel { border: 2px solid #0052e7; } .quizlabel { padding: 5px; margin: 5px; border: 2px solid #484848; color: #000; font-family: sans-serif; font-size: 14px; } .quizform>div { padding: 10px; margin-top: 10px; } .quizlabel:first-of-type { margin-left: 0; } .quizform { padding: 10px; font-family: sans-serif; } .quizform p { margin: 2px; font-weight: Bold; } .quizrow:nth-of-type(odd) { background-color: #e2e3e5; } .quizlist { display: flex; justify-content: flex-start; flex-direction: row; flex-wrap: nowrap; margin: 0; padding: 0; } @media (max-width: 500px){ .quizlist { display: flex; flex-direction: column; flex-wrap: nowrap; text-align: center; } } #result_div { font-family: sans-serif; color: #000; border: 3px solid #000; padding: 10px; } #result_div p { color: #000; } .quiz-submit { font-family: sans-serif; color: #fff; background-color: #000; padding: 10px; cursor: pointer; } .quiz-submit:hover { background-color: #0052e7; } ``` ```html <form name="quizform" class="quizform"> <div class="quizrow"> <p>Q1</p> <div class="quizlist"> <input type="radio" name="q1" value="1" id="q1-1"><label for="q1-1" class="quizlabel"><span>Strongly Disagree</span></label> <input type="radio" name="q1" value="2" id="q1-2"><label for="q1-2" class="quizlabel"><span>Disagree</span></label> <input type="radio" name="q1" value="3" id="q1-3"><label for="q1-3" class="quizlabel">Neutral</label> <input type="radio" name="q1" value="4" id="q1-4"><label for="q1-4" class="quizlabel">Agree</label> <input type="radio" name="q1" value="5" id="q1-5"><label for="q1-5" class="quizlabel">Strongly Agree</label> </div> </div> <div class="quizrow"> 
<p>Q2</p> <div class="quizlist"> <input type="radio" name="q2" value="1" id="q2-1"><label for="q2-1" class="quizlabel"><span>Strongly Disagree</span></label> <input type="radio" name="q2" value="2" id="q2-2"><label for="q2-2" class="quizlabel"><span>Disagree</span></label> <input type="radio" name="q2" value="3" id="q2-3"><label for="q2-3" class="quizlabel">Neutral</label> <input type="radio" name="q2" value="4" id="q2-4"><label for="q2-4" class="quizlabel">Agree</label> <input type="radio" name="q2" value="5" id="q2-5"><label for="q2-5" class="quizlabel">Strongly Agree</label> </div> </div> <p></p> <button type="submit" class="quiz-submit">Submit</button> <div>&nbsp;</div> <div>&nbsp;</div> <div id="result_div" style="display:none;"> <p id="result_text"></p> </div> </form> ```
I think this is what you're aiming for? The boxes weren't getting smaller because of the text inside of them, so you needed to add `flex-wrap:wrap;` to the `.quizlist` so that way they would go onto the next row. You also needed to add a `flex` and `flex-grow` to specify the widths you want them to flex to. If you don't want them to increase widths to match that of the screen size, then remove the `flex-grow`. ```css input[type="radio"] { display: none; } input[type="radio"]:checked+.quizlabel { border: 2px solid #0052e7; transition: .1s; background-color: #0052e7; box-shadow: 2px 2px 3px #c8c8c8; color: #fff; } input[type="radio"]:hover+.quizlabel { border: 2px solid #0052e7; } .quizlabel { padding: 5px; margin: 5px; border: 2px solid #484848; color: #000; font-family: sans-serif; font-size: 14px; flex: 0 0 5%; flex-grow: 1; } .quizform>div { padding: 10px; margin-top: 10px; } .quizlabel:first-of-type { margin-left: 0; } .quizform { padding: 10px; font-family: sans-serif; } .quizform p { margin: 2px; font-weight: Bold; } .quizrow:nth-of-type(odd) { background-color: #e2e3e5; } .quizlist { display: flex; justify-content: flex-start; flex-direction: row; flex-wrap: wrap; margin: 0; padding: 0; } #result_div { font-family: sans-serif; color: #000; border: 3px solid #000; padding: 10px; } #result_div p { color: #000; } .quiz-submit { font-family: sans-serif; color: #fff; background-color: #000; padding: 10px; cursor: pointer; } .quiz-submit:hover { background-color: #0052e7; } ``` ```html <form name="quizform" class="quizform"> <div class="quizrow"> <p>Q1</p> <div class="quizlist"> <input type="radio" name="q1" value="1" id="q1-1"><label for="q1-1" class="quizlabel"><span>Strongly Disagree</span></label> <input type="radio" name="q1" value="2" id="q1-2"><label for="q1-2" class="quizlabel"><span>Disagree</span></label> <input type="radio" name="q1" value="3" id="q1-3"><label for="q1-3" class="quizlabel">Neutral</label> <input type="radio" name="q1" value="4" 
id="q1-4"><label for="q1-4" class="quizlabel">Agree</label> <input type="radio" name="q1" value="5" id="q1-5"><label for="q1-5" class="quizlabel">Strongly Agree</label> </div> </div> <div class="quizrow"> <p>Q2</p> <div class="quizlist"> <input type="radio" name="q2" value="1" id="q2-1"><label for="q2-1" class="quizlabel"><span>Strongly Disagree</span></label> <input type="radio" name="q2" value="2" id="q2-2"><label for="q2-2" class="quizlabel"><span>Disagree</span></label> <input type="radio" name="q2" value="3" id="q2-3"><label for="q2-3" class="quizlabel">Neutral</label> <input type="radio" name="q2" value="4" id="q2-4"><label for="q2-4" class="quizlabel">Agree</label> <input type="radio" name="q2" value="5" id="q2-5"><label for="q2-5" class="quizlabel">Strongly Agree</label> </div> </div> <p></p> <button type="submit" class="quiz-submit">Submit</button> </form> ```
2,222
71,949,010
After I installed Google Cloud SDK on my computer, I open the terminal and type "gcloud --version" but it says "python was not found". Note: I unchecked the box saying "Install python bundle" when I installed Google Cloud SDK because I already have Python 3.10.2 installed. So, how do I fix this? Thanks in advance.
2022/04/21
[ "https://Stackoverflow.com/questions/71949010", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17138122/" ]
As mentioned in the [document](https://cloud.google.com/sdk/docs/install-sdk#windows): > > Cloud SDK requires Python; supported versions are Python 3 (preferred, > 3.5 to 3.8) and Python 2 (2.7.9 or later). By default, the Windows version of Cloud SDK comes bundled with Python 3 and Python 2. To use > Cloud SDK, your operating system must be able to run a supported > version of Python. > > > As suggested by @John Hanley, the CLI cannot find the Python that is already installed. Try reinstalling the CLI and selecting **install Python bundle**. If you are still facing the issue, another workaround is to try a Python 2.x version. You can follow the steps below: 1. Uninstall all Python versions 3 and above. 2. Install a Python 2.x version (I have installed 2.7.17). 3. Create the environment variable CLOUDSDK\_PYTHON and set its value to C:\Python27\python.exe. 4. Run GoogleCloudSDKInstaller.exe again.
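For step 3, the environment variable can also be set from a Windows command prompt (the interpreter path below is just an example - use wherever your Python 2 actually lives):

```
REM Persist CLOUDSDK_PYTHON for future shells, then re-run the installer.
setx CLOUDSDK_PYTHON "C:\Python27\python.exe"
```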
On ubuntu Linux, you can define this variable in the `.bashrc` file: ```bash export CLOUDSDK_PYTHON=/usr/bin/python3 ```
2,223
15,866,765
What is the recommended library for web client programming which involves HTTP requests. I know there is a package called [HTTP](https://github.com/haskell/HTTP) but it doesn't seem to support HTTPS. Is there any better library for it ? I expect a library with functionality something like [this](http://docs.python-requests.org/en/latest/) for Haskell.
2013/04/07
[ "https://Stackoverflow.com/questions/15866765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1651941/" ]
[`Network.HTTP.Conduit`](http://hackage.haskell.org/package/http-conduit) has a clean API (it uses [`Network.HTTP.Types`](http://hackage.haskell.org/package/http-types)) and is quite simple to use if you know a bit about conduits. Example: ```hs {-# LANGUAGE OverloadedStrings #-} module Main where import Data.Conduit import Network.HTTP.Conduit import qualified Data.Aeson as J main = do manager <- newManager def initReq <- parseUrl "https://api.github.com/user" let req = applyBasicAuth "niklasb" "password" initReq resp <- runResourceT $ httpLbs req manager print (responseStatus resp) print (lookup "content-type" (responseHeaders resp)) -- you will probably want a proper FromJSON instance here, -- rather than decoding to Data.Aeson.Object print (J.decode (responseBody resp) :: Maybe J.Object) ``` Also make sure to [consult the tutorial](https://haskell-lang.org/library/http-client).
In addition to `Network.HTTP.Conduit` there [`Network.Http.Client`](http://hackage.haskell.org/package/http-streams) which exposes an [`io-streams`](http://hackage.haskell.org/package/io-streams-1.0.1.0) interface.
2,224
11,923,645
Let's say I have this code: ``` def dosomething(thing1, thing2='hello', thing3='world'): print thing1 print thing2 print thing3 ``` I would like to be able to specify what thing3 is, but without having to say what thing2 is. (The code below is how I thought it might work...) ``` dosomething("This says 'hello fail!'", , 'fail!') ``` and it would say ``` This says 'hello fail!' hello fail! ``` So is there a way to do it like that, or would I have to specify `thing2` every time I wanted to say what `thing3` was? I am using Python 2, if that matters.
2012/08/12
[ "https://Stackoverflow.com/questions/11923645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1542540/" ]
Use keyword arguments ``` dosomething("This says 'hello fail!'", thing3='fail!') ```
Yes, you can: ``` dosomething("This says 'hello fail!'", thing3 = 'fail!') ```
2,230
32,550,447
If I have a set of integers which denote the values that a list element can take, and a python list of a given length, I want to fill the list with all possible combinations. **example** > > list `length=3` and the `my_set ={1,-1}` > > > **Possible combinations** ``` [1,1,1],[1,1,-1],[1,-1,1],[1,-1,-1], [-1,1,1],[-1,1,-1],[-1,-1,1],[-1,-1,-1] ``` I tried approaching this with the random.sample method from the random module but it doesn't help. I did: ``` my_set=[1,-1] from random import sample as sm print sm(my_set,1) #Outputs: -1,-1,1,1 and so on..(random) print sm(my_set,length_I_require) #Outputs**:Error ```
2015/09/13
[ "https://Stackoverflow.com/questions/32550447", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4355529/" ]
That's what `itertools.product` is for: ``` >>> from itertools import product >>> list(product({1,-1},repeat=3)) [(1, 1, 1), (1, 1, -1), (1, -1, 1), (1, -1, -1), (-1, 1, 1), (-1, 1, -1), (-1, -1, 1), (-1, -1, -1)] >>> ``` And if you want the result as lists rather than tuples, you can use `map` to convert them (in Python 2 `map` returns a list; in Python 3 it returns an iterator, so a list comprehension is the more natural choice there): ``` >>> map(list,product({1,-1},repeat=3)) [[1, 1, 1], [1, 1, -1], [1, -1, 1], [1, -1, -1], [-1, 1, 1], [-1, 1, -1], [-1, -1, 1], [-1, -1, -1]] ``` In Python 3: ``` >>> [list(pro) for pro in product({1,-1},repeat=3)] [[1, 1, 1], [1, 1, -1], [1, -1, 1], [1, -1, -1], [-1, 1, 1], [-1, 1, -1], [-1, -1, 1], [-1, -1, -1]] >>> ```
Use the [`itertools.product()` function](https://docs.python.org/3/library/itertools.html#itertools.product): ``` from itertools import product result = [list(combo) for combo in product(my_set, repeat=length)] ``` The `list()` call is optional; if tuples instead of lists are fine too, then `result = list(product(my_set, repeat=length))` suffices. Demo: ``` >>> from itertools import product >>> length = 3 >>> my_set = {1, -1} >>> list(product(my_set, repeat=length)) [(1, 1, 1), (1, 1, -1), (1, -1, 1), (1, -1, -1), (-1, 1, 1), (-1, 1, -1), (-1, -1, 1), (-1, -1, -1)] >>> [list(combo) for combo in product(my_set, repeat=length)] [[1, 1, 1], [1, 1, -1], [1, -1, 1], [1, -1, -1], [-1, 1, 1], [-1, 1, -1], [-1, -1, 1], [-1, -1, -1]] ``` `random.sample()` gives you a random subset of the given input sequence; it doesn't produce all possible combinations of values.
2,231
64,087,848
I'm trying to check how many times some value repeats in a row, but I ran into a problem where my code leaves the last number unchecked. ``` Ai = input() arr = [int(x) for x in Ai.split()] c = 0 frozen_num = arr[0] for i in range(0,len(arr)): print(arr) if frozen_num == arr[0]: arr.remove(arr[0]) c+=1 else: frozen_num = arr[0] print(c) ``` So let's say I enter: 1 1 1 1 5 5 My code will give an output 5 and not 6. I hope you understand what I'm saying. I'm pretty new to Python, and this code is not finished - later the counts will be appended so I get the output: [4, 2], because "1" repeats 4 times and "5" 2 times. Edited - I accidentally wrote 6 and 7 and not 5 and 6.
2020/09/27
[ "https://Stackoverflow.com/questions/64087848", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12733326/" ]
You could use `Counter` from the `collections` module to count all the occurrences of the different numbers. ``` from collections import Counter arr = list(Counter(input().split()).values()) print(arr) ``` Output with an input of `1 1 1 1 5 5`: ``` 1 1 1 1 5 5 [4, 2] ```
If you want to stick with your method and not use external libraries, you can add an if statement that detects when you reach the last element of your array and process it differently from the others: ``` Ai=input() arr = [int(x) for x in Ai.split()] L=[] c = 0 frozen_num = arr[0] for i in range(0, len(arr)+1): print(arr) if len(arr)==1: #If we reached the end of the array if frozen_num == arr[0]: #if the last element of arr is the same as the previous one c+=1 L.append(c) else: #if the last element is different, just append 1 to the end of the list L.append(c) L.append(1) elif frozen_num == arr[0]: arr.remove(arr[0]) c += 1 else: L.append(c) c=0 frozen_num = arr[0] print(L) ``` input ``` [5,5,5,6,6,1] ``` output ``` [3,2,1] ```
2,234
49,813,481
I am trying to fit some data that I have using scipy.optimize.curve\_fit. My fit function is: ``` def fitfun(x, a): return np.exp(a*(x - b)) ``` What i want is to define `a` as the fitting parameter, and `b` as a parameter that changes depending on the data I want to fit. This means that for one set of data I would want to fit the function: `np.exp(a*(x - 10))` while for another set I would like to fit the function `np.exp(a*(x - 20))`. In principle, I would like the parameter b to be passed in as any value. The way I am currently calling curve\_fit is: ``` coeffs, coeffs_cov = curve_fit(fitfun, xdata, ydata) ``` But what I would like would be something like this: ``` b=10 coeffs, coeffs_cov = curve_fit(fitfun(b), xdata, ydata) b=20 coeffs2, coeffs_cov2 = curve_fit(fitfun(b), xdata, ydata) ``` So that I get the coefficient a for both cases (b=10 and b=20). I am new to python so I cannot make it work, even though I have tried to read the documentation. Any help would be greatly appreciated.
2018/04/13
[ "https://Stackoverflow.com/questions/49813481", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7654219/" ]
I don't know if this is the "proper" way of doing things, but I usually wrap my function in a class, so that I can access parameters from `self`. Your example would then look like: ``` class fitClass: def __init__(self): pass def fitfun(self, x, a): return np.exp(a*(x - self.b)) inst = fitClass() inst.b = 10 coeffs, coeffs_cov = curve_fit(inst.fitfun, xdata, ydata) inst.b = 20 coeffs, coeffs_cov = curve_fit(inst.fitfun, xdata, ydata) ``` This approach avoids using global parameters, which are [generally considered evil](http://www.learncpp.com/cpp-tutorial/4-2a-why-global-variables-are-evil/).
You can define `b` as a global variable inside the fit function. ``` from scipy.optimize import curve_fit def fitfun(x, a): global b return np.exp(a*(x - b)) xdata = np.arange(10) #first sample data set ydata = np.exp(2 * (xdata - 10)) b = 10 coeffs, coeffs_cov = curve_fit(fitfun, xdata, ydata) print(coeffs) #second sample data set ydata = np.exp(5 * (xdata - 20)) b = 20 coeffs, coeffs_cov = curve_fit(fitfun, xdata, ydata) print(coeffs) ``` Output: ``` [2.] [5.] ```
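A third way to freeze `b` - avoiding both a global and a wrapper class - is to close over it with a `lambda`, so that `curve_fit` only sees `a` as a free parameter. A sketch using the same sample data as above:

```python
import numpy as np
from scipy.optimize import curve_fit

def fitfun(x, a, b):
    return np.exp(a * (x - b))

xdata = np.arange(10)
ydata = np.exp(2.0 * (xdata - 10.0))  # generated with a = 2, b = 10

b = 10
# The lambda fixes b; curve_fit introspects its signature and fits only a.
coeffs, coeffs_cov = curve_fit(lambda x, a: fitfun(x, a, b), xdata, ydata)
print(coeffs)  # -> [2.]
```

To fit with a different `b`, just rebind it (or build a fresh lambda) before the next `curve_fit` call.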
2,235
63,153,688
I edited this post so that I could give more info about the goal I am trying to achieve. Basically I want to be able to open VSCode in a directory that I can input inside a Python file I am running through a shell command I created. So what I need is for the Python file to ask me for the name of the folder I want to open, and pass that information to the terminal so that it can then cd into that folder and open VSCode automatically. I tried with os.system(), which is, as I read, one of the ways I can achieve that goal. The problem is that if I use standard commands like os.system('date') or os.system('code') it works without any problem. If I try to use os.system('cd /directory/') nothing happens. As suggested I also tried `subprocess.call(["cd", "/home/simon/Desktop"])` but the terminal gives me the error: `FileNotFoundError: [Errno 2] No such file or directory: 'cd'` I am going to include both the python file: ``` import os, subprocess PATH = "/home/simon/Linux_Storage/Projects" def main(): print("\n") print("********************") for folder in os.listdir(PATH): print(folder) print("********************") project = input("Choose project: ") print("\n") folders = os.listdir(PATH) while project: if project in folders: break else: print("Project doesn't exist.") project = input("Choose project: ") os.system(f"cd /home/simon/Linux_Storage/Projects/{project}") if __name__ == "__main__": main() ``` and the shell script (maybe I should change something here): ``` function open() { python3 .open.py code . } ```
2020/07/29
[ "https://Stackoverflow.com/questions/63153688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12288571/" ]
Store dataValue in some variable and use an expectation to wait for your closure to execute, then test. Note: This example was written in Swift 4 ``` let yourExpectationName = expectation(description: "xyz") var dataToAssert = [String]() //replace with your data type sut.apiSuccessClouser = { dataValue in dataToAssert = dataValue yourExpectationName.fulfill() } waitForExpectations(timeout: 3) { (error) in //specify wait time in seconds XCTAssertFalse(dataToAssert.isEmpty) } ```
apiSuccessClouser in MockApiService is a property of closure type `(()->Void?)?`. In the line `sut.apiSuccessClouser = { ... }` you assign the property apiSuccessClouser a closure, but you never invoke that closure, so the `print("apiSuccessClouser")` is never executed. To execute the `print("apiSuccessClouser")` you need to call the closure ``` sut.apiSuccessClouser?() ``` So refactor the test like: ``` func test_fetch_photo() { sut.apiSuccessClouser = { dataValue in print("apiSuccessClouser") // This does not execute until the closure is called XCTAssert(dataValue) } sut.apiSuccessClouser?() } ``` for more info: <https://docs.swift.org/swift-book/LanguageGuide/Closures.html>
2,243
54,060,243
Hi, ultimately I'm trying to install Django on my computer, but I'm unable to do this, as when I run pip in the command line I get the following error message: `''pip' is not recognized as an internal or external command, operable program or batch file.'` I've added the following locations to my path environment: '`C:\Python37-32;C:\Python37-32\Lib\site-packages;C:\Python37-32\Scripts'` I've also tried to reinstall pip using 'py -3.7 -m ensurepip -U --default-pip', but then I get the following error message: `'Requirement already up-to-date: setuptools in c:\users\tom_p\anaconda3\lib\site-packages (40.6.3) Requirement already up-to-date: pip in c:\users\tom_p\anaconda3\lib\site-packages (18.1) spyder 3.3.2 requires pyqt5<5.10; python_version >= "3", which is not installed. xlwings 0.15.1 has requirement pywin32>=224, but you'll have pywin32 223 which is incompatible.'` I'm new to this so I'm struggling with the install, and I'm confused by the fact that pip is in C:\Python37-32\Scripts but the above error seems to be looking in the Anaconda folder. The only reason I installed Anaconda was to use the Spyder IDE. I've installed Python 3.7 32-bit on my Windows 10; any help would be much appreciated. Thanks
2019/01/06
[ "https://Stackoverflow.com/questions/54060243", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9815902/" ]
You can put a conditional expression on a single item update to make the update fail if the condition is not met. However it will not fail an entire batch, just the single update. The batch update response would contain information on which updates succeeded and which failed
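With boto3, the arguments for such a single-item conditional update look roughly like the sketch below (the key, attribute names, and values are all hypothetical; the dict would be passed as `table.update_item(**args)`, and an unmet condition raises a `ConditionalCheckFailedException` for that one item only):

```python
def conditional_update_args(key, attr, expected, new):
    """Arguments for a boto3 Table.update_item call that only applies
    while the stored value still matches `expected`."""
    return {
        "Key": key,
        "UpdateExpression": "SET #a = :new",
        "ConditionExpression": "#a = :expected",  # unmet -> this update fails
        "ExpressionAttributeNames": {"#a": attr},
        "ExpressionAttributeValues": {":new": new, ":expected": expected},
    }

args = conditional_update_args({"Id": "SampleId"}, "status", "pending", "done")
```

In a batch, you would inspect the per-item responses to see which conditional updates went through and which were rejected.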
It's possible to do it by using a conditional expression as the filter expression. But please don't do it. DynamoDB is a key-value NoSQL store. It means that you can get the right data by keys only. If you do the filter, it will loop through a lot of records and slow down your app. You can check this article: [5 things that you should know about DynamoDB](https://problemlover.com/5-things-that-you-should-know-before-using-dynamodb-for-your-project/) So when you CRUD the data, the recommended way to interact with the data is by key. I can translate it to pseudo code like this: ``` GET: SELECT * FROM THE TABLE WHERE Id ='SampleId' UPDATE: UPDATE THE ITEM WHERE Id = 'SampleId' DELETE: DELETE THE ITEM WHERE Id = 'SampleId' ``` To satisfy your needs, you need to use Elasticsearch to get the right items; after that you can update the data by key.
2,245
98,687
I have developed some custom DAO-like classes to meet some very specialized requirements for my project that is a server-side process that does not run inside any kind of framework. The solution works great except that every time a new request is made, I open a new connection via MySQLdb.connect. What is the best "drop in" solution to switch this over to using connection pooling in python? I am imagining something like the commons DBCP solution for Java. The process is long running and has many threads that need to make requests, but not all at the same time... specifically they do quite a lot of work before brief bursts of writing out a chunk of their results. Edited to add: After some more searching I found [anitpool.py](http://furius.ca/antiorm/) which looks decent, but as I'm relatively new to python I guess I just want to make sure I'm not missing a more obvious/more idiomatic/better solution.
2008/09/19
[ "https://Stackoverflow.com/questions/98687", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2168/" ]
IMO, the "more obvious/more idiomatic/better solution" is to use an existing ORM rather than invent DAO-like classes. It appears to me that ORM's are more popular than "raw" SQL connections. Why? Because Python *is* OO, and the mapping from a SQL row to an object *is* absolutely essential. There aren't many use cases where you deal with SQL rows that don't map to Python objects. I think that [SQLAlchemy](http://www.sqlalchemy.org/) or [SQLObject](http://www.sqlobject.org/) (and the associated connection pooling) are the more idiomatic Pythonic solutions. Pooling as a separate feature isn't very common because pure SQL (without object mapping) isn't very popular for the kind of complex, long-running processes that benefit from connection pooling. Yes, pure SQL *is* used, but it's always used in simpler or more controlled applications where pooling isn't helpful. I think you might have two alternatives: 1. Revise your classes to use SQLAlchemy or SQLObject. While this appears painful at first (all that work wasted), you should be able to leverage all the design and thought. It's merely an exercise in adopting a widely-used ORM and pooling solution. 2. Roll out your own simple connection pool using the algorithm you outlined -- a simple Set or List of connections that you cycle through.
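The second alternative - a simple list or queue of connections that you cycle through - can be sketched in a few lines with the standard library (Python 3 here; `connect` stands in for any zero-argument factory, e.g. a wrapper around `MySQLdb.connect`):

```python
import queue

class ConnectionPool:
    """Minimal thread-safe pool: a fixed set of pre-opened connections."""

    def __init__(self, connect, size=5):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(connect())

    def acquire(self):
        # Blocks until some other thread releases a connection.
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)

# Usage sketch with a stand-in factory:
pool = ConnectionPool(lambda: object(), size=2)
conn = pool.acquire()
# ... run queries on conn ...
pool.release(conn)
```

`queue.Queue` gives you the blocking and locking for free; a production pool would also need to detect and replace dead connections.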
I did it for OpenSearch, so you can use it as a reference. ``` from opensearchpy import OpenSearch def get_connection(): connection = None try: connection = OpenSearch( hosts=[{'host': settings.OPEN_SEARCH_HOST, 'port': settings.OPEN_SEARCH_PORT}], http_compress=True, http_auth=(settings.OPEN_SEARCH_USER, settings.OPEN_SEARCH_PASSWORD), use_ssl=True, verify_certs=True, ssl_assert_hostname=False, ssl_show_warn=False, ) except Exception as error: print("Error: Connection not established {}".format(error)) else: print("Connection established") return connection class OpenSearchClient(object): connection_pool = [] connection_in_use = [] def __init__(self): if OpenSearchClient.connection_pool: pass else: OpenSearchClient.connection_pool = [get_connection() for i in range(0, settings.CONNECTION_POOL_SIZE)] def search_data(self, query="", index_name=settings.OPEN_SEARCH_INDEX): available_cursor = OpenSearchClient.connection_pool.pop(0) OpenSearchClient.connection_in_use.append(available_cursor) response = available_cursor.search(body=query, index=index_name) # do not close the connection here: it goes back into the pool for reuse OpenSearchClient.connection_pool.append(available_cursor) OpenSearchClient.connection_in_use.pop(-1) return response ```
2,248
18,808,150
I have two accounts on my system, an admin account and a user account. I use the admin account to install MacPorts and have set the default python using ``` sudo port select --set python python27 ``` On the user account I can run all the python I need using ``` /opt/local/bin/python ``` but how do I select that to be the default? ``` port select --list python ``` reports ``` python27 (active) ``` but `which python` returns ``` /usr/bin/python ```
2013/09/15
[ "https://Stackoverflow.com/questions/18808150", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1816807/" ]
This is really a shell question. `which python` returns the first python on your PATH environment variable. The PATH variable is a list of paths that the shell searches for executables. It is usually set in .profile, .bash\_profile or .bashrc. If you reorder your paths such that `/opt/local/bin` comes before `/usr/bin`, then `/opt/local/bin/python` will be your default. This will also be what is returned by `#!/usr/bin/env python`, which is the normal shebang put at the top of python scripts.
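Concretely, in the user account's `~/.profile` (or whichever startup file that account's shell actually reads - an assumption, since it depends on the shell):

```
# Put MacPorts' bin directory ahead of the system directories.
export PATH=/opt/local/bin:$PATH
```

After opening a new shell, `which python` should report `/opt/local/bin/python`.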
You can use `alias python=/opt/local/bin/python` in your .bashrc, or the equivalent rc file for your shell.
2,258
57,903,358
I am attempting to build an image for the jetson-nano using yocto poky-warrior and the meta-tegra warrior-l4t-r32.2 layer. I've been following [this thread](https://stackoverflow.com/questions/56481980/yocto-for-nvidia-jetson-fails-because-of-gcc-7-cannot-compute-suffix-of-object/56528785#56528785) because he had the same problem as me, and the answer on that thread fixed it, but then a new problem occurred. Building with ``` bitbake core-image-minimal ``` stops with an error stating ``` ERROR: Task (…/jetson-nano/layers/poky-warrior/meta/recipes-core/libxcrypt/libxcrypt.bb:do_configure) failed with exit code '1' ``` I've been told that applying the following patch would fix this problem: ``` diff --git a/meta/recipes-core/busybox/busybox.inc b/meta/recipes- core/busybox/busybox.inc index 174ce5a8c0..e8d651a010 100644 --- a/meta/recipes-core/busybox/busybox.inc +++ b/meta/recipes-core/busybox/busybox.inc @@ -128,7 +128,7 @@ do_prepare_config () { ${S}/.config.oe-tmp > ${S}/.config fi sed -i 's/CONFIG_IFUPDOWN_UDHCPC_CMD_OPTIONS="-R -n"/CONFIG_IFUPDOWN_UDHCPC_CMD_OPTIONS="-R -b"/' ${S}/.config - sed -i 's|${DEBUG_PREFIX_MAP}||g' ${S}/.config + #sed -i 's|${DEBUG_PREFIX_MAP}||g' ${S}/.config } # returns all the elements from the src uri that are .cfg files diff --git a/meta/recipes-core/libxcrypt/libxcrypt.bb b/meta/recipes-core/libxcrypt/libxcrypt.bb index 3b9af6d739..350f7807a7 100644 --- a/meta/recipes-core/libxcrypt/libxcrypt.bb +++ b/meta/recipes-core/libxcrypt/libxcrypt.bb @@ -24,7 +24,7 @@ FILES_${PN} = "${libdir}/libcrypt*.so.* ${libdir}/libcrypt-*.so ${libdir}/libowc S = "${WORKDIR}/git" BUILD_CPPFLAGS = "-I${STAGING_INCDIR_NATIVE} -std=gnu99" -TARGET_CPPFLAGS = "-I${STAGING_DIR_TARGET}${includedir} -Wno-error=missing-attributes" -CPPFLAGS_append_class-nativesdk = " -Wno-error=missing-attributes" +TARGET_CPPFLAGS = "-I${STAGING_DIR_TARGET}${includedir} " +CPPFLAGS_append_class-nativesdk = " " BBCLASSEXTEND = "nativesdk" ``` So I've made a libxcrypt.patch file and 
copy pasted the patch content and put the file in my poky meta layer. But how do I apply the patch? I can't figure out what to do from here. Do I need to make a bbappend file or add to one? - if so, which one? Or do I need to edit a .bb file? - maybe libxcrypt.bb? And do I need to add these lines: ``` FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:" SRC_URI += "file://path/to/patch/file" ``` I've been trying to look at similar Stack Overflow posts about this, but they don't seem to be precise enough for me to work it out, as I am completely new to Yocto and the likes. So far I've tried to add the lines ``` FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:" SRC_URI += "file://path/to/patch/file" ``` to the libxcrypt.bb file but it says it cannot find the file to patch. Then I found out this could potentially be solved by adding ;striplevel=0 to the SRC\_URI line, so I did this: ``` SRC_URI += "file://path/to/patch/file;striplevel=0" ``` Which did nothing. Then I tried to put ``` --- a/meta/recipes-core/busybox/busybox.inc +++ b/meta/recipes-core/busybox/busybox.inc ``` at the top of the patch file, but this also did nothing. 
This is the full error message without attempting to apply the patch: ``` ERROR: libxcrypt-4.4.2-r0 do_configure: configure failed ERROR: libxcrypt-4.4.2-r0 do_configure: Function failed: do_configure (log file is located at /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/temp/log.do_configure.42560) ERROR: Logfile of failure stored in: /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/temp/log.do_configure.42560 Log data follows: | DEBUG: SITE files ['endian-little', 'bit-64', 'arm-common', 'arm-64', 'common-linux', 'common-glibc', 'aarch64-linux', 'common'] | DEBUG: Executing shell function autotools_preconfigure | DEBUG: Shell function autotools_preconfigure finished | DEBUG: Executing python function autotools_aclocals | DEBUG: SITE files ['endian-little', 'bit-64', 'arm-common', 'arm-64', 'common-linux', 'common-glibc', 'aarch64-linux', 'common'] | DEBUG: Python function autotools_aclocals finished | DEBUG: Executing shell function do_configure | automake (GNU automake) 1.16.1 | Copyright (C) 2018 Free Software Foundation, Inc. | License GPLv2+: GNU GPL version 2 or later <https://gnu.org/licenses/gpl-2.0.html> | This is free software: you are free to change and redistribute it. | There is NO WARRANTY, to the extent permitted by law. | | Written by Tom Tromey <tromey@redhat.com> | and Alexandre Duret-Lutz <adl@gnu.org>. 
| AUTOV is 1.16 | NOTE: Executing ACLOCAL="aclocal --system-acdir=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot/usr/share/aclocal/ --automake-acdir=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/share/aclocal-1.16" autoreconf -Wcross --verbose --install --force --exclude=autopoint -I /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/git/m4/ -I /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/share/aclocal/ | autoreconf: Entering directory `.' | autoreconf: configure.ac: not using Gettext | autoreconf: running: aclocal --system-acdir=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot/usr/share/aclocal/ --automake-acdir=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/share/aclocal-1.16 -I /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/git/m4/ -I /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/share/aclocal/ --force -I m4 | autoreconf: configure.ac: tracing | autoreconf: running: libtoolize --copy --force | libtoolize: putting auxiliary files in AC_CONFIG_AUX_DIR, 'm4'. | libtoolize: copying file 'm4/ltmain.sh' | libtoolize: putting macros in AC_CONFIG_MACRO_DIRS, 'm4'. 
| libtoolize: copying file 'm4/libtool.m4' | libtoolize: copying file 'm4/ltoptions.m4' | libtoolize: copying file 'm4/ltsugar.m4' | libtoolize: copying file 'm4/ltversion.m4' | libtoolize: copying file 'm4/lt~obsolete.m4' | autoreconf: running: /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/bin/autoconf --include=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/git/m4/ --include=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/share/aclocal/ --force | autoreconf: running: /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/bin/autoheader --include=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/git/m4/ --include=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/share/aclocal/ --force | autoreconf: running: automake --add-missing --copy --force-missing | configure.ac:31: installing 'm4/compile' | configure.ac:30: installing 'm4/config.guess' | configure.ac:30: installing 'm4/config.sub' | configure.ac:17: installing 'm4/install-sh' | configure.ac:17: installing 'm4/missing' | Makefile.am: installing './INSTALL' | Makefile.am: installing 'm4/depcomp' | parallel-tests: installing 'm4/test-driver' | autoreconf: running: gnu-configize | autoreconf: Leaving directory `.' 
| NOTE: Running ../git/configure --build=x86_64-linux --host=aarch64-poky-linux --target=aarch64-poky-linux --prefix=/usr --exec_prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --libexecdir=/usr/libexec --datadir=/usr/share --sysconfdir=/etc --sharedstatedir=/com --localstatedir=/var --libdir=/usr/lib --includedir=/usr/include --oldincludedir=/usr/include --infodir=/usr/share/info --mandir=/usr/share/man --disable-silent-rules --disable-dependency-tracking --with-libtool-sysroot=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot --disable-static | configure: loading site script /home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/site/endian-little | configure: loading site script /home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/site/arm-common | configure: loading site script /home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/site/arm-64 | configure: loading site script /home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/site/common-linux | configure: loading site script /home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/site/common-glibc | configure: loading site script /home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/site/common | configure: loading site script /home/mci/yocto/dev-jetson-nano/layers/meta-openembedded/meta-networking/site/endian-little | checking for a BSD-compatible install... /home/mci/yocto/dev-jetson-nano/build/tmp/hosttools/install -c | checking whether build environment is sane... yes | checking for aarch64-poky-linux-strip... aarch64-poky-linux-strip | checking for a thread-safe mkdir -p... /home/mci/yocto/dev-jetson-nano/build/tmp/hosttools/mkdir -p | checking for gawk... gawk | checking whether make sets $(MAKE)... yes | checking whether make supports nested variables... yes | checking build system type... x86_64-pc-linux-gnu | checking host system type... aarch64-poky-linux-gnu | checking for aarch64-poky-linux-gcc... 
aarch64-poky-linux-gcc -march=armv8-a+crc -fstack-protector-strong -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot | checking whether the C compiler works... no | configure: error: in `/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/build': | configure: error: C compiler cannot create executables | See `config.log' for more details | NOTE: The following config.log files may provide further information. | NOTE: /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/build/config.log | ERROR: configure failed | WARNING: /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/temp/run.do_configure.42560:1 exit 1 from 'exit 1' | ERROR: Function failed: do_configure (log file is located at /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/temp/log.do_configure.42560) ERROR: Task (/home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/recipes-core/libxcrypt/libxcrypt.bb:do_configure) failed with exit code '1' NOTE: Tasks Summary: Attempted 883 tasks of which 848 didn't need to be rerun and 1 failed. ``` This is the full error log when I try to add the lines to the libxcrypt.bb file to apply the patch: ``` ERROR: libxcrypt-4.4.2-r0 do_patch: Command Error: 'quilt --quiltrc /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/etc/quiltrc push' exited with 0 Output: Applying patch libxcrypt.patch can't find file to patch at input line 7 Perhaps you used the wrong -p or --strip option? 
The text leading up to this was: -------------------------- |--- a/meta/recipes-core/busybox/busybox.inc |+++ b/meta/recipes-core/busybox/busybox.inc |diff --git a/meta/recipes-core/busybox/busybox.inc b/meta/recipes-core/busybox/busybox.inc |index 174ce5a8c0..e8d651a010 100644 |--- a/meta/recipes-core/busybox/busybox.inc |+++ b/meta/recipes-core/busybox/busybox.inc -------------------------- No file to patch. Skipping patch. 1 out of 1 hunk ignored can't find file to patch at input line 20 Perhaps you used the wrong -p or --strip option? The text leading up to this was: -------------------------- |diff --git a/meta/recipes-core/libxcrypt/libxcrypt.bb b/meta/recipes-core/libxcrypt/libxcrypt.bb |index 3b9af6d739..350f7807a7 100644 |--- a/meta/recipes-core/libxcrypt/libxcrypt.bb |+++ b/meta/recipes-core/libxcrypt/libxcrypt.bb -------------------------- No file to patch. Skipping patch. 1 out of 1 hunk ignored Patch libxcrypt.patch does not apply (enforce with -f) ERROR: libxcrypt-4.4.2-r0 do_patch: ERROR: libxcrypt-4.4.2-r0 do_patch: Function failed: patch_do_patch ERROR: Logfile of failure stored in: /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/temp/log.do_patch.34179 ERROR: Task (/home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/recipes-core/libxcrypt/libxcrypt.bb:do_patch) failed with exit code '1' NOTE: Tasks Summary: Attempted 811 tasks of which 793 didn't need to be rerun and 1 failed. ``` I know this might be a trivial question for a lot, but as a new developer this is very hard to figure out on my own.
2019/09/12
[ "https://Stackoverflow.com/questions/57903358", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5999131/" ]
When using a Callable in dictConfig, the Callable you put into the value of dictConfig has to be a Callable which returns a Callable, as discussed in the Python Bug Tracker: * <https://bugs.python.org/issue41906> E.g. ```py def my_filter_wrapper(): # the returned Callable has to accept a single argument (the LogRecord instance passed into this callable) with return value of 1 or 0 return lambda record: 0 if <your_condition_here> else 1 logging_dict = { ... 'filters': { 'ignore_progress': { '()': my_filter_wrapper, } }, ... ``` Or even simpler, if your custom filtering logic is a one-liner and independent of the log record instance: ```py logging_dict = { ... 'filters': { 'ignore_progress': { '()': lambda : lambda _: 0 if <your_condition> else 1 } }, ... ``` It took me a long while to figure this out. Hope it helps anyone who has the same question. And the Python implementation could definitely be made more elegant.
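A complete, runnable sketch of the factory pattern described above (the name `drop_progress_factory` and the "progress" condition are illustrative, not from the original config):

```python
import logging
import logging.config

def drop_progress_factory():
    # dictConfig calls this factory via the '()' key to build the filter;
    # the returned callable receives a LogRecord and returns 0 to drop it
    return lambda record: 0 if "progress" in record.getMessage() else 1

logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "filters": {"ignore_progress": {"()": drop_progress_factory}},
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "filters": ["ignore_progress"],
        }
    },
    "root": {"handlers": ["console"], "level": "INFO"},
})

logging.info("progress 42%")   # dropped by the filter
logging.info("job finished")   # reaches the console handler
```

This works because `logging.Filterer.filter` accepts any plain callable as a filter (since Python 3.2), so the factory only needs to return a function taking a `LogRecord`.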
I suggest using [loguru](https://github.com/Delgan/loguru) as your logging package. You can easily add a handler for your logger.
2,259
31,444,776
I want to create a bunch of simple geometric shapes (colored rectangles, triangles, squares ...) using pygame and then later analyze their relations and features. I first tried [turtle](https://docs.python.org/2/library/turtle.html) but apparently that is only a graphing library and cannot keep track of the shapes it creates and I wonder if the same holds true for Pygame. To illustrate the point, say I have this script: ``` # Import a library of functions called 'pygame' import pygame from math import pi # Initialize the game engine pygame.init() # Define the colors we will use in RGB format BLACK = ( 0, 0, 0) WHITE = (255, 255, 255) BLUE = ( 0, 0, 255) GREEN = ( 0, 255, 0) RED = (255, 0, 0) # Set the height and width of the screen size = [800, 600] screen = pygame.display.set_mode(size) pygame.display.set_caption("Example code for the draw module") #Loop until the user clicks the close button. done = False clock = pygame.time.Clock() while not done: # This limits the while loop to a max of 10 times per second. # Leave this out and we will use all CPU we can. clock.tick(10) for event in pygame.event.get(): # User did something if event.type == pygame.QUIT: # If user clicked close done=True # Flag that we are done so we exit this loop screen.fill(WHITE) # Draw a rectangle outline pygame.draw.rect(screen, BLACK, [75, 10, 50, 20], 2) # Draw a solid rectangle pygame.draw.rect(screen, BLACK, [150, 10, 50, 20]) # Draw an ellipse outline, using a rectangle as the outside boundaries pygame.draw.ellipse(screen, RED, [225, 10, 50, 20], 2) # Draw a circle pygame.draw.circle(screen, BLUE, [60, 250], 40) # Go ahead and update the screen with what we've drawn. # This MUST happen after all the other drawing commands. pygame.display.flip() # Be IDLE friendly pygame.quit() ``` It creates this image: ![enter image description here](https://i.stack.imgur.com/8oisH.jpg) Now, suppose I save the image created by Pygame. 
Is there a way Pygame would be able to detect the shapes, colors and coordinates from the image?
2015/07/16
[ "https://Stackoverflow.com/questions/31444776", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4321788/" ]
PyGame is a gaming library - it helps with making graphics, audio and controllers for games. It has no support for detecting objects in a preexisting image. What you want is OpenCV (it has Python bindings) - it is made to "understand" things about an image. One popular algorithm used to detect shapes (or edges) of any sort is the Hough transform. You can read more about it here - <http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html> OpenCV has Hough transform functions built in, which are very useful. --- You could attempt to write your own Hough transform code and use it ... but libraries make it easier.
Yes, it can. Pygame is also good for making games, but unfortunately you can't convert them to iOS or Android. In the past there was a program called PGS4A which allowed you to convert pygame projects to Android, but sadly it has been discontinued, and now there is no way. In that case, my suggestion would be: if you ever want to do this, download Android Studio from "<http://developer.android.com/sdk/index.html#top>" and google how to use libgdx with Android Studio; there is an extremely helpful multi-part tutorial on that. If your goal is to make commercial applications, I would highly recommend you check this tutorial "<https://www.youtube.com/watch?v=_pwJv1QRSPM>" - extremely helpful. Good luck with your goals, and I hope this helped you make your decision. Python is a good programming language; it will give you a basic idea of what programming is like.
2,261
51,772,333
I am new to python and would love to know this. Suppose I want to scrape stock price data from a website to Excel. Now the data keeps refreshing every second; how do I refresh the data on my Excel sheet automatically using Python? I have read about win32 but couldn't understand its use much. Any help would be dearly appreciated.
2018/08/09
[ "https://Stackoverflow.com/questions/51772333", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10041192/" ]
As stated in the documentation: > > Help on built-in function readlines: > > > readlines(hint=-1, /) method of \_io.TextIOWrapper instance > Return a list of lines from the stream. > > > > ``` > hint can be specified to control the number of lines read: no more > lines will be read if the total size (in bytes/characters) of all > lines so far exceeds hint. > > ``` > > Once you have consumed all lines, the next call to `readlines` will be empty. Change your function to store the result in a temporary variable: ``` with open(os.path.join(root, file)) as fRead: lines = fRead.readlines() line_3 = lines[3] line_4 = lines[4] print line_3 print line_4 ```
The method `readlines()` reads all lines in a file until it hits the EOF (end of file). The "cursor" is then at the end of the file and a subsequent call to `readlines()` will not yield anything, because EOF is directly found. Hence, after `line_3 = fRead.readlines()[3]` you have consumed the whole file but only stored the fourth (!) line of the file (if you start to count the lines at 1). If you do ``` all_lines = fRead.readlines() line_3 = all_lines[3] line_4 = all_lines[4] ``` you have read the file only once and saved every information you needed.
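Both answers above can be demonstrated with a small self-contained script (the file contents here are made up); note how the second `readlines()` call returns an empty list until the cursor is rewound with `seek(0)`:

```python
import tempfile

with tempfile.NamedTemporaryFile("w+", suffix=".txt") as f:
    f.write("a\nb\nc\nd\ne\n")
    f.seek(0)

    lines = f.readlines()            # consumes the whole file
    print(lines[3].strip())          # → d  (index 3 is the fourth line)
    print(f.readlines())             # → []  (cursor already at EOF)

    f.seek(0)                        # rewind instead of re-reading the file
    print(f.readlines()[4].strip())  # → e
```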
2,262
34,124,259
I'm new here and fairly new to python and I have a question. I had a similar question during my midterm a while back and it has bugged me that I cannot seem to figure it out. The overall idea was that I had to find the longest string in a nested list. So I came up with my own example to try and figure it out but for some reason I just can't. So I was hoping someone could tell me what I did wrong and how I can go about the problem without using the function max but instead with a for loop. This is my own example with my code: ``` typing_test = ['The', ['quick', 'brown'], ['fox', ['jumped'], 'over'], 'the', 'lazy', 'dog'] def longest_string (nested_list: 'nested list of strings') -> int: '''return the longest string within the nested list''' maximum_length = 0 for word in nested_list: try: if type(word) == str: maximum_length >= len(word) maximum_length = len(word) else: (longest_string((word))) except: print('Error') return maximum_length ``` My code returns 3 but the highest should be 6 because of the length of jumped I'm not sure if it's going through each list and checking each strings length. In short I don't think it is replacing/updating the longest string. So if someone can tell me what I'm doing wrong or how to fix my example I would greatly appreciate it. And thank you very much in advance.
2015/12/06
[ "https://Stackoverflow.com/questions/34124259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5647743/" ]
As Simon mentioned, you should be using `FindAllString` to find all matches. Also, you need to remove the ^ from the beginning of the RE (^ anchors the pattern to the beginning of the string). You should also move the regexp.Compile outside the loop for efficiency.
<https://play.golang.org/p/Q_yfub0k80> As mentioned here, `FindAllString` returns a slice of all successive matches of the regular expression. But, `FindString` returns the leftmost match.
2,263
49,147,937
I am trying to get specific coordinates in an image. I have marked a red dot in the image at several locations to specify the coordinates I want to get. In GIMP I used the purest red I could find (HTML notation **ff0000**). The idea was that I would iterate through the image until I found a pure shade of red and then print out the coordinates. I am using python and opencv to do so but I can't find any good tutorials (best I could find is [this](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_core/py_basic_ops/py_basic_ops.html) but it's not very clear...at least for me). Here is an example of the image I am dealing with.[![enter image description here](https://i.stack.imgur.com/OXmDn.png)](https://i.stack.imgur.com/OXmDn.png) I just want to know how to find the coordinates of the pixels with the red dots. EDIT (added code): ``` import cv2 import numpy as np img = cv2.imread('image.jpg') width, height = img.shape[:2] for i in range(0,width): for j in range(0,height): px = img[i,j] ``` I don't know what to do from here. I have tried code such as `if px == [x,y,z]` looking for color detection but that doesn't work.
2018/03/07
[ "https://Stackoverflow.com/questions/49147937", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4902160/" ]
You can do it with cv2 this way: ``` image = cv2.imread('image.jpg') lower_red = np.array([0,0,220]) # BGR-code of your lowest red upper_red = np.array([10,10,255]) # BGR-code of your highest red mask = cv2.inRange(image, lower_red, upper_red) #get all non zero values coord=cv2.findNonZero(mask) ```
You can do this with PIL and numpy. I'm sure there is a similar implementation with cv2. (Note the comparisons must use `and`, not the bitwise `&`, which binds more tightly than `==` and would give the wrong result.) ``` from PIL import Image import numpy as np img = Image.open('image.png') width, height = img.size[:2] px = np.array(img) for i in range(height): for j in range(width): if px[i,j,0] == 255 and px[i,j,1] == 0 and px[i,j,2] == 0: print(i,j,px[i,j]) ``` This doesn't work with the image you provided, since there aren't any pixels that are exactly (255,0,0). Something may have changed when it got compressed to a .jpg, or you didn't make them as red as you thought you did. Perhaps you could try turning off anti-aliasing in GIMP.
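A vectorized alternative to the pixel loop above, using only NumPy (the tiny image is fabricated for illustration; note PIL gives RGB order, while `cv2.imread` gives BGR, so pure red would be `(0, 0, 255)` there):

```python
import numpy as np

# fabricated 4x4 RGB image with two pure-red pixels
px = np.zeros((4, 4, 3), dtype=np.uint8)
px[1, 2] = (255, 0, 0)
px[3, 0] = (255, 0, 0)

# match a pixel only when all three channels agree
rows, cols = np.where((px == (255, 0, 0)).all(axis=2))
coords = list(zip(rows.tolist(), cols.tolist()))
print(coords)  # → [(1, 2), (3, 0)]
```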
2,264
57,462,530
I need to have a python GUI communicating with an mbed (LPC1768) board. I am able to send a string from the mbed board to python's IDLE but when I try to send a value back to the mbed board, it does not work as expected. I have written a very basic program where I read a string from the mbed board and print it on Python's IDLE. The program should then ask the user to type a value, which should be sent to the mbed board. This value should set the time between LEDs flashing. The python code ``` import serial ser = serial.Serial('COM8', 9600) try: ser.open() except: print("Port already open") out= ser.readline() #while(1): print(out) time=input("Enter a time: " ) print (time) ser.write(time.encode()) ser.close() ``` and the mbed c++ code ``` #include "mbed.h" //DigitalOut myled(LED1); DigitalOut one(LED1); DigitalOut two(LED2); DigitalOut three(LED3); DigitalOut four(LED4); Serial pc(USBTX, USBRX); float c = 0.2; int main() { while(1) { pc.printf("Hello World!\n"); one = 1; wait(c); two=1; one = 0; wait(c); two=0; c = float(pc.getc()); three=1; wait(c); three=0; four=1; wait(c); four=0; } } ``` The program waits for the value to be entered in IDLE and sent to the mbed board and begins to use the value sent to it but suddenly stops working and I cannot figure out why.
2019/08/12
[ "https://Stackoverflow.com/questions/57462530", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11671221/" ]
If using index labels 2 through 4 (inclusive), use `loc`: ``` df.loc[2:4, 'number'].max() ``` Output: ``` 10 ``` If using integer positions 2 through 4, use `iloc` (the slice stop is exclusive, hence `2:5`): ``` df.iloc[2:5, df.columns.get_loc('number')].max() ``` *Note: you must use `get_loc` to get the integer position of the column 'number'* Output: ``` 10 ```
This can also be used: ``` >>> df.iloc[2:4,:].loc[:,'number'].max() 10 ```
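To make the two approaches above concrete, here is a small fabricated frame (the column values are invented, since the original data isn't shown):

```python
import pandas as pd

df = pd.DataFrame({"number": [1, 5, 10, 7, 3]})

# label-based: .loc slices include both endpoints
print(df.loc[2:4, "number"].max())  # rows labeled 2, 3, 4 → 10

# position-based: .iloc slices exclude the stop, hence 2:5
print(df.iloc[2:5, df.columns.get_loc("number")].max())  # → 10
```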
2,265
51,062,920
I'm trying to import the **mysqlclient** library for Python with **pip**. When I use the command `pip install mysqlclient` it returns an error: ``` Collecting mysqlclient Using cached https://files.pythonhosted.org/packages/ec/fd/83329b9d3e14f7344d1cb31f128e6dbba70c5975c9e57896815dbb1988ad/mysqlclient-1.3.13.tar.gz Installing collected packages: mysqlclient Running setup.py install for mysqlclient ... error Complete output from command c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install-40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip-record-va173t5v\install-record.txt --single-version-externally-managed --compile: c:\users\astrina\appdata\local\programs\python\python36\lib\distutils\dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type' warnings.warn(msg) running install running build running build_py creating build creating build\lib.win-amd64-3.6 copying _mysql_exceptions.py -> build\lib.win-amd64-3.6 creating build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\__init__.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\compat.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\connections.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\converters.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\cursors.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\release.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\times.py -> build\lib.win-amd64-3.6\MySQLdb creating build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\__init__.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CLIENT.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CR.py -> build\lib.win-amd64-3.6\MySQLdb\constants
copying MySQLdb\constants\ER.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FIELD_TYPE.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FLAG.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\REFRESH.py -> build\lib.win-amd64-3.6\MySQLdb\constants running build_ext building '_mysql' extension error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools ---------------------------------------- Command "c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install- 40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open) (__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip- record-va173t5v\install-record.txt --single-version-externally-managed -- compile" failed with error code 1 in C:\Users\astrina\AppData\Local\Temp\pip- install-40l_x_f4\mysqlclient\ ``` I've already installed **Microsoft Build Tools 2015** but the problem persists
2018/06/27
[ "https://Stackoverflow.com/questions/51062920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9595624/" ]
First install Python 3.6.5, then run ``` pip install mysqlclient==1.3.12 ```
For me, it was a mixture of old setuptools and missing packages: ``` pip install --upgrade setuptools apt install gcc libssl-dev ```
2,266
60,520,272
I'm new to python and I've looked up a little bit of info, and I can't find the problem with my code. Please help. Code: ``` array = [] print ('Enter values in array: ') for i in range(0,5): n = input("value: ") array.append(n) a = input("Enter search term: ") for i in range(len(array)): found = False while found == False : if a == array(i): found = True position = i else : found = False print("Your search term is in position " + position) ``` Error: at the `if a == array(i)` line it says > > list object is not callable > > >
2020/03/04
[ "https://Stackoverflow.com/questions/60520272", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13003923/" ]
You don't? > > Should prettier not be installed locally with your project's dependencies or globally on the machine, the version of prettier that is bundled with the extension will be used. > > > <https://github.com/prettier/prettier-vscode#prettier-resolution>
It seems like you want Prettier to be your code formatter in VS Code for all of your projects. In VS Code navigate to: > > File > Preferences > Settings > > > Search for "Default formatter" And then select "esbenp.prettier-vscode"
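Equivalently, the same choice can be stored in `settings.json` (a sketch of the relevant entries; the extension id `esbenp.prettier-vscode` comes from the answer above, and `formatOnSave` is optional):

```json
{
  "editor.defaultFormatter": "esbenp.prettier-vscode",
  "editor.formatOnSave": true
}
```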
2,276
6,361,775
I know there are quite a few solutions for this problem but mine was peculiar in the sense that, I might get truncated utf16 data and yet have to make the best effort of dealing with conversions where decode and encode will fail with UnicodeDecodeError. So came up with the following code in python. Please let me know your comments on how I can improve them for faster processing. ``` try: # conversion to ascii if utf16 data is formatted correctly input = open(filename).read().decode('UTF16') asciiStr = input.encode('ASCII', 'ignore') open(filename).close() return asciiStr except: # if fail with UnicodeDecodeError, then use brute force # to decode truncated data try: unicode = open(filename).read() if (ord(unicode[0]) == 255 and ord(unicode[1]) == 254): print("Little-Endian format, UTF-16") leAscii = "".join([(unicode[i]) for i in range(2, len(unicode), 2) if 0 < ord(unicode[i]) < 127]) open(filename).close() return leAscii elif (ord(unicode[0]) == 254 and ord(unicode[1]) == 255): print("Big-Endian format, UTF-16") beAscii = "".join([(unicode[i]) for i in range(3, len(unicode), 2) if 0 < ord(unicode[i]) < 127]) open(filename).close() return beAscii else: open(filename).close() return None except: open(filename).close() print("Error in converting to ASCII") return None ```
2011/06/15
[ "https://Stackoverflow.com/questions/6361775", "https://Stackoverflow.com", "https://Stackoverflow.com/users/684799/" ]
What about: ``` data = open(filename).read() try: data = data.decode("utf-16") except UnicodeDecodeError: data = data[:-1].decode("utf-16") ``` I.e. if it's truncated mid-way through a code unit, snip the last byte off, and do it again. That should get you back to a valid UTF-16 string, without having to try to implement a decoder yourself.
This just jumped out at me as a "best practice" improvement. File accesses should really be wrapped in `with` blocks. This will handle opening and cleaning up for you.
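Combining the byte-trim retry from the accepted approach with a `with` block gives a compact Python 3 sketch (the function name is mine):

```python
def read_utf16_lenient(filename):
    # read raw bytes; 'with' guarantees the file is closed even on error
    with open(filename, "rb") as f:
        data = f.read()
    try:
        return data.decode("utf-16")
    except UnicodeDecodeError:
        # possibly truncated mid code unit: drop the last byte and retry
        return data[:-1].decode("utf-16")
```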
2,277
52,372,489
I want to get the average brightness of a file in Python. Having read a previous question [[Problem getting terminal output from ImageMagick's compare.exe ( Either by pipe or Python )](https://stackoverflow.com/questions/5145508/problem-getting-terminal-output-from-imagemagicks-compare-exe-either-by-pipe]) I have come up with: ``` cmd='/usr/bin/convert {} -format "%[fx:100*image.mean]\n" info: > bright.txt'.format(full) subprocess.call(cmd,shell=True) with open('bright.txt', 'r') as myfile: x=myfile.read().replace('\n', '') return x ``` The previous question recommended 'pythonmagick', which I can find, but with no current documentation and very little recent activity; I could not work out the syntax to use it. I know that my code is unsatisfactory, but it does work. Is there a better way which does not need `shell=True` or additional file processing?
2018/09/17
[ "https://Stackoverflow.com/questions/52372489", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7869335/" ]
This seems to work for me to return the mean as a variable that can be printed. **(This is a bit erroneous. See the correction near the bottom)** ``` #!/opt/local/bin/python3.6 import subprocess cmd = '/usr/local/bin/convert lena.jpg -format "%[fx:100*mean]" info:' mean=subprocess.call(cmd, shell=True) print (mean) ``` The result is 70.67860, which is returned to the terminal. This also works with shell=False, if you pass each part of the command as a separate list element. ``` #!/opt/local/bin/python3.6 import subprocess cmd = ['/usr/local/bin/convert','lena.jpg','-format','%[fx:100*mean]','info:'] mean=subprocess.call(cmd, shell=False) print (mean) ``` The result is 70.67860, which is returned to the terminal. **The comment from `tripleee` below indicates that my process above is not correct in that the mean is being shown at the terminal, but not actually put into the variable.** He suggested using `subprocess.check_output()`. The following is his solution. (Thank you, tripleee) ``` #!/opt/local/bin/python3.6 import subprocess filename = 'lena.jpg' mean=subprocess.check_output( ['/usr/local/bin/convert', filename, '-format', 'mean=%[fx:100*mean]', 'info:'], universal_newlines=True) print (mean) ``` Prints: `mean=70.6786`
You can probably improve the subprocess call, and eliminate the temporary text file, with `Popen` + `PIPE`. ```py cmd=['/usr/bin/convert', full, '-format', '%[fx:100*image.mean]', 'info:'] pid = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) out, err = pid.communicate() return float(out.decode()) # decode the bytes before converting ``` ImageMagick also ships with the `identify` utility. The same result can be achieved with... ```py cmd=['/usr/bin/identify', '-format', '%[fx:100*image.mean]', full] ``` It might be worth exploring working directly with ImageMagick's shared libraries, usually connected through the C-API (pythonmagick, wand, etc.). For what you're doing, though, this would only increase code complexity and module dependencies, and in no way improve performance or accuracy.
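If shelling out can be avoided altogether, the same statistic can be computed in-process with Pillow and NumPy (a sketch; the tiny image here is synthetic, and a real use would call `Image.open(filename)` instead):

```python
from PIL import Image
import numpy as np

img = Image.new("L", (4, 4))   # all-black 4x4 grayscale image
img.putpixel((0, 0), 255)      # one white pixel

arr = np.asarray(img, dtype=float) / 255.0
mean_percent = 100 * arr.mean()  # same quantity as ImageMagick's 100*image.mean
print(mean_percent)  # → 6.25
```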
2,280
41,861,138
I am trying to loop through subreddits, but want to ignore the sticky posts at the top. I am able to print the first 5 posts, unfortunately including the stickies. Various pythonic methods of trying to skip these have failed. Two different examples of my code below. ``` subreddit = reddit.subreddit(sub) for submission in subreddit.hot(limit=5): # If we haven't replied to this post before if submission.id not in posts_replied_to: ##FOOD if subreddit == 'food': if 'pLEASE SEE' in submission.title: pass if "please vote" in submission.title: pass else: print(submission.title) if re.search("please vote", submission.title, re.IGNORECASE): pass else: print(submission.title) ``` I noticed a sticky tag in the documents but not sure exactly how to use it. Any help is appreciated.
2017/01/25
[ "https://Stackoverflow.com/questions/41861138", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4750577/" ]
[It looks like you can get the id of a stickied post based on docs](http://praw.readthedocs.io/en/latest/code_overview/models/subreddit.html?highlight=sticky). So perhaps you could get the id(s) of the stickied post(s) (note that with the 'number' parameter of the sticky method you can say give me the first, or second, or third, stickied post; use this to your advantage to get *all* of the stickied posts) and for each submission that you are going to pull, first check its id against the stickied ids. Example: ``` # assuming there are no more than three stickies... stickies = [reddit.subreddit("chicago").sticky(i).id for i in range(1,4)] ``` and then when you want to make sure a given post isn't stickied, use: ``` if post.id not in stickies: do something ``` It looks like, were there fewer than three, this would give you a list with duplicate ids, which won't be a problem.
As an addendum to @Al Avery's answer, you can do a complete search for the IDs of all stickies on a given subreddit by doing something like ``` import itertools import prawcore def get_all_stickies(sub): stickies = set() for i in itertools.count(1): try: sid = sub.sticky(i).id except prawcore.NotFound: break if sid in stickies: break stickies.add(sid) return stickies ``` This function takes into account that the documentation leads one to expect an error if an invalid index is supplied to `sticky`, while the actual behavior seems to be that a duplicate ID is returned. Using a `set` instead of a list makes lookup faster if you have a large number of stickies. You would use the function as ``` subreddit = reddit.subreddit(sub) stickies = get_all_stickies(subreddit) for submission in subreddit.hot(limit=5): if submission.id not in posts_replied_to and submission.id not in stickies: print(submission.title) ```
2,281
32,221,890
I want a user to input a list with an object on every new line. The user will copy and paste a whole list into the program, not enter a new object each time. For example, here is the user's input: > > january > > february > > march > > april > > may > > june > > > and they get a list just like this: ``` ('january','february','march','april','may','june') ``` Does someone have an idea for Python code that can help me?
2015/08/26
[ "https://Stackoverflow.com/questions/32221890", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4982967/" ]
You should use the <http://eonasdan.github.io/bootstrap-datetimepicker/> datetimepicker, setting the format of the `dateTimePicker` to `'hh:mm:ss'`. You also have to include `moment.js`; for more formats, check: <http://momentjs.com/docs/#/displaying/format/> I have created a JSFiddle: <http://jsfiddle.net/jagtx65n/> HTML: ``` <div class="col-sm-6"> <div class="form-group"> <div class="input-group date" id="datetimepicker1"> <input type="text" class="form-control"> <span class="input-group-addon"> <span class="glyphicon glyphicon-calendar"></span> </span> </div> </div> </div> ``` JS: ``` $(function () { $('#datetimepicker1').datetimepicker({ format: 'hh:mm:ss' }); }); ``` **EDIT** to open the picker when the input field is clicked: ``` $(function(){ $('#datetimepicker1').datetimepicker({ format: 'hh:mm:ss', allowInputToggle: true }); }); ```
[DEMO](http://jsfiddle.net/SantoshPandu/B4BzK/466/) HTML ``` <div class="container"> <div class="row"> <div class="col-sm-6 form-group"> <label for="dd" class="sr-only">Time Pick</label> <input type="text" id="dd" name="dd" data-format="MM/DD/YYYY" placeholder="date" class="form-control" /> </div> </div> </div> </div> <input type='button' id='clear' Value='Clear Date'> ``` JS ``` var Date = $('#dd').datetimepicker({ format: 'DD-MM-YYYY hh:mm:ss', }) $('#clear').click(function () { $('#dd').data("DateTimePicker").clear() }) ```
2,284
53,451,057
I would like to display the following ``` $ env/bin/python >>>import requests >>> requests.get('http://dabapps.com') <Response [200]> ``` as a code sample within a bullet paragraph for Github styled markdown. How do I do it?
2018/11/23
[ "https://Stackoverflow.com/questions/53451057", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5722359/" ]
> > h:25:59: friend declaration declares a non-template function. > > > You need to declare the function as a template that takes `Pairwise<K, V>`. Note that the friend's template parameters must not reuse the names `K` and `V`, which would shadow the class template's parameters: header.h: ``` #ifndef HEADER_H_INCLUDED /* or pragma once */ #define HEADER_H_INCLUDED /* if you like it */ #include <iostream> // or <ostream> template<typename K, typename V> class Pairwise { // made it a class so that the K first; // friend actually makes sense. V second; public: Pairwise() = default; Pairwise(K first, V second) : first{ first }, second{ second } {} template<typename K2, typename V2> // not K, V: that would shadow friend std::ostream& operator<<(std::ostream &out, Pairwise<K2, V2> const &p) { return out << p.first << ": " << p.second; } }; #endif /* HEADER_H_INCLUDED */ ``` source file: ``` #include <iostream> // the user can't know a random header includes it #include <string> #include "header.h" int main() { Pairwise<std::string, std::string> p{ "foo", "bar" }; std::cout << p << '\n'; } ``` Sidenote: You could also use ``` { using Stringpair = Pairwise<std::string, std::string>; // ... Stringpair sp{ "foo", "bar" }; } ``` if you need that more often. The other errors you got result from confusing `std::ostringstream` with `std::ostream` in `operator<<()`.
As you write it, you define the operator as a member function, which is very likely not intended. Divide it like ... ``` template<typename K, typename V> struct Pairwise{ K first; V second; Pairwise() = default; Pairwise(K, V); //print out as a string in main friend ostream& operator<<(ostream &out, const Pairwise &n); }; template<typename K, typename V> ostream& operator<<(ostream &out, const Pairwise<K,V> &n) { ... return out; } ``` And it should work. BTW: Note that in a `struct` all members are public by default; so you would be able to access them even in absence of the `friend`-declaration.
2,287
43,513,121
As per my application requirements, I need to get the server IP and the server name from the Python program. But my application resides inside a specific Docker container on top of Ubuntu. I have tried the below ``` import os os.system("hostname") # to get the hostname os.system("hostname -i") # to get the host ip ``` Output: `2496c9ab2f4a172.*.*.*` But it is giving the hostname as the Docker container id it resides in, and the host IP as its private IP address, as above. I need the hostname to be the server name. When I type these commands in a terminal on the host, I am able to get the result I want.
2017/04/20
[ "https://Stackoverflow.com/questions/43513121", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3666266/" ]
You won't be able to get the host system's name this way. To get it, you can define an environment variable, either in your Dockerfile or when running your container (`-e` option). Alternatively, you can mount your host's `/etc/hostname` file into the container, or copy it... This is an example run command I use to set the environment variable HOSTNAME to the host's hostname within the container: ``` docker run -it -e "HOSTNAME=$(cat /etc/hostname)" <image> <cmd> ``` In Python you can then run `os.environ["HOSTNAME"]` to get the hostname. As far as the IP address goes, I use this command to retrieve it from a running container: ``` route -n | awk '/UG[ \t]/{print $2}' ``` You will have to install `route` to be able to use this command; it is included in the package net-tools: `apt-get install net-tools`
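On the Python side, the value passed in this way can be read back with a couple of lines. This is a sketch: the fallback to `socket.gethostname()` (which inside a container yields the container id) is my own addition for when the variable is absent, not part of the answer above:

```python
import os
import socket

def get_host_name():
    # Prefer the HOSTNAME passed in via `docker run -e`; otherwise fall back
    # to this machine's (or container's) own hostname.
    return os.environ.get("HOSTNAME") or socket.gethostname()

print(get_host_name())
```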
An alternative might be the following: ENV: ``` NODENAME: '{{.Node.Hostname}}' ``` This will get you the Hostname of the Node, where the container is running as an environment variable (tested on Docker-Swarm / CoreOs Stable).
2,290
7,052,874
I had a custom script programmed and it is using the authors own module that is hosted on Google code in a Mercurial repo. I understand how to clone the repo but this will just stick the source into a folder on my computer. Is there a proper way to add the module into my python install to make it available for my projects? (e.g. with modules hosted on pypi you can use virtualenv and pip to install). Thanks Dave O
2011/08/13
[ "https://Stackoverflow.com/questions/7052874", "https://Stackoverflow.com", "https://Stackoverflow.com/users/893341/" ]
In exactly the same way. Just pass the address of the repo to `pip install`, using the `-e` parameter (editable VCS installs also need an `#egg=` fragment naming the package): ``` pip install -e hg+http://code.google.com/path/to/repo#egg=packagename ```
If the module isn't on pypi, clone the repository with Hg and see if there's a setup.py file. If there is, open a command prompt, cd to that directory, and run: ``` python setup.py install ```
2,293
48,601,123
Here I have a mistake for which I can't find the solution. Please excuse the quality of the code; I didn't start classes until 6 months ago. I've tried to detach the category objects with expunge, but once one is added it doesn't work. I was thinking that detaching the object with expunge would make it work, and I can't find a solution :( . I pasted as much code as I could so you could see ``` Traceback (most recent call last): File "/home/scwall/PycharmProjects/purebeurre/recovery.py", line 171, in <module> connection.connect.add(article) File "/home/scwall/PycharmProjects/purebeurre/venv/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 1776, in add self._save_or_update_state(state) File "/home/scwall/PycharmProjects/purebeurre/venv/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 1796, in _save_or_update_state self._save_or_update_impl(st_) File "/home/scwall/PycharmProjects/purebeurre/venv/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 2101, in _save_or_update_impl self._update_impl(state) File "/home/scwall/PycharmProjects/purebeurre/venv/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 2090, in _update_impl self.identity_map.add(state) File "/home/scwall/PycharmProjects/purebeurre/venv/lib/python3.6/site-packages/sqlalchemy/orm/identity.py", line 149, in add orm_util.state_str(state), state.key)) sqlalchemy.exc.InvalidRequestError: Can't attach instance <Categories at 0x7fe8d8000e48>; another instance with key (<class 'packages.databases.models.Categories'>, (26,), None) is already present in this session.
Process finished with exit code 1 class CategoriesQuery(ConnectionQuery): @classmethod def get_categories_by_tags(cls, tags_list): return cls.connection.connect.query(Categories).filter(Categories.id_category.in_(tags_list)).all() ``` --- other file: ``` def function_recovery_and_push(link_page): count_and_end_page_return_all = {} count_f = 0 total_count_f = 0 list_article = [] try: products_dic = requests.get(link_page).json() if products_dic['count']: count_f = products_dic['page_size'] if products_dic['count']: total_count_f = products_dic['count'] if not products_dic['products']: count_and_end_page_return_all['count'] = False count_and_end_page_return_all['total_count'] = False count_and_end_page_return_all['final_page'] = True else: count_and_end_page_return_all['final_page'] = False for product in products_dic["products"]: if 'nutrition_grades' in product.keys() \ and 'product_name_fr' in product.keys() \ and 'categories_tags' in product.keys() \ and 1 <= len(product['product_name_fr']) <= 100: try: list_article.append( Products(name=product['product_name_fr'], description=product['ingredients_text_fr'], nutrition_grade=product['nutrition_grades'], shop=product['stores'], link_http=product['url'], categories=CategoriesQuery.get_categories_by_tags(product['categories_tags']))) except KeyError: continue count_and_end_page_return_all['count'] = count_f count_and_end_page_return_all['total_count'] = total_count_f list_article.append(count_and_end_page_return_all) return list_article except: count_and_end_page_return_all['count'] = False count_and_end_page_return_all['total_count'] = False count_and_end_page_return_all['final_page'] = True list_article.append(count_and_end_page_return_all) return list_article p = Pool() articles_list_all_pool = p.map(function_recovery_and_push, list_page_for_pool) p.close() for articles_list_pool in articles_list_all_pool: for article in articles_list_pool: if type(article) is dict: if article['count'] != False and 
article['total_count'] != False: count += article['count'] total_count = article['total_count'] if article['final_page'] is True: final_page = article['final_page'] else: connection.connect.add(article) ``` I receive this as an error message, thank you in advance for your answers
2018/02/03
[ "https://Stackoverflow.com/questions/48601123", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8551016/" ]
This error happens when you try to add an object to a session but it is already loaded. The only line that I see you use .add function is at the end where you run: `connection.connect.add(article)` So my guess is that this Model is already loaded in the session and you don't need to add it again. You can add a try, except and rollback the operation if it throws an exception.
Unloading all objects from the session and then adding them again might help. ```py db.session.expunge_all() db.session.add(article) ```
2,294
10,732,812
I'm trying to read some numbers from a text file and convert them to a list of floats, but nothing I try seems to work right. Here's my code right now: ``` python_data = open('C:\Documents and Settings\redacted\Desktop\python_lengths.txt','r') python_lengths = [] for line in python_data: python_lengths.append(line.split()) python_lengths.sort() print python_lengths ``` It returns: ``` [['12.2'], ['26'], ['34.2'], ['5.0'], ['62'], ['62'], ['62.6']] ``` (all brackets included) But I can't convert it to a list of floats with any regular commands like: ``` python_lengths = float(python_lengths) ``` or: ``` float_lengths = [map(float, x) for x in python_lengths] ``` because it seems to be nested or something.
2012/05/24
[ "https://Stackoverflow.com/questions/10732812", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1367212/" ]
That is happening because `.split()` always returns a list of items even if there was just 1 element present. If you change your `python_lengths.append(line.split())` to `python_lengths.extend(line.split())` you will get your flat list you expected.
@eumiro's answer is correct, but here is something else that can help: ``` numbers = [] with open('C:\Documents and Settings\redacted\Desktop\python_lengths.txt','r') as f: for line in f.readlines(): numbers.extend(line.split()) numbers.sort() print numbers ```
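One caveat with both snippets: `sort()` on strings orders lexicographically, which is why `'5.0'` lands after `'34.2'` in the question's output. Converting to floats before sorting gives numeric order; a small sketch using the sample values from the question:

```python
raw = ['12.2', '26', '34.2', '5.0', '62', '62', '62.6']

# Convert once, then sort numerically instead of lexicographically.
lengths = sorted(float(x) for x in raw)
print(lengths)  # [5.0, 12.2, 26.0, 34.2, 62.0, 62.0, 62.6]
```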
2,296
41,528,941
I'm new to Python and HTML. I am trying to retrieve the number of comments from a page using requests and BeautifulSoup. In this example I am trying to get the number 226. Here is the code as I can see it when I inspect the page in Chrome: ``` <a title="Go to the comments page" class="article__comments-counts" href="http://www.theglobeandmail.com/opinion/will-kevin-oleary-be-stopped/article33519766/comments/"> <span class="civil-comment-count" data-site-id="globeandmail" data-id="33519766" data-language="en"> 226 </span> Comments </a> ``` When I request the text from the URL, I can find the code but there is no content between the span tags, no 226. Here is my code: ``` import requests, bs4 url = 'http://www.theglobeandmail.com/opinion/will-kevin-oleary-be-stopped/article33519766/' r = requests.get(url) soup = bs4.BeautifulSoup(r.text, 'html.parser') span = soup.find('span', class_='civil-comment-count') ``` It returns this, same as the above but no 226. ``` <span class="civil-comment-count" data-id="33519766" data-language="en" data-site-id="globeandmail"> </span> ``` I'm at a loss as to why the value isn't appearing. Thank you in advance for any assistance.
2017/01/08
[ "https://Stackoverflow.com/questions/41528941", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7389440/" ]
The page, and specifically the number of comments, does involve JavaScript to be loaded and shown. But, *you don't have to use Selenium*, make a request to the API behind it: ``` import requests with requests.Session() as session: session.headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36"} # visit main page base_url = 'http://www.theglobeandmail.com/opinion/will-kevin-oleary-be-stopped/article33519766/' session.get(base_url) # get the comments count url = "https://api-civilcomments.global.ssl.fastly.net/api/v1/topics/multiple_comments_count.json" params = {"publication_slug": "globeandmail", "reference_language": "en", "reference_ids": "33519766"} r = session.get(url, params=params) print(r.json()) ``` Prints: ``` {'comment_counts': {'33519766': 226}} ```
This page uses JavaScript to get the comment number; this is what the page looks like with JavaScript disabled: [![enter image description here](https://i.stack.imgur.com/V8mcE.png)](https://i.stack.imgur.com/V8mcE.png) You can find the real URL which contains the number in Chrome's Developer tools: [![enter image description here](https://i.stack.imgur.com/FqwR5.png)](https://i.stack.imgur.com/FqwR5.png) Then you can mimic the requests using @alecxe's code.
2,298
26,575,303
Hello people, I hope you can help me out with this problem: I am currently implementing an interpreter for a scripting language. The language needs a native call interface to C functions, like Java has JNI. My problem is that I want to call the original C functions without writing a wrapper function that converts the call stack of my scripting language into the C call stack. This means that I need a way to generate argument lists of C functions at runtime. Example: ``` void a(int a, int b) { printf("function a called %d", a + b); } void b(double a, int b, double c) { printf("function b called %f", a * b + c); } interpreter.registerNativeFunction("a", a); interpreter.registerNativeFunction("b", b); ``` The interpreter should be able to call the functions while only knowing the function prototypes of my scripting language: `native void a(int a, int b);` and `native void b(double a, int b, double c);` Is there any way to generate a C function call stack in C++, or do I have to use assembler for this task? Assembler is a problem, because the interpreter should run on almost any platform. Edit: The solution is to use libffi, a library which handles the call stack creation for many different platforms and operating systems. libffi is also used by some prominent language implementations like CPython and OpenJDK. Edit: @MatsPetersson Somewhere in my code I have a method like: ``` void CInterpreter::CallNativeFunction(string name, vector<IValue> arguments, IReturnReference ret) { // Call here correct native C function. // this.nativeFunctions is a map which contains the function pointers. } ``` **Edit: Thanks for all your help! I will stay with libffi, and test it on all required platforms.**
2014/10/26
[ "https://Stackoverflow.com/questions/26575303", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4180673/" ]
Yes we can. No FFI library needed, no restriction to C calls, only pure C++11. ``` #include <iostream> #include <list> #include <boost/any.hpp> template <typename T> auto fetch_back(T& t) -> typename std::remove_reference<decltype(t.back())>::type { typename std::remove_reference<decltype(t.back())>::type ret = t.back(); t.pop_back(); return ret; } template <typename X> struct any_ref_cast { X do_cast(boost::any y) { return boost::any_cast<X>(y); } }; template <typename X> struct any_ref_cast<X&> { X& do_cast(boost::any y) { std::reference_wrapper<X> ref = boost::any_cast<std::reference_wrapper<X>>(y); return ref.get(); } }; template <typename X> struct any_ref_cast<const X&> { const X& do_cast(boost::any y) { std::reference_wrapper<const X> ref = boost::any_cast<std::reference_wrapper<const X>>(y); return ref.get(); } }; template <typename Ret, typename...Arg> Ret call (Ret (*func)(Arg...), std::list<boost::any> args) { if (sizeof...(Arg) != args.size()) throw "Argument number mismatch!"; return func(any_ref_cast<Arg>().do_cast(fetch_back(args))...); } int foo(int x, double y, const std::string& z, std::string& w) { std::cout << "foo called : " << x << " " << y << " " << z << " " << w << std::endl; return 42; } ``` (Caveat: `fetch_back` pops arguments from the back, so this mapping relies on the unspecified evaluation order of function arguments; it only lines up on compilers that evaluate arguments right to left.) Test drive: ``` int main () { std::list<boost::any> args; args.push_back(1); args.push_back(4.56); const std::string yyy("abc"); std::string zzz("123"); args.push_back(std::cref(yyy)); args.push_back(std::ref(zzz)); call(foo, args); } ``` Exercise for the reader: implement `registerNativeFunction` in three easy steps. 1. Create an abstract base class with a pure `call` method that accepts a list of `boost::any`, call it `AbstractFunction` 2. Create a variadic class template that inherits `AbstractFunction` and adds a pointer to a concrete-type function (or `std::function`). Implement `call` in terms of that function. 3. Create a `map<string, AbstractFunction*>` (use smart pointers actually). 
Drawback: totally cannot call variadic C-style functions (e.g. printf and friends) with this method. There is also no support for implicit argument conversions. If you pass an `int` to a function that requires a `double`, it will throw an exception (which is slightly better than a core dump you can get with a dynamic solution). It is possible to partially solve this for a finite fixed set of conversions by specializing `any_ref_cast`.
The way to do this is to use pointers to functions: ``` void (*native)(int a, int b); ``` The problem you will face is that finding the address of the function to store in the pointer is system dependent. On Windows, you will probably be loading a DLL, finding the address of the function by name within the DLL, then storing that pointer in `native` to call the function.
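As the question's edit notes, libffi is the portable way to build such call frames at runtime. Python's own `ctypes` module is a thin layer over libffi, so the same idea can be sketched from Python; this is a POSIX-only illustration (on Windows you would load a DLL instead of libc):

```python
import ctypes
import ctypes.util

# ctypes builds the C call frame at runtime via libffi -- no hand-written
# wrapper per function is needed, just the signature.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)
libc.abs.restype = ctypes.c_int
libc.abs.argtypes = [ctypes.c_int]

print(libc.abs(-7))  # 7
```

Registering functions by name, as in the question's `registerNativeFunction`, then amounts to keeping a dict from name to such a configured function object.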
2,299
36,655,197
I have a problem running the Django server in IntelliJ / PyCharm (I tried both). There is that red cross: [![enter image description here](https://i.stack.imgur.com/ssyv5.jpg)](https://i.stack.imgur.com/ssyv5.jpg) And this is the error I get: [![Error running Django: Please select Django module](https://i.stack.imgur.com/q389E.jpg)](https://i.stack.imgur.com/q389E.jpg) I have Python 2.7.10 and Django (via pip) installed on my computer. I've tried reinstalling both Python and Django, but it didn't help. I've specified the project SDK (Python). **Edit:** This is what it looks like on the "Project Interpreter" page: [![enter image description here](https://i.stack.imgur.com/XSaW7.png)](https://i.stack.imgur.com/XSaW7.png) and the Django configuration: [![enter image description here](https://i.stack.imgur.com/05VCH.png)](https://i.stack.imgur.com/05VCH.png)
2016/04/15
[ "https://Stackoverflow.com/questions/36655197", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3671716/" ]
If your IntelliJ is up to date, there is another solution. I had the exact same problem in **IntelliJ 2017.2** and it was driving me crazy until I read this [post from a IntelliJ maintainer](https://intellij-support.jetbrains.com/hc/en-us/community/posts/206936385-Intellij-Doesn-t-Recognize-Django-project). If you use IntelliJ Idea and "Load an existing project", it will model it as a Java project with a Python modules attached. You cannot get Django loaded, no matter what you do. I handled this by purging the `.idea` directory, and **created a new Django project**, with the pre-existing Django directory as the base directory in IntelliJ. I can now see Django in the project structure > project settings > module part of Intellij, and I can select the django settings file. Step by step in pictures ------------------------ 1. Delete `.idea` folder 2. Create new project [![Create new project](https://i.stack.imgur.com/KDONQ.png)](https://i.stack.imgur.com/KDONQ.png) 3. Select Python > Django [![Select Python > Django](https://i.stack.imgur.com/Uat8o.png)](https://i.stack.imgur.com/Uat8o.png) 4. Hit next [![Hit next](https://i.stack.imgur.com/FKB8L.png)](https://i.stack.imgur.com/FKB8L.png) 5. Select existing django project path (or start from scratch with a new folder) [![Enter existing project path](https://i.stack.imgur.com/PhYIg.png)](https://i.stack.imgur.com/PhYIg.png) 6. Add `DJANGO_SETTINGS_MODULE=yourprojectname.settings` to your run configuration (can be found in `yourprojectname/wsgi.py` file). [![Edit run configuration](https://i.stack.imgur.com/JF0jT.png)](https://i.stack.imgur.com/JF0jT.png) [![Add env variable](https://i.stack.imgur.com/hOuFT.png)](https://i.stack.imgur.com/hOuFT.png) Enjoy your Django development
Try adding `DJANGO_SETTINGS_MODULE=untitled.settings` to the environment variables listed in the configuration menu by clicking the dropdown titled 'Django' in your first photo.
2,302
63,412,757
I am training a variational autoencoder, using pytorch-lightning. My pytorch-lightning code works with a Weights and Biases logger. I am trying to do a parameter sweep using a W&B parameter sweep. The hyperparameter search procedure is based on what I followed from [this repo.](https://github.com/borisdayma/lightning-kitti) The runs initialise correctly, but when my training script is run with the first set of hyperparameters, i get the following error: ``` 2020-08-14 14:09:07,109 - wandb.wandb_agent - INFO - About to run command: /usr/bin/env python train_sweep.py --LR=0.02537477586974176 Traceback (most recent call last): File "train_sweep.py", line 1, in <module> import yaml ImportError: No module named yaml ``` `yaml` is installed and is working correctly. I can train the network by setting the parameters manually, but not with the parameter sweep. Here is my sweep script to train the VAE: ``` import yaml import numpy as np import ipdb import torch from vae_experiment import VAEXperiment import torch.backends.cudnn as cudnn from pytorch_lightning import Trainer from pytorch_lightning.loggers import WandbLogger from pytorch_lightning.callbacks import EarlyStopping from vae_network import VanillaVAE import os import wandb from utils import get_config, log_to_wandb # Sweep parameters hyperparameter_defaults = dict( root='data_semantics', gpus=1, batch_size = 2, lr = 1e-3, num_layers = 5, features_start = 64, bilinear = False, grad_batches = 1, epochs = 20 ) wandb.init(config=hyperparameter_defaults) config = wandb.config def main(hparams): model = VanillaVAE(hparams['exp_params']['img_size'], **hparams['model_params']) model.build_layers() experiment = VAEXperiment(model, hparams['exp_params'], hparams['parameters']) logger = WandbLogger( project='vae', name=config['logging_params']['name'], version=config['logging_params']['version'], save_dir=config['logging_params']['save_dir'] ) wandb_logger.watch(model.net) early_stopping = EarlyStopping( monitor='val_loss', 
min_delta=0.00, patience=3, verbose=False, mode='min' ) runner = Trainer(weights_save_path="../../Logs/", min_epochs=1, logger=logger, log_save_interval=10, train_percent_check=1., val_percent_check=1., num_sanity_val_steps=5, early_stop_callback = early_stopping, **config['trainer_params'] ) runner.fit(experiment) if __name__ == '__main__': main(config) ``` Why am I getting this error?
2020/08/14
[ "https://Stackoverflow.com/questions/63412757", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10290585/" ]
The problem is that the structure of my code and the way that I was running the wandb commands were not in the correct order. Looking at [this pytorch-lightning example](https://github.com/AyushExel/COVID19WB/blob/master/main.ipynb) that uses `wandb` shows the correct structure to follow. Here is my refactored code: ``` #!/usr/bin/env python import wandb from utils import get_config #--------------------------------------------------------------------------------------------- def main(): """ The training function used in each sweep of the model. For every sweep, this function will be executed as if it is a script on its own. """ import wandb import yaml import numpy as np import torch from vae_experiment import VAEXperiment import torch.backends.cudnn as cudnn from pytorch_lightning import Trainer from pytorch_lightning.loggers import WandbLogger from pytorch_lightning.callbacks import EarlyStopping from vae_network import VanillaVAE import os from utils import log_to_wandb, format_config path_to_config = 'sweep.yaml' config = get_config(path_to_yaml) path_to_defaults = 'defaults.yaml' param_defaults = get_config(path_to_defaults) wandb.init(config=param_defaults) config = format_config(config, wandb.config) model = VanillaVAE(config['meta']['img_size'], hidden_dims = config['hidden_dims'], latent_dim = config['latent_dim']) model.build_layers() experiment = VAEXperiment(model, config) early_stopping = EarlyStopping( monitor='val_loss', min_delta=0.00, patience=3, verbose=False, mode='max' ) runner = Trainer(weights_save_path=config['meta']['save_dir'], min_epochs=1, train_percent_check=1., val_percent_check=1., num_sanity_val_steps=5, early_stop_callback = early_stopping, **config['trainer_params']) runner.fit(experiment) log_to_wandb(config, runner, experiment, path_to_config) #--------------------------------------------------------------------------------------------- path_to_yaml = 'sweep.yaml' sweep_config = get_config(path_to_yaml) sweep_id = wandb.sweep(sweep_config) 
wandb.agent(sweep_id, function=main) #--------------------------------------------------------------------------------------------- ```
Do you launch python in your shell by typing `python` or `python3`? Your script could be calling python 2 instead of python 3. If this is the case, you can explicitly tell wandb to use python 3. See [this section of documentation](https://docs.wandb.com/sweeps/faq#sweep-with-custom-commands), in particular "Running Sweeps with Python 3".
2,305
44,737,199
I've written a script to select a certain field from a webpage using Python with Selenium. There is a dropdown on that page from which I want to select "All". However, I tried many different ways in my script but could not make it work. Here is how the dropdown looks: [![enter image description here](https://i.stack.imgur.com/WeO1N.jpg)](https://i.stack.imgur.com/WeO1N.jpg) HTML elements for the dropdown selection: ``` <select name="ctl00$body$MedicineSummaryControl$cmbPageSelection" onchange="javascript:setTimeout('__doPostBack(\'ctl00$body$MedicineSummaryControl$cmbPageSelection\',\'\')', 0)" id="ctl00_body_MedicineSummaryControl_cmbPageSelection"> <option selected="selected" value="25">25</option> <option value="50">50</option> <option value="100">100</option> <option value="all">All</option> </select> ``` Scripts I've tried with: ``` from selenium import webdriver import time driver = webdriver.Chrome() driver.get('http://apps.tga.gov.au/Prod/devices/daen-entry.aspx') driver.find_element_by_id('disclaimer-accept').click() time.sleep(5) driver.find_element_by_id('medicine-name').send_keys('pump') time.sleep(8) driver.find_element_by_id('medicines-header-text').click() driver.find_element_by_id('submit-button').click() time.sleep(7) #selection for the dropdown should start from here driver.find_element_by_xpath('//select[@id="ctl00_body_MedicineSummaryControl_cmbPageSelection"]').click() driver.find_element_by_xpath('//select//option[@value]').send_keys("All") ```
2017/06/24
[ "https://Stackoverflow.com/questions/44737199", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9189799/" ]
This will work for you: ``` #option1 select_obj = Select(driver.find_element_by_xpath('//select[@id="ctl00_body_MedicineSummaryControl_cmbPageSelection"]')) select_obj.select_by_visible_text('All') #option2 select_obj = Select(driver.find_element_by_id('ctl00_body_MedicineSummaryControl_cmbPageSelection')) select_obj.select_by_visible_text('All') ``` And don't forget to import `Select` with `from selenium.webdriver.support.ui import Select` You can read full documentation to find all `Select` methods here : <https://seleniumhq.github.io/selenium/docs/api/py/webdriver_support/selenium.webdriver.support.select.html>
I initially thought of suggesting that you try to tab from an element that is before the dropdown select, similar to the concept in this code: ``` driver.find_element_by_id('<id of element before the dropdown select>').send_keys(Keys.TAB) driver.find_element_by_id('//select[@id="ctl00_body_MedicineSummaryControl_cmbPageSelection"]').send_keys('AL') driver.find_element_by_id('//select[@id="ctl00_body_MedicineSummaryControl_cmbPageSelection"]').send_keys(Keys.ENTER + Keys.TAB) ``` However, errors from the stack trace may show you that the dropdown select is not being found with that id. I believe that you should verify that the id that you are using is the correct id for that element, when you record the action of clicking on the dropdown select and selecting an option, by using the Record option in the Selenium IDE.
2,306
2,361,328
I generally make my desktop interfaces with [Qt](http://www.pyside.org/), but some recent TK screenshots convince me Tk isn't just ugly motif any more. Additionally [Tkinter](http://docs.python.org/library/tkinter.html) comes bundled with Python, which makes distribution easier. So is it worth learning or should I stick with Qt? [![example tk interface](https://i.stack.imgur.com/yN8ez.gif)](https://i.stack.imgur.com/yN8ez.gif) (source: [kb-creative.net](http://www.kb-creative.net/screenshots/ipats-multifile.gif))
2010/03/02
[ "https://Stackoverflow.com/questions/2361328", "https://Stackoverflow.com", "https://Stackoverflow.com/users/105066/" ]
The answer to your question is a resounding **yes**. Qt is good, I have nothing against it. But Tk is better and far easier to use and quite [well documented](http://wiki.python.org/moin/TkInter) - not just on the Python webspace, but there are also many third-party tutorials out there. [This](http://www.pythonware.com/library/tkinter/introduction/) particular one is where I learned it from and it has been quite infallible in serving me.
As a step up to other GUI toolkits, sure. If you know other toolkits then you already understand TkInter and can leave it until you actually need it.
2,309
60,144,779
My formatting is terrible. Screenshot is here: [![enter image description here](https://i.stack.imgur.com/KrTnL.png)](https://i.stack.imgur.com/KrTnL.png) ```py n = int(input("enter the number of Fibonacci sequence you want. ") n1 = 0 n2 = 1 count = 0 if n <= 0: print("please enter a postive integer") elif n == 1: print("Fibonacci sequence:") print(n1) else: while count < n: print(n1) nth = n1 + n2 n1 = n2 n2 = nth count = count + 1 ``` I cannot figure out why do I get this error: ``` File "<ipython-input-68-9c2ad055a726>", line 3 n1 = 0 ^ SyntaxError: invalid syntax ```
2020/02/10
[ "https://Stackoverflow.com/questions/60144779", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12870387/" ]
A `)` is missing in the first line, I guess; that's the issue.
When such an error arises, also check the preceding line. There is a very high chance the error is in the preceding line, as in this case. There's a `)` missing in the input line: you closed one `)` for the `input()` function, but did not close the one for the `int` constructor.
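Putting the fix together: a corrected, runnable version of the snippet (with `n` hard-coded here instead of read from `input()`, so it runs non-interactively; the only real change is the added `)` closing the `int(...)` call):

```python
# Original first line, fixed: the int( ... ) call is now properly closed.
# n = int(input("enter the number of Fibonacci sequence you want. "))
n = 7  # hard-coded stand-in for the input() call above

n1 = 0
n2 = 1
count = 0
result = []
if n <= 0:
    print("please enter a positive integer")
elif n == 1:
    result.append(n1)
else:
    while count < n:
        result.append(n1)
        nth = n1 + n2
        n1 = n2
        n2 = nth
        count = count + 1
print(result)  # [0, 1, 1, 2, 3, 5, 8]
```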
2,316
70,709,117
I'm using this code to open Edge with the default profile settings: ``` from msedge.selenium_tools import Edge, EdgeOptions edge_options = EdgeOptions() edge_options.use_chromium = True edge_options.add_argument("user-data-dir=C:\\Users\\PopA2\\AppData\\Local\\Microsoft\\Edge\\User Data\\Default") edge_options.add_argument("profile-directory=Profile 1") edge_options.binary_location = r"C:\\Users\\PopA2\\Downloads\\edgedriver_win64 (1)\\msedgedriver.exe" driver = Edge(options = edge_options, executable_path = "C:\\Users\\PopA2\\Downloads\\edgedriver_win64 (1)\\msedgedriver.exe") driver.get('https://google.com') driver.quit() ``` but I am getting this error: > > PS C:\Users\PopA2> & "C:/Program Files/Python37/python.exe" > "c:/Users/PopA2/OneDrive/Desktop/test de pe net.py" Traceback (most > recent call last): File "c:/Users/PopA2/OneDrive > Group/Desktop/test de pe net.py", line 13, in > driver = Edge(options = edge\_options, executable\_path = "C:\Users\PopA2\Downloads\edgedriver\_win64 (1)\msedgedriver.exe") > File "C:\Program > Files\Python37\lib\site-packages\msedge\selenium\_tools\webdriver.py", > line 108, in **init** > desired\_capabilities=desired\_capabilities) File "C:\Program Files\Python37\lib\site-packages\selenium\webdriver\remote\webdriver.py", > line 157, in **init** > self.start\_session(capabilities, browser\_profile) File "C:\Program > Files\Python37\lib\site-packages\selenium\webdriver\remote\webdriver.py", > line 252, in start\_session > response = self.execute(Command.NEW\_SESSION, parameters) File "C:\Program > Files\Python37\lib\site-packages\selenium\webdriver\remote\webdriver.py", > line 321, in execute > self.error\_handler.check\_response(response) File "C:\Program Files\Python37\lib\site-packages\selenium\webdriver\remote\errorhandler.py", > line 242, in check\_response > raise exception\_class(message, screen, stacktrace) selenium.common.exceptions.WebDriverException: Message: unknown error: > MSEdge failed to start: was killed. 
(unknown error: > DevToolsActivePort file doesn't exist) (The process started from > msedge location C:\Users\PopA2\Downloads\edgedriver\_win64 > (1)\msedgedriver.exe is no longer running, so MSEdgeDriver is > assuming that MSEdge has crashed.) > > >
2022/01/14
[ "https://Stackoverflow.com/questions/70709117", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17603014/" ]
there is an issue in your style code.if you remove it than works smoothly ```html <!DOCTYPE html> <html> <head> <title>Page Title</title> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/slick-carousel/1.8.1/slick.min.css"> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/slick-carousel/1.8.1/slick-theme.css"> <!-- Bootstrap core CSS --> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.3.1/css/bootstrap.min.css"> </head> <body> <div class="carousel-equal-heights"> <!--Responsive Slider--> <div class="row"> <div class="col-md-12"> <div class="responsive-slider"> <div class="col-md-4"> <div class="card mb-2 shadow-none h-100" style="border: 1px solid #ededed;"> <img class="card-img-top" src="https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(18).jpg" alt="Card image cap"> <div class="card-body d-flex flex-column"> <h4 class="card-title">Test 1</h4> <p class="card-text">This is a txt for testing </p> <a href="#" class="btn btn-primary mt-auto align-self-start ml-0">Details</a> </div> </div> </div> <div class="col-md-4" id="2"> <div class="card mb-2 shadow-none h-100" style="border: 1px solid #ededed;"> <img class="card-img-top" src="https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(18).jpg" alt="#"> <div class="card-body d-flex flex-column"> <h4 class="card-title">Test 2</h4> <p class="card-text">This is a txt for testing </p> <a href="h#" class="btn btn-primary mt-auto align-self-start ml-0">Details</a> </div> </div> </div> <div class="col-md-4" id="3"> <div class="card mb-2 shadow-none h-100" style="border: 1px solid #ededed;"> <img class="card-img-top" src="https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(18).jpg" alt="Card image cap"> <div class="card-body d-flex flex-column"> <h4 class="card-title">Test 3</h4> <p class="card-text">This is a txt for testing</p> <a href="#" class="btn btn-primary mt-auto align-self-start ml-0">Details</a> 
</div> </div> </div> <div class="col-md-4"> <div class="card mb-2 shadow-none h-100" style="border: 1px solid #ededed;"> <img class="card-img-top" src="https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(18).jpg" alt="Card image cap"> <div class="card-body d-flex flex-column"> <h4 class="card-title">Test 4</h4> <p class="card-text">This is a txt for testing</p> <a href="#" class="btn btn-primary mt-auto align-self-start ml-0">Details</a> </div> </div> </div> <div class="col-md-4"> <div class="card mb-2 shadow-none h-100" style="border: 1px solid #ededed;"> <img class="card-img-top" src="https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(18).jpg" alt="Card image cap"> <div class="card-body d-flex flex-column"> <h4 class="card-title">Test 5</h4> <p class="card-text">This is a txt for testing</p> <a href="#" class="btn btn-primary mt-auto align-self-start ml-0">Details</a> </div> </div> </div> <div class="col-md-4"> <div class="card mb-2 shadow-none h-100" style="border: 1px solid #ededed;"> <img class="card-img-top" src="https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(18).jpg" alt="Card image cap"> <div class="card-body d-flex flex-column"> <h4 class="card-title">Test 6</h4> <p class="card-text">This is a txt for testing </p> <a href="#" class="btn btn-primary mt-auto align-self-start ml-0">Details</a> </div> </div> </div> <div class="col-md-4"> <div class="card mb-2 shadow-none h-100" style="border: 1px solid #ededed;"> <img class="card-img-top" src="https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(18).jpg" alt="Card image cap"> <div class="card-body d-flex flex-column"> <h4 class="card-title">Test 7</h4> <p class="card-text">This is a txt for testing</p> <a href="#" class="btn btn-primary mt-auto align-self-start ml-0">Details</a> </div> </div> </div> <div class="col-md-4"> <div class="card mb-2 shadow-none h-100" style="border: 1px solid #ededed;"> <img class="card-img-top" 
src="https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(18).jpg" alt="Card image cap"> <div class="card-body d-flex flex-column"> <h4 class="card-title">Test 8</h4> <p class="card-text">This is a txt for testing</p> <a href="#" class="btn btn-primary mt-auto align-self-start ml-0">Details</a> </div> </div> </div> <div class="col-md-4"> <div class="card mb-2 shadow-none h-100" style="border: 1px solid #ededed;"> <img class="card-img-top" src="https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(18).jpg" alt="Card image cap"> <div class="card-body d-flex flex-column"> <h4 class="card-title">Test 9</h4> <p class="card-text">This is a txt for testing </p> <a href="#" class="btn btn-primary mt-auto align-self-start ml-0">Details</a> </div> </div> </div> </div> </div> </div> </div> <!--End Of Container--> </body> <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/slick-carousel/1.8.1/slick.min.js"></script> <script> $(document).ready(function () { //Responsive slider $('.responsive-slider').slick({ dots: true, arrows: false, infinite: true, slidesToShow: 3, slidesToScroll: 3, autoplay: true, autoplaySpeed: 2000, //DELAY BEFORE NEXT SLIDE IN MILISECONDS speed: 800, responsive: [ { breakpoint: 768, settings: { slidesToShow: 2, slidesToScroll: 1, } }, { breakpoint: 480, settings: { slidesToShow: 1, slidesToScroll: 1 } } ] }); }); </script> <script type="text/javascript"> $(document).ready(function () { let $carouselItems = $('.carousel-equal-heights').find('.col-md-4'); updateItemsHeight(); $(window).resize(updateItemsHeight); function updateItemsHeight() { // remove old value $carouselItems.height('auto'); // calculate new one let maxHeight = 0; $carouselItems.each(function () { maxHeight = Math.max(maxHeight, $(this).outerHeight()); }); // set new value $carouselItems.each(function () { $(this).outerHeight(maxHeight); }); // debug it console.log('new 
items height', maxHeight); } }); </script> </body> </html> ```
for navigation design add these style to your code ```html <!DOCTYPE html> <html> <head> <title>Page Title</title> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/slick-carousel/1.8.1/slick.min.css"> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/slick-carousel/1.8.1/slick-theme.css"> <!-- Bootstrap core CSS --> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.3.1/css/bootstrap.min.css"> <style> .row{ margin-left:0px; margin-right:0px; } .responsive-slider { padding:1em 0; } .slick-prev{ left:0px; } .slick-next{ right:0px; } .slick-prev, .slick-next{ font-size:0; top: 35%; z-index: 1; } .slick-prev:before, .slick-next:before{ color: #104975; font-size: 32px; opacity: 9; } .slick-dots li button:before{ font-size: 15px; opacity: 9; color: #0d4775; } </style> </head> <body> <div class="carousel-equal-heights"> <!--Responsive Slider--> <div class="row"> <div class="col-md-12"> <div class="responsive-slider"> <div class="col-md-4"> <div class="card mb-2 shadow-none h-100" style="border: 1px solid #ededed;"> <img class="card-img-top" src="https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(18).jpg" alt="Card image cap"> <div class="card-body d-flex flex-column"> <h4 class="card-title">Test 1</h4> <p class="card-text">This is a txt for testing </p> <a href="#" class="btn btn-primary mt-auto align-self-start ml-0">Details</a> </div> </div> </div> <div class="col-md-4" id="2"> <div class="card mb-2 shadow-none h-100" style="border: 1px solid #ededed;"> <img class="card-img-top" src="https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(18).jpg" alt="#"> <div class="card-body d-flex flex-column"> <h4 class="card-title">Test 2</h4> <p class="card-text">This is a txt for testing </p> <a href="h#" class="btn btn-primary mt-auto align-self-start ml-0">Details</a> </div> </div> </div> <div class="col-md-4" id="3"> <div class="card mb-2 shadow-none h-100" style="border: 1px 
solid #ededed;"> <img class="card-img-top" src="https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(18).jpg" alt="Card image cap"> <div class="card-body d-flex flex-column"> <h4 class="card-title">Test 3</h4> <p class="card-text">This is a txt for testing</p> <a href="#" class="btn btn-primary mt-auto align-self-start ml-0">Details</a> </div> </div> </div> <div class="col-md-4"> <div class="card mb-2 shadow-none h-100" style="border: 1px solid #ededed;"> <img class="card-img-top" src="https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(18).jpg" alt="Card image cap"> <div class="card-body d-flex flex-column"> <h4 class="card-title">Test 4</h4> <p class="card-text">This is a txt for testing</p> <a href="#" class="btn btn-primary mt-auto align-self-start ml-0">Details</a> </div> </div> </div> <div class="col-md-4"> <div class="card mb-2 shadow-none h-100" style="border: 1px solid #ededed;"> <img class="card-img-top" src="https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(18).jpg" alt="Card image cap"> <div class="card-body d-flex flex-column"> <h4 class="card-title">Test 5</h4> <p class="card-text">This is a txt for testing</p> <a href="#" class="btn btn-primary mt-auto align-self-start ml-0">Details</a> </div> </div> </div> <div class="col-md-4"> <div class="card mb-2 shadow-none h-100" style="border: 1px solid #ededed;"> <img class="card-img-top" src="https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(18).jpg" alt="Card image cap"> <div class="card-body d-flex flex-column"> <h4 class="card-title">Test 6</h4> <p class="card-text">This is a txt for testing </p> <a href="#" class="btn btn-primary mt-auto align-self-start ml-0">Details</a> </div> </div> </div> <div class="col-md-4"> <div class="card mb-2 shadow-none h-100" style="border: 1px solid #ededed;"> <img class="card-img-top" src="https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(18).jpg" alt="Card image cap"> <div class="card-body d-flex 
flex-column"> <h4 class="card-title">Test 7</h4> <p class="card-text">This is a txt for testing</p> <a href="#" class="btn btn-primary mt-auto align-self-start ml-0">Details</a> </div> </div> </div> <div class="col-md-4"> <div class="card mb-2 shadow-none h-100" style="border: 1px solid #ededed;"> <img class="card-img-top" src="https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(18).jpg" alt="Card image cap"> <div class="card-body d-flex flex-column"> <h4 class="card-title">Test 8</h4> <p class="card-text">This is a txt for testing</p> <a href="#" class="btn btn-primary mt-auto align-self-start ml-0">Details</a> </div> </div> </div> <div class="col-md-4"> <div class="card mb-2 shadow-none h-100" style="border: 1px solid #ededed;"> <img class="card-img-top" src="https://mdbootstrap.com/img/Photos/Horizontal/Nature/4-col/img%20(18).jpg" alt="Card image cap"> <div class="card-body d-flex flex-column"> <h4 class="card-title">Test 9</h4> <p class="card-text">This is a txt for testing </p> <a href="#" class="btn btn-primary mt-auto align-self-start ml-0">Details</a> </div> </div> </div> </div> </div> </div> </div> <!--End Of Container--> </body> <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/slick-carousel/1.8.1/slick.min.js"></script> <script> $(document).ready(function () { //Responsive slider $('.responsive-slider').slick({ dots: true, arrows: true, infinite: true, slidesToShow: 3, slidesToScroll: 3, autoplay: true, autoplaySpeed: 2000, //DELAY BEFORE NEXT SLIDE IN MILISECONDS speed: 800, responsive: [ { breakpoint: 768, settings: { slidesToShow: 2, slidesToScroll: 1, } }, { breakpoint: 480, settings: { slidesToShow: 1, slidesToScroll: 1 } } ] }); }); </script> <script type="text/javascript"> $(document).ready(function () { let $carouselItems = $('.carousel-equal-heights').find('.col-md-4'); updateItemsHeight(); $(window).resize(updateItemsHeight); function 
updateItemsHeight() { // remove old value $carouselItems.height('auto'); // calculate new one let maxHeight = 0; $carouselItems.each(function () { maxHeight = Math.max(maxHeight, $(this).outerHeight()); }); // set new value $carouselItems.each(function () { $(this).outerHeight(maxHeight); }); // debug it console.log('new items height', maxHeight); } }); </script> </body> </html> ```
2,317
30,078,967
I want to create a new form view associated with a new data model. I created a new menu item "menu1" that has a submenu "menus", and now I want to customize the action view. This is my code: **My xml file:** **My data model:** ```python from openerp.osv import fields, osv class hr_cutomization(osv.osv): _inherit = "hr.employee" _columns = { 'new_field_ID': fields.char('new filed ID',size=11) } _default={ 'new_field_ID':0 } hr_cutomization() class hr_newmodel(osv.osv): _name = "hr.newmodel" _columns = { 'field1': fields.char('new filed1',size=11), 'field2': fields.char('new filed2',size=11) } _default={ 'field1':0 } hr_newmodel() ``` When I update my module, I get this error: > > ParseError: "ValidateError > Field(s) `arch` failed against a constraint: Invalid view definition > Error details: > Element ' > > > What am I doing wrong in my code?
2015/05/06
[ "https://Stackoverflow.com/questions/30078967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4018649/" ]
Managed to sort it out with the following. ``` Add-WebConfigurationProperty //system.webServer/httpProtocol/customHeaders "IIS:\sites\test.test1.com" -AtIndex 0 -Name collection -Value @{name='Access-Control-Allow-Origin';value='*'} Add-WebConfigurationProperty //system.webServer/httpProtocol/customHeaders "IIS:\sites\test.test1.com" -AtIndex 0 -Name collection -Value @{name='Access-Control-Allow-Headers';value='Content-Type'} Add-WebConfigurationProperty //system.webServer/httpProtocol/customHeaders "IIS:\sites\test.test1.com" -AtIndex 0 -Name collection -Value @{name='Access-Control-Allow-Methods';value='GET, OPTIONS'} ``` Hope this helps someone in the future; I wasted a lot of time on this.
I think your XPath expression doesn't match the node you're trying to manipulate. Try this: ``` Add-WebConfigurationProperty -PSPath $sitePath ` -Filter 'system.webServer/httpProtocol/customHeaders/add[@name="Access-Control-Allow-Origin"]' ` -Name 'value' -Value '*' -Force ```
2,322
38,390,242
I work with python-pandas dataframes, and I have a large dataframe containing users and their data. Each user can have multiple rows. I want to sample one row per user. My current solution seems inefficient: ``` df1 = pd.DataFrame({'User': ['user1', 'user1', 'user2', 'user3', 'user2', 'user3'], 'B': ['B', 'B1', 'B2', 'B3','B4','B5'], 'C': ['C', 'C1', 'C2', 'C3','C4','C5'], 'D': ['D', 'D1', 'D2', 'D3','D4','D5'], 'E': ['E', 'E1', 'E2', 'E3','E4','E5']}, index=[0, 1, 2, 3,4,5]) df1 >> B C D E User 0 B C D E user1 1 B1 C1 D1 E1 user1 2 B2 C2 D2 E2 user2 3 B3 C3 D3 E3 user3 4 B4 C4 D4 E4 user2 5 B5 C5 D5 E5 user3 userList = list(df1.User.unique()) userList > ['user1', 'user2', 'user3'] ``` Then I loop over the unique users list and sample one row per user, saving the samples to a different dataframe ``` usersSample = pd.DataFrame() # empty dataframe, to save samples for i in userList: usersSample=usersSample.append(df1[df1.User == i].sample(1)) > usersSample B C D E User 0 B C D E user1 4 B4 C4 D4 E4 user2 3 B3 C3 D3 E3 user3 ``` Is there a more efficient way of achieving that? I'd really like to: 1) avoid appending to the dataframe usersSample (this gradually growing object seriously kills performance), and 2) avoid looping over users one at a time. Is there a way to sample one row per user more efficiently?
2016/07/15
[ "https://Stackoverflow.com/questions/38390242", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4358785/" ]
This is what you want: ``` df1.groupby('User').apply(lambda df: df.sample(1)) ``` [![enter image description here](https://i.stack.imgur.com/C1B60.png)](https://i.stack.imgur.com/C1B60.png) Without the extra index: ``` df1.groupby('User', group_keys=False).apply(lambda df: df.sample(1)) ``` [![enter image description here](https://i.stack.imgur.com/cLAWS.png)](https://i.stack.imgur.com/cLAWS.png)
``` df1_user_sample_one = df1.groupby('User').apply(lambda x:x.sample(1)) ``` Using `DataFrame.groupby.apply` with a lambda to sample one row per group.
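On newer pandas versions (1.1 and later), `GroupBy.sample` does this directly, with no `apply` and no Python-level loop; a minimal sketch reproducing the question's frame:

```python
import pandas as pd

df1 = pd.DataFrame({'User': ['user1', 'user1', 'user2', 'user3', 'user2', 'user3'],
                    'B': ['B', 'B1', 'B2', 'B3', 'B4', 'B5']})

# One random row per user, no growing DataFrame and no explicit loop
sample = df1.groupby('User').sample(n=1)
print(sample)

# Every user appears exactly once in the result
assert sorted(sample['User']) == ['user1', 'user2', 'user3']
```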
2,323
54,727,804
I have a generator function which reads lines from a file and parses them to objects. The files are far too large to consider reading the entire file into a list, which is why I've used a generator and not a list. I'm concerned because, when calling the generator, my code will sometimes break out early: if it finds what it is looking for, it can choose to stop before reading every object from the file. I don't really understand what happens to the abandoned generator, or, more importantly, I don't know what happens to the open file handle. I want to avoid resource leaks here. --- Example code: ``` def read_massive_file(file_path): with open(file=file_path, mode='r', encoding='utf-8') as source_file: for line in source_file: yield parse_entry(line) for entry in read_massive_file(my_file): if is_the_entry_i_need(entry): break else: # not found pass ``` --- My question is: will the above code leave my source file open, or will Python find a way to close it? Does the fact that I consume it from a `for` loop change anything? If I manually obtained an iterator for `read_massive_file()` and called `next()` a few times before abandoning the iterator, would I see the same result?
2019/02/16
[ "https://Stackoverflow.com/questions/54727804", "https://Stackoverflow.com", "https://Stackoverflow.com/users/453851/" ]
This only releases resources promptly on CPython. To really be careful about resource release in this situation, you'd have to do something like ``` with contextlib.closing(read_massive_file(my_file)) as gen: for entry in gen: ... ``` but I've never seen anyone do it. --- When a generator is discarded without fully exhausting it, the generator's `__del__` method will throw a `GeneratorExit` exception into the generator, to trigger `__exit__` methods and `finally` blocks. On CPython, this happens as soon as the loop breaks and the only reference to the generator is discarded, but on other implementations, like PyPy, it may only happen when a GC cycle runs, or not at all if the GC doesn't run before the end of the program. The `GeneratorExit` will trigger file closure in your case. It's possible to accidentally catch the `GeneratorExit` and keep going, in which case proper cleanup may not trigger, but your code doesn't do that.
You never save the return value of `read_massive_file`; the only reference is held internally by the code generated by the `for` loop. As soon as that loop completes, the generator should be garbage collected. It would be different if you had written ``` foo = read_massive_file(my_file) for entry in foo: ... else: ... ``` Now you would have to wait until `foo` went out of scope (or you explicitly called `del foo`) before the generator could be collected.
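A small self-contained sketch (using an in-memory list instead of a real file) showing that, on CPython, abandoning a generator triggers its cleanup promptly:

```python
closed = []

def reader(lines):
    try:
        for line in lines:
            yield line.upper()
    finally:
        # Runs when the generator is exhausted OR discarded early
        # (CPython throws GeneratorExit into it on collection).
        closed.append(True)

gen = reader(["a", "b", "c"])
first = next(gen)      # consume one item, then abandon the generator
del gen                # on CPython this triggers the finally block immediately
print(first, closed)   # A [True]
```

In the original code, the `finally` equivalent is the `with open(...)` block, so the file handle is released the same way. On other implementations this cleanup may be deferred until a GC cycle, which is exactly why `contextlib.closing` is the careful option.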
2,329
20,739,353
Recently I found the plot.ly site and am trying to use it, but when I use the Perl API I can't get it to work. My steps are as follows: 1. I signed up to plot.ly with a Google account 2. Installed the Perl module (WebService::Plotly) 3. Typed in the basic example ("<https://plot.ly/api/perl/docs/line-scatter>") ..skip.. ``` use WebService::Plotly; use v5.10; use utf8; my $user = "MYID"; my $key = "MYKEY"; my $py= WebService::Plotly->new( un => $user, key => $key ); say __LINE__; # first say my $x0 = [1,2,3,4]; my $y0 = [10,15,13,17]; my $x1 = [2,3,4,5]; my $y1 = [16,5,11,9]; my $response = $py->plot($x0, $y0, $x1, $y1); say __LINE__ ; # second say ``` ..skip... Then I executed the example Perl code. In this step, `$py->plot` always returned "HTTP::Response=HASH(0x7fd1a4236918)" and the second say was not executed (I used Perl versions 5.16.2 and 5.19.1; the OS is MacOS X). On the other hand, the python example ("<https://plot.ly/api/python/docs/line-scatter>") always succeeds. Please let me know what's going wrong. Thanks a lot!
2013/12/23
[ "https://Stackoverflow.com/questions/20739353", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3128831/" ]
Please check the things below; I collected them from a few different links: 1. Make sure that PHP is installed. This sounds silly, but you never know. 2. Make sure that the PHP module is listed and uncommented inside your Apache's httpd.conf. This should be something like `LoadModule php5_module "c:/php/php5apache2_2.dll"` in the file. Search for `LoadModule php`, and make sure that there is no comment (`;`) in front of it. 3. Make sure that the httpd.conf file has the PHP MIME type in it. This should be something like `AddType application/x-httpd-php .php`. This tells Apache to run .php files as PHP. Search for `AddType`, and then make sure there is an entry for PHP, and that it is uncommented. 4. Make sure your file has the .php extension on it, otherwise it will not be executed as PHP. 5. Make sure you are not using short tags in the PHP file (`<?`); these are deprecated, and usually disabled. Use `<?php` instead. 6. Actually run your file over said webserver using a URL like `http://localhost/file.php`, not via local file access like `file://localhost/www/file.php`. Or check <http://php.net/install>. Thanks
I had the same problem with Debian 10 (buster) and PHP 7.3.19.1 and apache2 version 2.4.38 and phpmyadmin 5.02. The file `usr/share/phpmyadmin/index.php` was not interpreted. After verifying all the manual installation I ran the following commands: ``` apt-get update apt-get install libapache2-mod-php7.3 systemctl restart apache2 ``` and finally it worked. The module PHP for apache2 was not available.
2,330
55,619,345
I am making a card game in Python. I used the code for a stack class that I found online: ``` class Stack: def __init__(self): self.items = [] def isEmpty(self): return self.items == [] def push(self, item): self.items.insert(0,item) def pop(self): return self.items.pop(0) def peek(self): return self.items[0] ``` When I run this it's all fine; however, when I try to call any of the behaviours, my program asks me to pass in a value for self, as if it were a parameter. I feel like I'm going crazy... When this code is run: ``` Cards = [] Cards = Stack() Cards = Stack.push(15) Cards = Stack.peek() Cards = Stack.pop() ``` When the 3rd line is run this error is displayed: ``` TypeError: push() missing 1 required positional argument: 'item' ``` When I pass in the value of None like this ``` Cards = Stack.push(None,15) ``` I am left with another error: ``` self.items.insert(0,item) AttributeError: 'NoneType' object has no attribute 'items' ```
2019/04/10
[ "https://Stackoverflow.com/questions/55619345", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7193131/" ]
After declaring `Cards` to be an instance of `Stack`, you don't need to refer to `Stack` anymore. Just use `Cards`. ``` Cards = Stack() Cards.push(15) x = Cards.peek() y = Cards.pop() ``` Also, the first line of code `Cards = []` is useless, as you immediately reassign `Cards` to be something else.
You shouldn't reassign `Cards` on each line. `Cards` is the `Stack` object, it needs to stay the same. It should be used as the variable with which you call all the other methods. ``` Cards = Stack() Cards.push(15) item = Cards.peek() item2 = Cards.pop() # item == item2 ```
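Putting both answers together, a runnable end-to-end sketch of the corrected usage. As a side note, `insert(0, item)`/`pop(0)` make every operation O(n) because the whole list shifts; pushing and popping at the *end* of the list with `append`/`pop()` is the idiomatic O(1) stack in Python (this sketch uses that variant, with `is_empty` renamed to the conventional style):

```python
class Stack:
    def __init__(self):
        self.items = []

    def is_empty(self):
        return not self.items

    def push(self, item):
        self.items.append(item)   # O(1): push onto the end of the list

    def pop(self):
        return self.items.pop()   # O(1): pop from the same end

    def peek(self):
        return self.items[-1]

cards = Stack()        # create the instance once...
cards.push(15)         # ...then call methods on it, never on the class
top = cards.peek()
popped = cards.pop()
print(top, popped, cards.is_empty())   # 15 15 True
```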
2,340
37,738,498
I'm running into a problem I've never encountered before, and it's frustrating the hell out of me. I'm using `rpy2` to interface with `R` from within a python script and normalize an array. For some reason, when I go to piece my output together and print to a file, it takes **ages** to print. It also slows down as it proceeds until it's dripping maybe a few kb of data to output per minute. My input file is large (366 MB), but this is running on a high performance computing cluster with near *unlimited* resources. It should have no problem slamming through this. Here's where I'm actually doing the normalization: ``` matrix = sample_list # two-dimensional array v = robjects.FloatVector([ element for col in matrix for element in col ]) m = robjects.r['matrix'](v, ncol = len(matrix), byrow=False) print("Performing quantile normalization.") Rnormalized_matrix = preprocessCore.normalize_quantiles(m) normalized_matrix = np.array(Rnormalized_matrix) ``` As you can see, I end up with a `numpy.array` object containing my now-normalized data. I have another list containing other strings I want to print to the output as well, each element corresponding to an element of the numpy array. So I iterate through, joining each row of the array into a string and print both to output. ``` for thing in pos_list: # List of strings corresponding with each row of array. thing_index = pos_list.index(thing) norm_data = normalized_matrix[thing_index] out_data = "\t".join("{0:.2f}".format(piece) for piece in norm_data) print(thing + "\t" + out_data, file=output) ``` I'm no pro, but I have no idea why things are slowing down so much. Any insight or suggestions would be very, very appreciated. I can post more/the rest of the script if anyone thinks it may be helpful. **Update:** Thanks to @lgautier for his profiling suggestion. 
Using the `line_profiler` module, I was able to pinpoint my issue to the line: `thing_index = pos_list.index(thing)` This makes sense since this list is very long, and it also explains the slow down as the script proceeds. Simply using a count instead fixed the issue. Profiling of original code (notice the % for the specified line): ``` Line # Hits Time Per Hit % Time Line Contents 115 1 16445761 16445761.0 15.5 header, pos_list, normalized_matrix = Quantile_Normalize(in 117 1 54 54.0 0.0 print("Creating output file...") 120 1 1450 1450.0 0.0 output = open(output_file, "w") 122 1 8 8.0 0.0 print(header, file=output) 124 # Iterate through each position and print QN'd data 125 100000 74600 0.7 0.1 for thing in pos_list: 126 99999 85244758 852.5 80.3 thing_index = pos_list.index(thing) 129 99999 158741 1.6 0.1 norm_data = normalized_matrix[thing_index] 130 99999 3801631 38.0 3.6 out_data = "\t".join("{0:.2f}".format(piece) for pi 132 99999 384248 3.8 0.4 print(thing + "\t" + out_data, file=output) 134 1 3641 3641.0 0.0 output.close() ``` Profiling new code: ``` Line # Hits Time Per Hit % Time Line Contents 115 1 16177130 16177130.0 82.5 header, pos_list, normalized_matrix = Quantile_Normalize(input_file, data_start) 116 117 1 55 55.0 0.0 print("Creating output file...") 118 119 120 1 26157 26157.0 0.1 output = open(output_file, "w") 121 122 1 11 11.0 0.0 print(header, file=output) 123 124 # Iterate through each position and print QN'd data 125 1 1 1.0 0.0 count = 0 126 100000 62709 0.6 0.3 for thing in pos_list: 127 99999 58587 0.6 0.3 thing_index = count 128 99999 67164 0.7 0.3 count += 1 131 99999 85664 0.9 0.4 norm_data = normalized_matrix[thing_index] 132 99999 2877634 28.8 14.7 out_data = "\t".join("{0:.2f}".format(piece) for piece in norm_data) 134 99999 240654 2.4 1.2 print(thing + "\t" + out_data, file=output) 136 1 1713 1713.0 0.0 output.close() ```
2016/06/10
[ "https://Stackoverflow.com/questions/37738498", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4438552/" ]
If I am understanding this correctly, everything is running fine and with good performance up to (and including) the line: ``` normalized_matrix = np.array(Rnormalized_matrix) ``` At that line the resulting matrix is turned into a numpy array (literally - it can be even faster when avoiding a copy of the data, as in <http://rpy2.readthedocs.io/en/version_2.8.x/numpy.html?from-rpy2-to-numpy>). I cannot see any performance issue related to rpy2 in the rest of the script. Now, what might be happening is that a system is not high-performance in any and every situation with all code just because it says "HPC" on the label. Did you consider running that slow last loop through a code profiler? It would tell you where the time is spent.
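For instance, the standard-library `cProfile` module can point at the hot spot without any extra installs. A minimal sketch, profiling a loop that, like the original code, calls `list.index` on every iteration (sizes shrunk so it runs quickly):

```python
import cProfile
import io
import pstats

pos_list = [f"pos{i}" for i in range(3000)]

def slow_loop():
    # Mimics the original pattern: list.index() is an O(n) scan per iteration
    total = 0
    for thing in pos_list:
        total += pos_list.index(thing)
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_loop()
profiler.disable()

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
print(report)  # the list 'index' method dominates the cumulative time
```

This is exactly the pattern the question's update later found: replacing `pos_list.index(thing)` with `enumerate(pos_list)` (or a manual counter) removes the O(n) scan entirely.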
For one thing, I usually use a generator to avoid the temporary list of many tiny strings. ``` out_data = "\t".join("{0:.2f}".format(piece) for piece in norm_data) ``` But it's hard to tell if this part was the slow one.
2,341
64,334,348
**Question:** What is the difference between `open(<name>, "w", encoding=<encoding>)` and `open(<name>, "wb") + str.encode(<encoding>)`? They seem to (sometimes) produce different outputs. **Context:** While using [PyFPDF](https://pypi.org/project/fpdf/) (version 1.7.2), I subclassed the `FPDF` class, and, among other things, added my own output method (taking `pathlib.Path` objects). While looking at the source of the original `FPDF.output()` method, I noticed almost all of it is argument parsing - the only relevant bits are ```py #Finish document if necessary if(self.state < 3): self.close() [...] f=open(name,'wb') if(not f): self.error('Unable to create output file: '+name) if PY3K: # manage binary data as latin1 until PEP461 or similar is implemented f.write(self.buffer.encode("latin1")) else: f.write(self.buffer) f.close() ``` Seeing that, my own implementation looked like this: ```py def write_file(self, file: Path) -> None: if self.state < 3: # See FPDF.output() self.close() file.write_text(self.buffer, "latin1", "strict") ``` This seemed to work - a .pdf file was created at the specified path, and Chrome opened it. But it was completely blank, even though I added images and text. After hours of experimenting, I finally found a version that worked (produced a non-empty pdf file): ```py def write_file(self, file: Path) -> None: if self.state < 3: # See FPDF.output() self.close() # using .write_text(self.buffer, "latin1", "strict") DOES NOT WORK AND I DON'T KNOW WHY file.write_bytes(self.buffer.encode("latin1", "strict")) ``` Looking at the `pathlib.Path` source, it uses `io.open` for `Path.write_text()`. As all of this is Python 3.8, `io.open` and the built-in `open()` [are the same](https://docs.python.org/3/library/io.html#io.open). **Note:** `FPDF.buffer` is of type `str`, but holds binary data (a pdf file). Probably because the library was originally written for Python 2.
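**Update:** One documented difference between the two paths is that text mode applies newline translation to what it writes, while binary mode writes the encoded bytes untouched; for a binary payload like a PDF stream, that translation would corrupt the data. A minimal sketch of the mechanism (forcing the translation with an explicit `newline=` argument so the effect is visible on any OS; `Path.write_text` itself uses the platform default, `newline=None`):

```python
import tempfile
from pathlib import Path

payload = "binary-ish data\nwith newlines\n"  # stands in for FPDF.buffer

with tempfile.TemporaryDirectory() as tmp:
    text_path = Path(tmp) / "text_mode.pdf"
    bin_path = Path(tmp) / "binary_mode.pdf"

    # Text mode: every '\n' in the payload is translated on write.
    # (newline='\r\n' makes the effect deterministic here; with the
    # default newline=None the translation target is os.linesep.)
    with open(text_path, "w", encoding="latin1", newline="\r\n") as f:
        f.write(payload)

    # Binary mode: encode once, then write the bytes exactly as-is
    bin_path.write_bytes(payload.encode("latin1"))

    text_bytes = text_path.read_bytes()
    bin_bytes = bin_path.read_bytes()

print(bin_bytes == payload.encode("latin1"))  # True: the binary path is byte-exact
print(text_bytes == bin_bytes)                # False: text mode rewrote the newlines
```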
2020/10/13
[ "https://Stackoverflow.com/questions/64334348", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5278549/" ]
You can only have a single transaction in progress at a time with a producer instance. If you have multiple threads doing separate processing and they all need exactly once semantics, you should have a producer instance per thread.
Not sure if this was resolved. You can use Apache Commons Pool2 to create a producer instance pool. In the `create()` method of the factory implementation you can generate and assign a unique transactional ID to avoid a conflict (`ProducerFencedException`).
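The producer-per-thread idea from the first answer can be sketched with `threading.local`; `FakeProducer` below is a hypothetical stand-in for the real Kafka client, and the unique-id scheme mirrors the transactional-ID advice above:

```python
import itertools
import threading

class FakeProducer:
    """Hypothetical stand-in for a real Kafka producer."""
    def __init__(self, transactional_id):
        self.transactional_id = transactional_id

_ids = itertools.count()
_local = threading.local()

def get_producer():
    # Each thread lazily creates its own producer with a unique
    # transactional id, avoiding ProducerFencedException conflicts.
    if not hasattr(_local, "producer"):
        _local.producer = FakeProducer("txn-%d" % next(_ids))
    return _local.producer

producers = []
lock = threading.Lock()

def worker():
    p = get_producer()
    with lock:
        producers.append(p)

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len({p.transactional_id for p in producers}))  # each thread got its own id
```

Repeated calls to `get_producer()` inside the same thread return the same instance, so each thread keeps exactly one open transaction context.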
2,342
50,026,785
I need to download a package using pip. I ran `pip install <package>` but got the following error: ``` [user@server ~]$ pip install sistr_cmd Collecting sistr_cmd Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection. VerifiedHTTPSConnection object at 0x7f518ee0cd90>: Failed to establish a new connection: [Errno 101] Network is unreachable',)': /simple/sistr-cmd/ Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection. VerifiedHTTPSConnection object at 0x7f518ee0c290>: Failed to establish a new connection: [Errno 101] Network is unreachable',)': /simple/sistr-cmd/ Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection. VerifiedHTTPSConnection object at 0x7f518ee0c510>: Failed to establish a new connection: [Errno 101] Network is unreachable',)': /simple/sistr-cmd/ Retrying (Retry(total=1, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection. VerifiedHTTPSConnection object at 0x7f518ee0cf10>: Failed to establish a new connection: [Errno 101] Network is unreachable',)': /simple/sistr-cmd/ Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<pip._vendor.requests.packages.urllib3.connection. 
VerifiedHTTPSConnection object at 0x7f518ee0c190>: Failed to establish a new connection: [Errno 101] Network is unreachable',)': /simple/sistr-cmd/ Could not find a version that satisfies the requirement sistr_cmd (from versions: ) No matching distribution found for sistr_cmd ``` I verified that the source of the problem is the network blocking most sites because I am working behind a proxy (required by the organization). To allow these downloads, I need to compile the list of urls of the source of the downloads and send it to the network admins to unblock. According to pip documentation (cited and explained in brief in the pip Wikipedia article), “Many packages can be found in the default source for packages and their dependencies — Python Package Index (PyPI)," so I went to the PyPI page for Biopython and found the github repository and the required dependencies for the package. There are also download links on the PyPI page and I want to be sure that all sources for the download are allowed. So does pip install from the original source of a package (the github repository or wherever the original package is hosted), the packages listed in the PyPI page under downloads, or does it search through both? Thank you in advance for the help.
2018/04/25
[ "https://Stackoverflow.com/questions/50026785", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8967045/" ]
``` myzipWith :: (a->b->c) -> [a] -> [b] ->[c] myzipWith func [] [] = [] myzipWith func (headA:restA) (headB:restB) = [func headA headB] ++ myzipWith func restA restB ``` But note the append (`++`) isn't necessary. This would be more idiomatic (and efficient): ``` func headA headB : myzipWith func restA restB ```
``` myzipWith func (a:as) (b:bs) = [func a b] ++ (myzipWith func as bs) ``` The syntax `function (x:xs)` splits the list passed to `function` into two parts: the first element `x` and the rest of the list `xs`.
2,343
25,296,807
Is it possible in Python to create an un-linked copy of a function? For example, if I have ``` a = lambda(x): x b = lambda(x): a(x)+1 ``` I want `b(x)` to always `return x+1`, regardless of whether `a(x)` is modified or not. Currently, if I do ``` a = lambda(x): x b = lambda(x): a(x)+1 print a(1.),b(1.) a = lambda(x): x*0 print a(1.),b(1.) ``` the output is ``` 1. 2. 0. 1. ``` Instead of being ``` 1. 2. 0. 2. ``` as I would like. Any idea on how to implement this? It seems that using `deepcopy` does not help for functions. Also keep in mind that `a(x)` is created externally and I can't change its definition. I've also looked into using [this method](https://stackoverflow.com/questions/13503079/how-to-create-a-copy-of-a-python-function), but it did not help.
2014/08/13
[ "https://Stackoverflow.com/questions/25296807", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3939154/" ]
You could define b like this: ``` b = lambda x, a=a: a(x)+1 ``` This makes `a` a parameter of `b`, and therefore a local variable. You default it to the value of `a` in the current environment, so `b` will hold onto that value. You don't need to copy `a`, just keep its current value, so that if a new value is created, you have the one you wanted. That said, this sounds like something unusual happening, and if you tell us more about what's going on, there's likely a better answer.
I might need to know a little more about your constraints before I can give a satisfactory answer. Why couldn't you do something like ``` a = lambda(x): x c = a b = lambda(x): c(x)+1 ``` Then no matter what happens to `a`, `b` will stay the same. This works because of the somewhat unusual way that assignment works in Python. When you do `c = a`, the name `c` is linked to the object that `a` links to. You can use `id()` to look at that object's id and see that they are the same; `c` and `a` point to the same object. ``` c = a id(a) >>> 4410483968 id(c) >>> 4410483968 ``` Then, when you redefine `a` with `a = lambda x: x*0`, you're actually doing two things in one line. First, `lambda x: x*0` creates a new function object, then the assignment causes the name `a` to be linked to the new object. As you can see, `a` is now pointing to a different object: ``` a = lambda x: x*0 id(a) >>>> 4717817680 ``` But if you look at `id(c)`, it still points to the old object! ``` id(c) >>>> 4410483968 ``` This is because when we redefined `a`, we merely created a new object and linked `a` to it. `c` remains linked to the old one. So if you redefine `a` as you do in the question, you get the output you want: ``` print a(1.),b(1.) >>>> 0.0,2.0 ```
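The default-argument binding from the first answer can be checked directly: `b` keeps the `a` it was defined with even after the name `a` is rebound.

```python
a = lambda x: x
b = lambda x, a=a: a(x) + 1  # default captures the *current* value of a

before = b(1.0)        # 2.0
a = lambda x: x * 0    # rebind the name; b's stored default is untouched
after = b(1.0)         # still 2.0
print(before, after)   # 2.0 2.0
```

Defaults are evaluated once, at definition time, which is exactly why this works: the function object holds a reference to the old `a`, independent of the name.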
2,344
62,707,514
As we all know, filling out web forms automatically is possible using JavaScript. Basically, we find the ID of the related element using Inspect (Ctrl + I) in e.g. Chrome and write JavaScript code in the Chrome console to automate what we want to do. Just like that, is it possible to automate desktop apps using Python? If yes, how?
2020/07/03
[ "https://Stackoverflow.com/questions/62707514", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13177703/" ]
You can do this in Python using **Selenium**. Selenium is an open-source testing tool, used for functional testing and also compatible with non-functional testing. You can refer to this [link](https://www.guru99.com/selenium-python.html) to get started.
[Pywinauto](https://pywinauto.github.io/) is a GUI automation library written in pure Python and well developed for Windows GUI.
2,345
48,949,121
I have a Python script that reads from a CSV file and checks if the records meet the conditions: * if yes, the system displays the result * if no, the system raises an exception based on the error. The CSV file includes a field that has **float values**, but some of these records may not have any value, so they will be empty. The problem is that if the cell is empty the system displays this ValueError: ``` could not convert string to float: ``` and not the exception that I wrote: ``` raise Exception("this Record has empty value") ``` * row[0]==> Date type Date * row[10]==> wind speed type float * row[11]==> fog type boolean code: ===== ``` import csv mydelimeter = csv.excel() mydelimeter.delimiter=";" myfile = open("C:/Users/test/Documents/R_projects/homework/rdu-weather-history.csv") # read the first line in the opened file ==> Header myfile.readline() myreader=csv.reader(myfile,mydelimeter) mywind,mydate=[],[] minTemp, maxTemp = [],[] fastwindspeed, fog=[],[] ''' create a variable that handle values of the 3 fields ==> Date - fastest5secwindspeed - fog and display the result where fog ==> Yes and highest speed more than 10. ''' for row in myreader: try: if row[11] =="Yes": if float(row[10]) < 10.0: raise Exception( 'the wind speed is below 10 mph in ' + row[0] ) if row[10] in (None, ""): raise Exception("this Record has empty value") print(row[0],row[10],row[11]) except Exception as e: print("{}".format(e)) myfile.close() ```
2018/02/23
[ "https://Stackoverflow.com/questions/48949121", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9162690/" ]
That behaviour is often caused by an updated installation of MongoDB. There is a "feature compatibility level" switch built into MongoDB which allows for updates to a newer version that do not alter (some of) the behaviour of the old version in a non-expected (oh well) way. The [documentation](https://docs.mongodb.com/manual/reference/command/setFeatureCompatibilityVersion/#default-values) for upgrades from v3.4 to v3.6 states on that topic: > > For deployments upgraded from 3.4 [the default feature compatibility level is] "3.4" until you setFeatureCompatibilityVersion to "3.6". > > > You can fix this by running the following command: ``` db.adminCommand( { setFeatureCompatibilityVersion: "3.6" } ) ```
To everyone in the same case, the solution dnickless gave works for me: > > In case you've upgrade from an older version try running this: > > > `db.adminCommand( { setFeatureCompatibilityVersion: "3.6" } )` > > >
2,346
16,453,644
I have a Pandas DataFrame with a `date` column (eg: `2013-04-01`) of dtype `datetime.date`. When I include that column in `X_train` and try to fit the regression model, I get the error `float() argument must be a string or a number`. Removing the `date` column avoided this error. What is the proper way to take the `date` into account in the regression model? **Code** ``` data = sql.read_frame(...) X_train = data.drop('y', axis=1) y_train = data.y rf = RandomForestRegressor().fit(X_train, y_train) ``` **Error** ``` TypeError Traceback (most recent call last) <ipython-input-35-8bf6fc450402> in <module>() ----> 2 rf = RandomForestRegressor().fit(X_train, y_train) C:\Python27\lib\site-packages\sklearn\ensemble\forest.pyc in fit(self, X, y, sample_weight) 292 X.ndim != 2 or 293 not X.flags.fortran): --> 294 X = array2d(X, dtype=DTYPE, order="F") 295 296 n_samples, self.n_features_ = X.shape C:\Python27\lib\site-packages\sklearn\utils\validation.pyc in array2d(X, dtype, order, copy) 78 raise TypeError('A sparse matrix was passed, but dense data ' 79 'is required. Use X.toarray() to convert to dense.') ---> 80 X_2d = np.asarray(np.atleast_2d(X), dtype=dtype, order=order) 81 _assert_all_finite(X_2d) 82 if X is X_2d and copy: C:\Python27\lib\site-packages\numpy\core\numeric.pyc in asarray(a, dtype, order) 318 319 """ --> 320 return array(a, dtype, copy=False, order=order) 321 322 def asanyarray(a, dtype=None, order=None): TypeError: float() argument must be a string or a number ```
2013/05/09
[ "https://Stackoverflow.com/questions/16453644", "https://Stackoverflow.com", "https://Stackoverflow.com/users/741099/" ]
The best way is to explode the date into a set of categorical features encoded in boolean form using the 1-of-K encoding (e.g. as done by [DictVectorizer](http://scikit-learn.org/stable/modules/feature_extraction.html#loading-features-from-dicts)). Here are some features that can be extracted from a date: * hour of the day (24 boolean features) * day of the week (7 boolean features) * day of the month (up to 31 boolean features) * month of the year (12 boolean features) * year (as many boolean features as there are different years in your dataset) ... That should make it possible to identify linear dependencies on periodic events in typical human life cycles. Additionally you can also extract the date as a single float: convert each date to the number of days since the min date of your training set and divide by the number of days between the max and min dates. That numerical feature should make it possible to identify long-term trends between the output and the event date: e.g. a linear slope in a regression problem to better predict evolution in forthcoming years, which cannot be encoded with the boolean categorical variable for the year feature.
You have two options. You can convert the date to an ordinal i.e. an integer representing the number of days since year 1 day 1. You can do this by a `datetime.date`'s `toordinal` function. Alternatively, you can turn the dates into categorical variables using sklearn's [OneHotEncoder](http://scikit-learn.org/dev/modules/generated/sklearn.preprocessing.OneHotEncoder.html). What it does is create a new variable for each distinct date. So instead of something like column `date` with values `['2013-04-01', '2013-05-01']`, you will have two columns, `date_2013_04_01` with values `[1, 0]` and `date_2013_05_01` with values `[0, 1]`. I would recommend using the `toordinal` approach if you have many different dates, and the one hot encoder if the number of distinct dates is small (let's say up to 10 - 100, depending on the size of your data and what sort of relation the date has with the output variable).
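Both suggestions can be sketched with only the standard library: the categorical pieces as a dict (the shape `DictVectorizer` expects) plus the ordinal-based float. The helper name `date_features` is mine, not sklearn's.

```python
from datetime import date

def date_features(d, d_min, d_max):
    # Categorical parts, one key per active category (1-of-K style).
    feats = {
        "weekday_%d" % d.weekday(): 1,
        "month_%d" % d.month: 1,
        "year_%d" % d.year: 1,
    }
    # Single float: fraction of the way between the min and max training dates,
    # built from datetime.date.toordinal() as in the second answer.
    span = max(d_max.toordinal() - d_min.toordinal(), 1)
    feats["days_frac"] = (d.toordinal() - d_min.toordinal()) / span
    return feats

f = date_features(date(2013, 4, 1), date(2013, 1, 1), date(2013, 12, 31))
print(sorted(f))  # ['days_frac', 'month_4', 'weekday_0', 'year_2013']
```

Feeding a list of such dicts to `DictVectorizer` would then expand the categorical keys into boolean columns while passing `days_frac` through as a numeric feature.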
2,347
46,191,793
I followed the guide here: <https://plot.ly/python/filled-chord-diagram/> And I produced this: [![enter image description here](https://i.stack.imgur.com/wVzNc.png)](https://i.stack.imgur.com/wVzNc.png) In the guide, I followed the `ribbon_info` code to add hoverinfo to the connecting ribbons but nothing shows. I can get the hoverinfo to only show for the ribbon ends. Can anyone see where I am going wrong? ``` ribbon_info=[] for k in range(L): sigma=idx_sort[k] sigma_inv=invPerm(sigma) for j in range(k, L): if matrix[k][j]==0 and matrix[j][k]==0: continue eta=idx_sort[j] eta_inv=invPerm(eta) l=ribbon_ends[k][sigma_inv[j]] if j==k: layout['shapes'].append(make_self_rel(l, 'rgb(175,175,175)' , ideo_colors[k], radius=radii_sribb[k])) z=0.9*np.exp(1j*(l[0]+l[1])/2) #the text below will be displayed when hovering the mouse over the ribbon text=labels[k]+' appears on'+ '{:d}'.format(matrix[k][k])+' of the same grants as '+ '', ribbon_info.append(Scatter(x=z.real, y=z.imag, mode='markers', marker=Marker(size=5, color=ideo_colors[k]), text=text, hoverinfo='text' ) ) else: r=ribbon_ends[j][eta_inv[k]] zi=0.9*np.exp(1j*(l[0]+l[1])/2) zf=0.9*np.exp(1j*(r[0]+r[1])/2) #texti and textf are the strings that will be displayed when hovering the mouse #over the two ribbon ends texti=labels[k]+' appears on '+ '{:d}'.format(matrix[k][j])+' of the same grants as '+\ labels[j]+ '', textf=labels[j]+' appears on '+ '{:d}'.format(matrix[j][k])+' of the same grants as '+\ labels[k]+ '', ribbon_info.append(Scatter(x=zi.real, y=zi.imag, mode='markers', marker=Marker(size=0.5, color=ribbon_color[k][j]), text=texti, hoverinfo='text' ) ), ribbon_info.append(Scatter(x=zf.real, y=zf.imag, mode='markers', marker=Marker(size=0.5, color=ribbon_color[k][j]), text=textf, hoverinfo='text' ) ) r=(r[1], r[0])#IMPORTANT!!! 
Reverse these arc ends because otherwise you get # a twisted ribbon #append the ribbon shape layout['shapes'].append(make_ribbon(l, r , 'rgb(255,175,175)', ribbon_color[k][j])) ``` The outputs for the variables are as follows: ``` texti = (u'Sociology appears on 79 of the same grants as Tools, technologies & methods',) textf = (u'Tools, technologies & methods appears on 79 of the same grants as Sociology',) ribbon_info = [{'hoverinfo': 'text', 'marker': {'color': 'rgba(214, 248, 149, 0.65)', 'size': 0.5}, 'mode': 'markers', 'text': (u'Demography appears on 51 of the same grants as Social policy',), 'type': 'scatter', 'x': 0.89904409911342476, 'y': 0.04146936036799545}, {'hoverinfo': 'text', 'marker': {'color': 'rgba(214, 248, 149, 0.65)', 'size': 0.5}, 'mode': 'markers', 'text': (u'Social policy appears on 51 of the same grants as Demography',), 'type': 'scatter', 'x': -0.65713108202353809, 'y': -0.61496238993825791},..................**etc** sigma = array([ 0, 14, 12, 10, 9, 7, 8, 5, 4, 3, 2, 1, 6, 16, 13, 11, 15], dtype=int64) ``` The code after the previous block which builds the chord diagram is as follows: ``` ideograms=[] for k in range(len(ideo_ends)): z= make_ideogram_arc(1.1, ideo_ends[k]) zi=make_ideogram_arc(1.0, ideo_ends[k]) m=len(z) n=len(zi) ideograms.append(Scatter(x=z.real, y=z.imag, mode='lines', line=Line(color=ideo_colors[k], shape='spline', width=0), text=labels[k]+'<br>'+'{:d}'.format(row_sum[k]), hoverinfo='text' ) ) path='M ' for s in range(m): path+=str(z.real[s])+', '+str(z.imag[s])+' L ' Zi=np.array(zi.tolist()[::-1]) for s in range(m): path+=str(Zi.real[s])+', '+str(Zi.imag[s])+' L ' path+=str(z.real[0])+' ,'+str(z.imag[0]) layout['shapes'].append(make_ideo_shape(path,'rgb(150,150,150)' , ideo_colors[k])) data = Data(ideograms+ribbon_info) fig=Figure(data=data, layout=layout) plotly.offline.iplot(fig, filename='chord-diagram-Fb') ``` This is the only hoverinfo that shows, the outside labels, not the ones just slightly more inside: 
[![enter image description here](https://i.stack.imgur.com/mqzcG.png)](https://i.stack.imgur.com/mqzcG.png) Using the example from the link at the start of my question. They have two sets of labels. On my example, the equivalent of 'Isabelle has commented on 32 of Sophia....' is not showing. [![enter image description here](https://i.stack.imgur.com/KgF8D.png)](https://i.stack.imgur.com/KgF8D.png)
2017/09/13
[ "https://Stackoverflow.com/questions/46191793", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6593031/" ]
Just apply `json.dumps()` to this native Python dictionary composed in one line: ``` {k.replace(" ","_"):v.strip() for k,v in (x.split(":") for x in ["Passanger status:\n passanger cfg086d96 is unknown\n\n"])} ``` the inner generator comprehension avoids calling `split` for each part of the dict key/value. The value is stripped to remove trailing/leading blank spaces. The space characters in the key are replaced by underscores. The result is (as dict): ``` {'Passanger status': 'passanger cfg086d96 is unknown'} ``` as a json string, using `indent` to generate newlines: ``` >>> print(json.dumps({k.replace(" ","_"):v.strip() for k,v in (x.split(":") for x in ["Passanger status:\n passanger cfg086d96 is unknown\n\n"])},indent=2)) { "Passanger_status": "passanger cfg086d96 is unknown" } ```
You can also try this one, if you find it simpler: ``` data_dic = dict() data = "Passanger status:\n passanger cfg086d96 is unknown\n\n" x1 , x2 = map(str,data.split(":")) data_dic[x1] = x2 print data_dic ``` Output: ``` {'Passanger status': '\n passanger cfg086d96 is unknown\n\n'} ``` and for space-to-underscore you can use the `replace` method on the keys of the dictionary.
2,356
73,935,930
How (in Python) can I make a number count up in place? For example, it shows 1, then (after `time.sleep(0.05)`) it changes to 2, and so on. But there will be text already above it, so you can't use a simple `os.system('clear')` So like this: > > print("how much money do you want to make?")< > 'number going up without deleting the "how much money" part' > > >
2022/10/03
[ "https://Stackoverflow.com/questions/73935930", "https://Stackoverflow.com", "https://Stackoverflow.com/users/20149657/" ]
Like this: ``` import sys import time for i in range(10): time.sleep(0.3) sys.stdout.write("\rDoing thing %i" % i) sys.stdout.flush() ``` Edit: This was taken from [Replace console output in Python](https://stackoverflow.com/questions/6169217/replace-console-output-in-python)
The question is very unclear, but maybe you mean the following: ```py import time for item in [0.05,2,3]: time.sleep(item) ``` and ```py number = 3 print("how much money do you want to make? {}".format(number)) ```
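What the `\r` in the first answer does can be seen by writing to a buffer instead of the terminal: each write returns the cursor to the start of the line, so a real terminal repaints the same line in place (no timing here, just the control characters).

```python
import io

buf = io.StringIO()
for i in range(3):
    buf.write("\r%d" % i)  # '\r' moves the cursor back to column 0

print(repr(buf.getvalue()))  # '\r0\r1\r2'
```

On a terminal, only the last number stays visible, and any text printed on earlier lines is left untouched, which is exactly what the question asks for.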
2,357
36,115,429
I faced an error in my Python script, as follows: ``` formula = "ASD" start = 0 end = 2 print(formula, start, end, type(start), type(end)) print(formula[start, end]) ``` the output is: ``` ASD 0 2 <class 'int'> <class 'int'> Traceback (most recent call last): File "test.py", line 5, in <module> print(formula[start, end]) TypeError: string indices must be integers ``` But `start` and `end` are ints, so strange!
2016/03/20
[ "https://Stackoverflow.com/questions/36115429", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3001445/" ]
The syntax to slice is with `:` not with `,` ``` >>> print(formula[start:end]) AS ```
You seem to be performing a slicing operation; to do this you need to use `:` and not `,`: ``` formula[start:end] ``` Demo: ``` formula = "ASD" start = 0 end = 2 print(formula, start, end, type(start), type(end)) print(formula[start:end]) ``` output: ``` ASD 0 2 <class 'int'> <class 'int'> AS ```
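The original `TypeError` is worth unpacking: `formula[start, end]` passes the tuple `(start, end)` as a single index, while `start:end` builds a `slice` object. A sketch:

```python
formula = "ASD"

try:
    formula[0, 2]  # a tuple index, not a slice
    err = None
except TypeError as e:
    err = type(e).__name__

print(err)                   # TypeError
print(formula[slice(0, 2)])  # AS -- formula[0:2] builds this slice object
```

So the error message "string indices must be integers" is accurate: the index really was a tuple, not the two ints it was built from.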
2,359
17,528,976
I am working on an anonymizer program which censors the given words in the list. This is what I have so far. I am new to Python, so I'm not sure how I can achieve this. ``` def isAlpha(c): if( c >= 'A' and c <='Z' or c >= 'a' and c <='z' or c >= '0' and c <='9'): return True else: return False def main(): message = [] userInput = str(input("Enter The Sentense: ")) truncatedInput = userInput[:140] for i in range(len(truncatedInput)): if(truncatedInput[i] == 'DRAT'): truncatedInput[i] = 'x' print(truncatedInput[i]) ``` This is the output I get ``` Enter The Sentense: DRAT D R A T ``` I want the word to be replaced by XXXX
2013/07/08
[ "https://Stackoverflow.com/questions/17528976", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2559375/" ]
You have several problems with your code: 1. There already exists an `isalpha` function; it is a `str` method (see example below). 2. Your `truncatedInput` is a `str`, which is an immutable type. You can't reassign parts of an immutable type; i.e. `myStr[3]='x'` would normally fail. If you really want to do this, you're better off representing your truncated input as a list and using `''.join(truncatedInput)` to turn it into a string later. 3. You are currently looking at the characters in your truncated input to check if any of them equals `'DRAT'`. This is what your first for-loop in `main` does. However, what you seem to want is to iterate over the words themselves - you will need a "chunker" for this. This is a slightly difficult problem if you want to deal with free-form English. For example, a simple word chunker would simply split your sentence on spaces. However, what happens when you have a sentence containing the word "DRAT'S"? Due to such cases, you will be forced to create a proper chunker to deal with punctuation as required. This is a fairly high-level design decision. You may want to take a look at [`NLTK`](http://nltk.org/) to see if any of its chunkers will help you out. **Examples**: `str.isalpha` ``` In [3]: myStr = 'abc45d' In [4]: for char in myStr: ...: print char, char.isalpha() ...: a True b True c True 4 False 5 False d True ``` strings are immutable ``` In [5]: myStr[3] = 'x' --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-5-bf15aed01ea1> in <module>() ----> 1 myStr[3] = 'x' TypeError: 'str' object does not support item assignment ``` Finally, as others have recommended, you're likely much better off using `str.replace` anyways. However, be wary of replacing substrings of non-censored words. For example, the substring "hell" in the word "hello" does not need to be censored.
To accommodate such text, you may want to use [`re.sub`](http://docs.python.org/2/library/re.html#re.sub), a regex substitution, as opposed to `str.replace`. One additional note: Python allows chained comparisons, so you can shorten `if( c >= 'A' and c <='Z' or c >= 'a' and c <='z' or c >= '0' and c <='9')` into `if( 'Z' >= c >= 'A' or 'z' >= c >= 'a' or '9' >= c >= '0')`. This, by the way, can be replaced with `if c.isalpha() or c.isdigit()`. Hope this helps.
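The `re.sub` idea mentioned above, as a sketch: `\b` word boundaries keep substrings like "hell" inside "hello" safe. The helper name `censor` is mine.

```python
import re

def censor(text, word):
    # Replace whole-word matches only, preserving the word's length with 'x's.
    pattern = r"\b%s\b" % re.escape(word)
    return re.sub(pattern, "x" * len(word), text)

print(censor("DRAT, I said hello", "DRAT"))  # xxxx, I said hello
print(censor("hello", "hell"))               # hello (substring untouched)
```

`re.escape` matters if a censored word could contain regex metacharacters; without it a word like `a+b` would be read as a pattern rather than literal text.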
You could use [string.replace()](http://docs.python.org/2/library/string.html#string.replace) ``` truncatedInput.replace('DRAT', 'xxxx') ``` This will replace the first occurence of DRAT with xxxx, even if it is part of a longer sentence. If you want different functionality let me know.
2,361
67,756,936
I have this: ```py def f(message): l = [] for c in message: l.append(c) l.append('*') return "".join(l) ``` It works, but how do I make it so that it doesn't add "\*" at the end? I only want it between the letters of the inputted word. I'm new to Python and was just trying new things.
2021/05/30
[ "https://Stackoverflow.com/questions/67756936", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16072743/" ]
Maybe you can try this. It uses `'*'.join`: ``` input_str = 'dog' def f(x): return '*'.join(x) print(f('dog')) #output d*o*g print(f(input_str)) #output d*o*g ```
Well, technically you could just slice the returned string, cutting off the last asterisk. ``` message="dog" def f(message): l = [] for c in message: l.append(c) l.append('*') return "".join(l[:-1]) print(f(message)) ``` this way it returns ``` d*o*g ```
2,363
24,736,813
I want to extend the datetime.date class, adding an attribute called `status` that represents whether the date is a work day, an administrative non-work day, a courts-closed day,... I've read [How to extend a class in python?](https://stackoverflow.com/questions/15526858/how-to-extend-a-class-in-python), [How to extend Python class init](https://stackoverflow.com/questions/12701206/how-to-extend-python-class-init) and [Chain-calling parent constructors in python](https://stackoverflow.com/questions/904036/chain-calling-parent-constructors-in-python), but I don't understand it well, as I'm a noob with OOP. ``` >>> import datetime >>> class Fecha(datetime.date): def __init__(self, year, month, day, status): super(Fecha, self).__init__(self, year, month, day) self.status = status >>> dia = Fecha(2014, 7, 14, 'laborable') Traceback (most recent call last): File "<pyshell#35>", line 1, in <module> dia = Fecha(2014, 7, 14, 'laborable') TypeError: function takes at most 3 arguments (4 given) >>> ```
2014/07/14
[ "https://Stackoverflow.com/questions/24736813", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3160820/" ]
`datetime.date` is an immutable type, meaning you need to override the [`__new__` method](https://docs.python.org/3/reference/datamodel.html#object.__new__) instead: ``` class Fecha(datetime.date): def __new__(cls, year, month, day, status): instance = super(Fecha, cls).__new__(cls, year, month, day) instance.status = status return instance ``` Demo: ``` >>> import datetime >>> class Fecha(datetime.date): ... def __new__(cls, year, month, day, status): ... instance = super(Fecha, cls).__new__(cls, year, month, day) ... instance.status = status ... return instance ... >>> dia = Fecha(2014, 7, 14, 'laborable') >>> dia.status 'laborable' ```
The problem is in the `super` call; `self` is being passed an extra time: ``` super(Fecha, self).__init__(year, month, day) ``` Try this.
2,364
55,739,404
I have a Python 3.6 script that calls out to a third-party tool using subprocess. `main_script.py:` ``` #!/usr/bin/env python import subprocess result = subprocess.run(['third-party-tool', '-arg1'], shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE) ``` The problem is, `main_script.py` must be run from within a virtual environment, and `third-party-tool` must be run from no virtual environment whatsoever. I don't know much about `third-party-tool`, except that it is on my path. Calling it while I have a virtual environment active causes it to jam up and throw an exception later on. I do not know if it uses the default python binary or it it spins up its own virtual env and does stuff in there. It is not a Python script, but apparently calls one somehow. *How do I tell subprocess to drop out of my virtual environment and run the command in the default shell environment?* I've examined a couple of similar questions: * [Running subprocess within different virtualenv with python](https://stackoverflow.com/questions/8052926/running-subprocess-within-different-virtualenv-with-python) -- In the first case, they are specifying the environment by calling their script with a specific version of Python. `third-party-tool` is not a Python script (I believe it's bash). * [Python subprocess/Popen with a modified environment](https://stackoverflow.com/questions/2231227/python-subprocess-popen-with-a-modified-environment) -- In the second case, they're tweaking existing environment variables, which looks promising, but I want to reset to the default environment, and I'm not sure if I necessarily know which variables I need to clear or reset.
2019/04/18
[ "https://Stackoverflow.com/questions/55739404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3638628/" ]
From the documentation of subprocess: <https://docs.python.org/3/library/subprocess.html> The accepted args are ``` subprocess.run(args, *, stdin=None, input=None, stdout=None, stderr=None, capture_output=False, shell=False, cwd=None, timeout=None, check=False, encoding=None, errors=None, text=None, env=None, universal_newlines=None) ``` In particular, > > If env is not None, it must be a mapping that defines the environment variables for the new process; these are used instead of the default behavior of inheriting the current process’ environment. It is passed directly to Popen. > > > Thus, passing an empty dictionary `env={}` (start with empty environment) and using `bash --login` (run as login shell, which reads env defaults) should do the trick. ``` subprocess.run(['bash', '--login', '-c', '/full/path/to/third-party-tool', '-arg1'], shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env={}) ```
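The effect of `env=` can be checked with a child Python process standing in for the third-party tool (a sketch; only `PATH` is kept so the interpreter can still be found, everything else, including `VIRTUAL_ENV`, is dropped):

```python
import os
import subprocess
import sys

child = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ.get('VIRTUAL_ENV', 'unset'))"],
    stdout=subprocess.PIPE,
    env={"PATH": os.environ.get("PATH", "")},  # near-empty environment
)
result = child.stdout.decode().strip()
print(result)  # unset
```

Even when the parent runs inside a virtualenv (where `VIRTUAL_ENV` is set), the child sees none of it, because `env=` replaces rather than extends the inherited environment.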
Thanks for your help, nullUser; your solution is a concise and correct answer to my question. However, when I tried it out, my third-party-tool now fails for some other (unknown) reason. There was probably some other environment variable I don't know about that's getting lost with the new shell. Fortunately, I found an alternate solution which I'll share for anyone else struggling. My Solution ----------- *As far as I can tell, the only difference that entering the virtual environment does to my environment is add a new path to my PATH variable, and add the variable VIRTUAL\_ENV.* I can replicate the outside-virtual-environment behavior by creating a copy of my environment where I: * delete that VIRTUAL\_ENV environment variable and * remove the python prefix from PATH. Example ------- ### my\_script.py `my_script.py` Implements my solution: ``` #!/usr/bin/env python import subprocess, os, sys env = os.environ.copy() if hasattr(sys,'real_prefix'): # If in virtual environment, gotta forge a copy of the environment, where we: # Delete the VIRTUAL_ENV variable. del(env['VIRTUAL_ENV']) # Delete the "/home/me/.python_venv/main/bin:" from the front of my PATH variable. orig_path = env['PATH'] virtual_env_prefix = sys.prefix + '/bin:' env['PATH'] = orig_path.replace(virtual_env_prefix, '') # Pass the environment into the third party tool, modified if and when required. subprocess.run(['./third-party-tool'], shell=False, env=env) ``` ### third-party-tool `third-party-tool` is mocked out as a script that tells you if it's in a virtual environment and prints out environment variables. In this example, `third-party-tool` is a Python script but in general it might not be. 
```
#!/usr/bin/env python
# third-party-tool
import sys, os

in_venv = hasattr(sys, 'real_prefix')
print('This is third-party Tool and you {} in a virtual environment.'.format("ARE" if in_venv else "ARE NOT"))
os.system('env')
```

### Testing

Now I try calling third-party-tool from outside the virtual environment, inside the virtual environment, and from the python script in the virtual environment, capturing the output.

```
[me@host ~]$ ./third-party-tool > without_venv.txt

# Now I activate virtual environment

(main) [me@host ~]$ ./third-party-tool > within_venv.txt

(main) [me@host ~]$ ./my_script.py > within_venv_from_python.txt
```

Note: the outputs look like this:

    This is third-party Tool and you ARE NOT in a virtual environment.

(It then proceeds to print out a list of KEY=VALUE environment variables.)

I use my favorite diff tool and compare the outputs. `within_venv_from_python.txt` is identical to `without_venv.txt`, which is a good sign (in both cases, `third-party-tool` runs with the same environment variables, and indicates it is not living in the matrix). After implementing this solution, my actual third-party-tool appears to be working.
2,365
13,661,723
How can I run Python code online that requires a set of modules (e.g. numpy, matplotlib)? Answers/suggestions to questions [2737539](https://stackoverflow.com/questions/2737539/python-3-online-interpreter-shell) and [3356390](https://stackoverflow.com/questions/3356390/is-there-an-online-interpreter-for-python-3) about Python 3 interpreters are not useful, because those interpreters don't work properly in this case.
2012/12/01
[ "https://Stackoverflow.com/questions/13661723", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
I found one that supports multiple modules; I checked `numpy, scipy, psutil, matplotlib, etc.` and all of them are supported. Check out the PythonAnywhere compiler: a sample console is [here](https://www.pythonanywhere.com/try-ipython/), and you can sign up for an account [here](https://www.pythonanywhere.com/pricing/); I believe there is a free version. I used that online compiler last year and it worked quite well, though a free account has certain limits. It also has a bash console, which allows you to run Python files.
You may try this as a sandbox; it supports numpy as well: <http://ideone.com>
2,366
46,027,022
I need to create a script that calculates the distance between two coordinates. The issue I'm having, though, is that when I assign the coordinate to object `one`, it is stored as a string, and I am unable to convert it to a list or integer/float. How can I convert this into either a list or an integer/float? The script and the error I get are below.

---

Script:

```
one=input("Enter an x,y coordinate.")

Enter an x,y coordinate.
1,7

int(1,7)
Traceback (most recent call last):

  File "<ipython-input-76-67de81c91c02>", line 1, in <module>
    int(1,7)

TypeError: int() can't convert non-string with explicit base
```
2017/09/03
[ "https://Stackoverflow.com/questions/46027022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5874828/" ]
You have to convert the entered string to int/float by first splitting the string into the point components, then casting to the appropriate type: ``` x, y = map(float, one.split(',')) ``` To keep the entered values as a single custom datatype, named `Point` for example, you can use a [`namedtuple`](https://docs.python.org/3/library/collections.html#collections.namedtuple): ``` from collections import namedtuple Point = namedtuple('Point', 'x, y') ``` --- Demo: ``` >>> from collections import namedtuple >>> Point = namedtuple('Point', 'x, y') >>> Point(*map(float, '1, 2'.split(','))) Point(x=1.0, y=2.0) >>> _.x, _.y (1.0, 2.0) ```
Convert the input to the specific type (int or float).

Into a list:

```
_list = list(map(int, input("Enter an x,y coordinate.").split(",")))
```

or into separate variables:

```
a, b = map(int, input("Enter an x,y coordinate.").split(","))
```
2,367
22,225,666
Suppose you want to write a function which yields a list of objects, and you know in advance the length `n` of such list. In python the list supports indexed access in O(1), so it is arguably a good idea to pre-allocate the list and access it with indexes instead of allocating an empty list and using the `append()` method. This is because we avoid the burden of expanding the whole list if the space is not enough. If I'm using python, probably performances are not that relevant in any case, but what is the better way of pre-allocating a list? I know these possible candidates: * `[None] * n` → allocating two lists * `[None for x in range(n)]` — or `xrange` in python2 → building another object Is one significantly better than the other? What if we are in the case `n = len(input)`? Since `input` exists already, would `[None for x in input]` have better performances w.r.t. `[None] * len(input)`?
2014/03/06
[ "https://Stackoverflow.com/questions/22225666", "https://Stackoverflow.com", "https://Stackoverflow.com/users/282614/" ]
Between those two options the first is clearly better, as no Python for loop is involved.

```
>>> %timeit [None] * 100
1000000 loops, best of 3: 469 ns per loop

>>> %timeit [None for x in range(100)]
100000 loops, best of 3: 4.8 us per loop
```

**Update:**

`list.append` has [`O(1)` complexity](https://wiki.python.org/moin/TimeComplexity#list) too; it may be an even better choice than pre-creating the list if you assign the `list.append` method to a variable.

```
>>> n = 10**3
>>> %%timeit lis = [None]*n
for _ in range(n): lis[_] = _
... 
10000 loops, best of 3: 73.2 us per loop

>>> %%timeit lis = []
for _ in range(n): lis.append(_)
... 
10000 loops, best of 3: 92.2 us per loop

>>> %%timeit lis = [];app = lis.append
for _ in range(n): app(_)
... 
10000 loops, best of 3: 59.4 us per loop

>>> n = 10**6
>>> %%timeit lis = [None]*n
for _ in range(n): lis[_] = _
... 
10 loops, best of 3: 106 ms per loop

>>> %%timeit lis = []
for _ in range(n): lis.append(_)
... 
10 loops, best of 3: 122 ms per loop

>>> %%timeit lis = [];app = lis.append
for _ in range(n): app(_)
... 
10 loops, best of 3: 91.8 ms per loop
```
When you append an item to a list, Python 'over-allocates', see the [source-code](http://svn.python.org/projects/python/trunk/Objects/listobject.c) of the list object. This means that for example when adding 1 item to a list of 8 items, it actually makes room for 8 new items, and uses only the first one of those. The next 7 appends are then 'for free'. In many languages (e.g. old versions of Matlab, the newer JIT might be better) you are always told that you need to pre-allocate your vectors, since appending during a loop is very expensive. In the worst case, appending of a single item to a list of length `n` can cost `O(n)` time, since you might have to create a bigger list and copy all the existing items over. If you need to do this on every iteration, the overall cost of adding `n` items is `O(n^2)`, ouch. Python's pre-allocation scheme spreads the cost of growing the array over many single appends (see [amortized costs](http://en.wikipedia.org/wiki/Amortized_analysis)), effectively making the cost of a single append `O(1)` and the overall cost of adding `n` items `O(n)`. Additionally, the overhead of the rest of your Python code is usually so large, that the tiny speedup that can be obtained by pre-allocating is insignificant. So in most cases, simply forget about pre-allocating, unless your profiler tells you that appending to a list is a bottleneck. The other answers show some profiling of the list preallocation itself, but this is useless. The only thing that matters is profiling your complete code, with all your calculations inside your loop, with and without pre-allocation. If my prediction is right, the difference is so small that the computation time you win is dwarfed by the time spent thinking about, writing and maintaining the extra lines to pre-allocate your list.
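You can watch the over-allocation happen with `sys.getsizeof` (a small sketch; the exact sizes are CPython implementation details and will vary between versions):

```python
import sys

lst = []
sizes = []
for i in range(64):
    lst.append(i)
    sizes.append(sys.getsizeof(lst))

# The reported size stays flat for several appends in a row, then jumps:
# most appends are 'free' because room was over-allocated earlier.
print(len(sizes), "appends used only", len(set(sizes)), "distinct allocation sizes")
```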
2,369
37,254,610
ipdb is triggering an import error for me when I run my Django site locally. I'm working on Python 2.7 and within a virtual environment. `which ipdb` shows the path `(/usr/local/bin/ipdb)`, as does `which ipython`, which surprised me since I thought it should show my venv path (but shouldn't it work if it's global, anyway?). So I tried `pip install --target=/path/to/venv ipdb` and now it shows up in `pip freeze` (which it didn't before) but still gives me an import error. `which pip` gives `/Users/myname/.virtualenvs/myenv/bin/pip/` My path: `/Users/myname/.virtualenvs/myenv/bin:/Users/myname/.venvburrito/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/myname/bin:/usr/local/bin` Sys.path: `'/Users/myname/Dropbox/myenv', '/Users/myname/.venvburrito/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg', '/Users/myname/.venvburrito/lib/python2.7/site-packages', '/Users/myname/.venvburrito/lib/python2.7/site-packages/setuptools-8.2-py2.7.egg', '/Users/myname/.virtualenvs/myenv/lib/python27.zip', '/Users/myname/.virtualenvs/myenv/lib/python2.7', '/Users/myname/.virtualenvs/myenv/lib/python2.7/plat-darwin', '/Users/myname/.virtualenvs/myenv/lib/python2.7/plat-mac', '/Users/myname/.virtualenvs/myenv/lib/python2.7/plat-mac/lib-scriptpackages', '/Users/myname/.virtualenvs/myenv/Extras/lib/python', '/Users/myname/.virtualenvs/myenv/lib/python2.7/lib-tk', '/Users/myname/.virtualenvs/myenv/lib/python2.7/lib-old', '/Users/myname/.virtualenvs/myenv/lib/python2.7/lib-dynload', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages', '/Users/myname/.virtualenvs/myenv/lib/python2.7/site-packages']` If I run ipdb from 
the terminal, it works fine. I've tried restarting my terminal. Stacktrace: ``` Traceback (most recent call last): File "/Users/myname/.virtualenvs/myenv/lib/python2.7/site-packages/django/core/handlers/base.py", line 149, in get_response response = self.process_exception_by_middleware(e, request) File "/Users/myname/.virtualenvs/myenv/lib/python2.7/site-packages/django/core/handlers/base.py", line 147, in get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/Users/myname/.virtualenvs/myenv/lib/python2.7/site-packages/django/views/generic/base.py", line 68, in view return self.dispatch(request, *args, **kwargs) File "/Users/myname/.virtualenvs/myenv/lib/python2.7/site-packages/django/views/generic/base.py", line 88, in dispatch return handler(request, *args, **kwargs) File "/Users/myname/.virtualenvs/myenv/lib/python2.7/site-packages/django/views/generic/base.py", line 157, in get context = self.get_context_data(**kwargs) File "/Users/myname/Dropbox/blog/views.py", line 22, in get_context_data import ipdb; ipdb.set_trace() ImportError: No module named ipdb ```
2016/05/16
[ "https://Stackoverflow.com/questions/37254610", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1695507/" ]
You have code like below; whatever you don't need, just remove it.

```
String address = addresses.get(0).getAddressLine(0);
String city = addresses.get(0).getLocality();
String state = addresses.get(0).getAdminArea();
String country = addresses.get(0).getCountryName();
String postalCode = addresses.get(0).getPostalCode();
```
You appear to be using the JavaScript version of the Google Places API. Let me know if I've guessed incorrectly! All you need to do is add `&region=US` when you load the Google Maps API. E.g.:

```
<script src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&libraries=places&region=US">
```

Note that this will still show the country for Autocomplete predictions outside of the United States (which helps disambiguate "Paris, TX" from "Paris, France"). See the section on [Region localization](https://developers.google.com/maps/documentation/javascript/localization#Region) in the Google Maps APIs documentation for more details on this parameter. If you need even more control over the appearance, you can use the [`AutocompleteService` programmatic API](https://developers.google.com/maps/documentation/javascript/places-autocomplete#place_autocomplete_service) to build your own UI, rather than using the `Autocomplete` control.
2,372
63,012,839
I'm looking for a fast way to fill a QTableModel with over 10000 rows of data in python. Iterating over the items in a double for-loop takes over 40 seconds.
2020/07/21
[ "https://Stackoverflow.com/questions/63012839", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6796677/" ]
You don't need to explicitly add items to a QTableModel; you can build your own model around an existing data structure, like a list of lists or a numpy array, as below. (Note that the underlying data is stored as `self._data`: an instance attribute named `data` would shadow the `data()` method that Qt calls.)

```
from PyQt5 import QtWidgets, QtCore, QtGui
import sys
from PyQt5.QtCore import QModelIndex, Qt
import numpy as np

class MyTableModel(QtCore.QAbstractTableModel):
    def __init__(self, data=None, parent=None):
        super().__init__(parent)
        # store under a private name so the data() method is not shadowed
        self._data = data if data is not None else [[]]

    def headerData(self, section: int, orientation: Qt.Orientation, role: int):
        if role == QtCore.Qt.DisplayRole:
            if orientation == Qt.Horizontal:
                return "Column " + str(section)
            else:
                return "Row " + str(section)

    def columnCount(self, parent=None):
        return len(self._data[0])

    def rowCount(self, parent=None):
        return len(self._data)

    def data(self, index: QModelIndex, role: int):
        if role == QtCore.Qt.DisplayRole:
            row = index.row()
            col = index.column()
            return str(self._data[row][col])

if __name__ == '__main__':
    app = QtWidgets.QApplication(sys.argv)
    # data = [[11, 12, 13, 14, 15],
    #         [21, 22, 23, 24, 25],
    #         [31, 32, 33, 34, 35]]
    data = np.random.random((10000, 100)) * 100
    model = MyTableModel(data)
    view = QtWidgets.QTableView()
    view.setModel(model)
    view.show()
    sys.exit(app.exec_())
```
I would recommend creating a numpy array of QStandardItem and filling the model using the appendColumn function:

```
start = time.time()
data = np.empty((rows, cols), dtype=object)  # generate empty data array

#### Fill the data array with strings here ###

items = np.vectorize(QStandardItem)(data)  # generate QStandardItem array
print(time.time() - start, "seconds to create items")

start = time.time()
# iterate over columns (because we have significantly fewer columns than rows)
for i in range(cols):
    self.myQTableModel.appendColumn(items[:, i])
self.myQTableModel.setHorizontalHeaderLabels(headerarray)  # set headers
print(time.time() - start, "seconds to load DB")
```

result for 16000 rows and 7 cols:

```
0.346372127532959 seconds to create items
1.1745991706848145 seconds to load DB
```
2,373
7,243,364
Well, probably a strange question, I know. But searching Google for Python and braces gives only one type of answer. What I want to ask is something low-level and probably not very pythonic. Is there a clean way to write a function that works with:

```
>>>my_function arg1, arg2
```

instead of

```
>>>my_function(arg1, arg2)
```

? I'm searching for a way to make a function work like the old print (in Python < 3.0), where you don't need to use parentheses. If that's not so simple, is there a way to see the code for "print"?
2011/08/30
[ "https://Stackoverflow.com/questions/7243364", "https://Stackoverflow.com", "https://Stackoverflow.com/users/581732/" ]
You can do that sort of thing in Ruby, but you can't in Python. Python values clean language and explicit and obvious structure. > > >>> import this > > The Zen of Python, by Tim Peters > > > Beautiful is better than ugly. > > **Explicit is better than implicit.** > > Simple is better than complex. > > Complex is better than complicated. > > Flat is better than nested. > > Sparse is better than dense. > > Readability counts. > > Special cases aren't special enough to break the rules. > > Although practicality beats purity. > > Errors should never pass silently. > > Unless explicitly silenced. > > In the face of ambiguity, refuse the temptation to guess. > > There should be one-- and preferably only one --obvious way to do it. > > Although that way may not be obvious at first unless you're Dutch. > > Now is better than never. > > Although never is often better than \*right\* now. > > If the implementation is hard to explain, it's a bad idea. > > If the implementation is easy to explain, it may be a good idea. > > Namespaces are one honking great idea -- let's do more of those! > > >
The requirement for braces lies in the Python interpreter and not in the code for the `print` method (or any other method) itself. (And as eph points out in the comments, `print` is a statement not a method.)
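A small sketch (Python 3) making that concrete: `print` itself is an ordinary function object you can rebind and inspect; the parentheses are demanded by the interpreter's call syntax, not by anything inside `print`:

```python
# print is a plain builtin function object in Python 3, so it can be
# aliased and passed around like any other value...
p = print
p("hello from an alias of print")

# ...but calling it (or p) still needs parentheses, because that is
# how the interpreter's grammar expresses a call.
print(type(print).__name__)  # builtin_function_or_method
```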
2,374
48,247,921
I'm attempting to get the TensorFlow Object Detection API <https://github.com/tensorflow/models/tree/master/research/object_detection> working on Windows by following the install instructions <https://github.com/tensorflow/models/tree/master/research/object_detection> Which seem to be for Linux/Mac. I can only get this to work if I put a script in the directory I cloned the above repo to. If I put the script in any other directory I get this error: ``` ModuleNotFoundError: No module named 'utils' ``` I suspect that the cause is not properly doing the Windows equivalent of this command listed on the install instructions above: ``` # From tensorflow/models/research/ export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim ``` I'm using Windows 10, Python 3.6, and TensorFlow 1.4.0 if that matters. Of course, I've Googled on this concern and found various links, for example, this: <https://github.com/tensorflow/models/issues/1747> But this has not resolved the concern. Any suggestions on how to resolve this? 
Here are the steps I've done so far specifically: --- EDIT: these steps work now after updating to incorporate RecencyEffect's answer ------------------------------------------------------------------------------- 1) Install TensorFlow and related tools via pip3 2) From an administrative command prompt, run the following: ``` pip3 install pillow pip3 install lxml pip3 install jupyter pip3 install matplotlib ``` 3) Clone the TensorFlow "models" repository to the Documents folder, in my case ``` C:\Users\cdahms\Documents\models ``` 4) Downloaded Google Protobuf <https://github.com/google/protobuf> Windows v3.4.0 release "protoc-3.4.0-win32.zip" (I tried the most current 3.5.1 and got errors on the subsequent steps, so I tried 3.4.0 per this vid <https://www.youtube.com/watch?v=COlbP62-B-U&list=PLQVvvaa0QuDcNK5GeCQnxYnSSaar2tpku&index=1> and the protobuf compile worked) 5) Extract the Protobuf download to Program Files, specifically ``` "C:\Program Files\protoc-3.4.0-win32" ``` 6) CD into the models\research directory, specifically ``` cd C:\Users\cdahms\Documents\models\research ``` 7) Executed the protobuf compile, specifically ``` “C:\Program Files\protoc-3.4.0-win32\bin\protoc.exe” object_detection/protos/*.proto --python_out=. ``` Navigate to: ``` C:\Users\cdahms\Documents\models\research\object_detection\protos ``` and verify the .py files were created successfully as a result of the compile (only the .proto files were there to begin with) 8) cd to the object\_detection directory, ex: ``` cd C:\Users\cdahms\Documents\models\research\object_detection ``` then enter the following at a command prompt to start the object\_detection\_tutorial.ipynb Jupyter Notebook ``` jupyter notebook ``` 9) In the Jupyter Notebook, choose "object\_detection\_tutorial.ipynb" -> Cell -> Run all, the example should run within the notebook 10) In the Jupyter Notebook, choose “File” -> “Download As” -> “Python”, and save the .py version of the notebook to the same directory, i.e. 
``` C:\Users\cdahms\Documents\models\research\object_detection\object_detection_tutorial.py ``` You can now open the script in your chosen Python editor (ex. PyCharm) and run it. --- EDIT per RecencyEffect's answer below, if you follow these additional steps you will be able to run the object\_detection\_tutorial.py script from any directory ---------------------------------------------------------------------------------------------------------------------------------------------------------------- 11) Move the script to any other directory, then attempt to run it and you will find you will get the error: ``` ModuleNotFoundError: No module named 'utils' ``` because we have not yet informed Python how to find the utils directory that these lines use: ``` from utils import label_map_util from utils import visualization_utils as vis_util ``` To resolve this . . . 12) Go to System -> Advanced system settings -> Environment Variables . . . -> New, and add a variable with the name PYTHONPATH and these values: [![enter image description here](https://i.stack.imgur.com/vL13P.png)](https://i.stack.imgur.com/vL13P.png) 13) Also under Environment Variables, edit PATH and add %PYTHONPATH% like so: [![enter image description here](https://i.stack.imgur.com/jdpY3.png)](https://i.stack.imgur.com/jdpY3.png) 14) Reboot to make sure these path changes take effect 15) Pull up a command prompt and run the command "set", verify PYTHONPATH is there and PYTHONPATH and PATH contained the values from the previous steps. 16) Now you can copy the "object\_detection\_tutorial.py" to any other directory and it will run
2018/01/14
[ "https://Stackoverflow.com/questions/48247921", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4835204/" ]
As mentioned in the comment, `utils` is a submodule so you actually need to add `object_detection` to `PYTHONPATH`, not `object_detection/utils`. I'm glad it worked for you.
From the `research` directory:

1. `export PATH=~/anaconda3/bin:$PATH`
2. `git clone https://github.com/tensorflow/models.git`
3. ``export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim``
4. `protoc object_detection/protos/string_int_label_map.proto --python_out=.`

Then from the `object_detection` directory:

5. `protoc protos/string_int_label_map.proto --python_out=.`
6. `jupyter notebook`
2,378
19,819,443
I'm writing code in Python. Within the code, a blackbox application written in C++ is called. Sometimes this C++ application does not converge and an error message comes up. This error does not terminate the Python code, but it pauses the run. After clicking OK on the error message, the Python code continues running until either the end of the code or until the message comes up again. Is there a way to handle this problem within Python, so that the code detects the message and clicks OK? Thanks
2013/11/06
[ "https://Stackoverflow.com/questions/19819443", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2961551/" ]
I believe that in your case the Python program doesn't actually continue execution until the program started as a subprocess completes - this is the behaviour of [subprocess.check\_call](http://docs.python.org/2/library/subprocess.html#subprocess.check_call), which you say is used to start the subprocess. As long as you start the subprocess with `check_call`, there is pretty much no way to find out the intermediate subprocess state until it actually terminates and you get the exit code. So, instead you may use the [subprocess.Popen()](http://docs.python.org/2/library/subprocess.html#subprocess.Popen) constructor to create a [Popen](http://docs.python.org/2/library/subprocess.html#popen-objects) object, which starts the subprocess but doesn't wait for it to complete. This way you can verify the subprocess state implicitly by checking its other outputs, if any exist (for example, you can read the subprocess output if you know it writes its errors or other messages to stdout or stderr). P.S. Better solution - fix the C++ program.
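A minimal sketch of that `Popen` approach (here `['echo', 'converged']` is just a stand-in for your actual C++ command line):

```python
import subprocess

# Start the tool without blocking; check_call would not return until exit.
proc = subprocess.Popen(
    ["echo", "converged"],  # stand-in for the real C++ application
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    universal_newlines=True,
)

# React to the tool's messages as they appear, e.g. a non-convergence
# warning, instead of waiting blindly for the process to finish.
for line in proc.stdout:
    print("tool said:", line.strip())

proc.wait()
print("exit code:", proc.returncode)
```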
Timur is correct. Unless the C++ program explicitly provides a way for you to check the status, respond to the dialog, or make it run without showing the dialog, there is nothing built into Python that can solve this problem as far as I know. There are some workarounds that might work for you, though. Depending on your platform, you could use a window manager extension module (like pywin32 or python-xlib) to search for the dialog box and programmatically click the OK. If you can use Jython, you can use [Sikuli](http://www.sikuli.org/), which is a very nice, easy to use visual automation package.
2,381
15,106,713
I've searched the databases and cookbooks but can't seem to find the right answer. I have a very simple Python code which sums up self powers in a range. I need the last ten digits of this very, very large number, and I've tried `getcontext().prec`; however, I'm still hitting a limit. Here's the code:

```
def SelfPowers(n):
    total = 0
    for i in range(1,n):
        total += (i**i)
    return(total)

print SelfPowers(n)
```

How can I see all those beautiful numbers? It prints relatively fast on my quad-core. This is just for fun for Project Euler, Problem #48 - no spoilers please, I DO NOT WANT THE SOLUTION and I don't want the work done for me, so could you please point me in the right direction? Thanks, mp
2013/02/27
[ "https://Stackoverflow.com/questions/15106713", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2082350/" ]
If you want the *last ten digits* of a number, don't compute the whole thing (it will take too much memory and time). Instead, consider using the "three-argument" form of `pow` to compute powers mod a specific base, and you will find the problem is much easier.
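To illustrate just the mechanism, with throwaway numbers rather than the Euler inputs: the three-argument form reduces modulo the base at every step, so intermediate values stay small.

```python
# pow(base, exp, mod) == (base ** exp) % mod, computed without ever
# building the full power in memory.
assert pow(3, 4, 5) == (3 ** 4) % 5 == 1

# Keeping only the last ten digits of a number means working modulo 10**10.
mod = 10 ** 10
assert pow(7, 7, mod) == (7 ** 7) % mod
print(pow(7, 7, mod))  # 823543
```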
Testing on Python 3.2 I was able to

```
print(SelfPowers(10000))
```

though it took some seconds. How large a number were you thinking?

**Edit:** It looks like you want to use `1000`? In that case, upgrade to Python 3 and you should be fine.
2,382
58,850,484
I want to save the list output below to a text file:

```
with open("selectedProd.txt", 'w') as f:
    for x in myprod["prod"]:
        if x["type"]=="discount" or x["type"]=="normal" or x["type"]=="members" :
            f.write(x["name"],x["id"], x["price"])
```

I'm getting the error:

```
f.write(x["name"],x["id"], x["price"])
TypeError: function takes exactly 1 argument (3 given)
```

The expected text file output is below:

```
item1 111 2.00
item2 222 5.00
item3 444 1.00
item4 666 5.00
item5 212 7.00
```

Please advise. Thanks.

**Both solutions below work: one for Python 2.7 and one for Python 3.6 and above.**
2019/11/14
[ "https://Stackoverflow.com/questions/58850484", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10411973/" ]
I think I found a solution, but if there is something better, please let me know... I added `this.dialogRef.closeAll()`:

```
class UserEffects {
  constructor(
    private actions$: Actions,
    private dialogRef: MatDialog,
    private notificationService: NotificationService,
  ) {}

  @Effect()
  addNewUser$ = this.actions$.pipe(
    ofType(actions.UserActionTypes.addNewUser),
    mergeMap((user: actions.addNewUser) =>
      this.userService.createUser(user.user).pipe(
        map(() => {
          new actions.LoadUsers(),
          this.notificationService.success("User added successfully!");
          this.dialogRef.closeAll(); <--- //this is the key
        }),
        catchError(error => of(new actions.Error(error.error)))
      )
    ));
}
```

EDIT: the modal is closed, but I get the error

> core.js:6014 ERROR Error: Effect "UserEffects.addNewUser$" dispatched an invalid action: undefined
>
> TypeError: Actions must be objects

Any help? Thanks
In the constructor of your `@Effect`, you need to provide the dependency: ``` private dialogRef: MatDialogRef<MyDialogComponentToClose> ``` And you need to import `MatDialogModule` inside your module where your effect is.
2,383
49,440,741
I have a python code base where I have refactored a module (file) into a package (directory) as the file was getting a bit large and unmanageable. However, I cannot get my unit tests running as desired with the new structure. I place my unit test files directly alongside the code it tests (this is a requirement and cannot change - no separate `test` directories): ``` app/ +-- app.py +-- config.py +-- config_test.py +-- model/ | +-- __init__.py | +-- base.py | +-- base_test.py | +-- square.py | +-- square_test.py +-- test.py +-- web.py +-- web_test.py ``` Previously, the `model` package was the `model.py` module with a `model_test.py` test suite. There is a top-level test runner - `test.py` and that works fine. It finds the test cases inside the `model` directory and runs them successfully (it uses the `discovery` feature of `unittest` - see end of post for `test.py`): ``` $ python test.py ``` However, I also want to be able to directly run the test cases in the `model` directory: ``` $ python model/base_test.py ``` This does not work, because the test is inside the package directory. The imports in the code fail because they are either not in a module when imported directly by the test suite or the search path is wrong. For instance, in `model/square.py`, I can import `base.py` in one of two ways: ``` from model import Base ``` or ``` from .base import Base ``` These both work fine when `model` is imported. But when inside the `model` test suites, I cannot import `square` because `square` cannot import `base`. `square_test.py` contains imports like: ``` import unittest from square import Square ... test cases ... if __name__ == '__main__': unittest.main() ``` For the first type of import in `square.py` (`from model import Base`), I get the error: ``` ModuleNotFoundError: No module named 'model' ``` Fair enough, `sys.path` has `/home/camh/app/model` and there is no `model` module in there. 
For the second type of import in `square.py` (`from .base import Base`), I get the error: ``` ImportError: attempted relative import with no known parent package ``` I cannot figure out how to do my imports that allows me to have tests alongside the unit-under-test and be directly runnable. I want directly runnable test suites as often I do not want to run the entire set of tests, but just target individual tests: ``` $ python model/square_test.py SquareTest.test_equal_sides ``` I cannot do that with my test runner because it just uses discovery to run all the tests and discovery is not compatible with specifying individual test suites or test functions. My test runner (`test.py`) is just: ``` import os, sys sys.argv += ['discover', os.path.dirname(sys.argv[0]), '*_test.py'] unittest.main(module=None) ```
2018/03/23
[ "https://Stackoverflow.com/questions/49440741", "https://Stackoverflow.com", "https://Stackoverflow.com/users/23744/" ]
You can invoke the `unittest` module from the command line with arguments: ``` python -m unittest model.square_test ``` If you are using python3 you can use file names too: ``` python3 -m unittest model/square_test.py ```
Suggestions: add `app/__init__.py` and treat `app` as the package instead of `model`. One way is for all tests to use explicit imports such as `from app.model.square import Square`. The relative imports should then be fine, as long as you run `nosetests -vw .` in the `app/` directory. This all comes at the price of removing `app/test.py`. Another common mistake after refactoring is that the `.pyc` files are not all removed and regenerated.
2,391
30,314,368
I have a CSV file that looks something like this: ``` 2014-6-06 08:03:19, 439105, 1053224, Front Entrance 2014-6-06 09:43:21, 439105, 1696241, Main Exit 2014-6-06 10:01:54, 1836139, 1593258, Back Archway 2014-6-06 11:34:26, 845646, external, Exit 2014-6-06 04:45:13, 1464748, 439105, Side Exit ``` I was wondering how to delete a line if it includes the word "external"? I saw another [post](https://stackoverflow.com/questions/21970932/remove-line-from-file-if-containing-word-from-another-txt-file-in-python-bash) on SO that addressed a very similar issue, but I don't understand completely... I tried to use something like this (as explained in the linked post): ``` TXT_file = 'whatYouWantRemoved.txt' CSV_file = 'comm-data-Fri.csv' OUT_file = 'OUTPUT.csv' ## From the TXT, create a list of domains you do not want to include in output with open(TXT_file, 'r') as txt: domain_to_be_removed_list = [] ## for each domain in the TXT ## remove the return character at the end of line ## and add the domain to list domains-to-be-removed list for domain in txt: domain = domain.rstrip() domain_to_be_removed_list.append(domain) with open(OUT_file, 'w') as outfile: with open(CSV_file, 'r') as csv: ## for each line in csv ## extract the csv domain for line in csv: csv_domain = line.split(',')[0] ## if csv domain is not in domains-to-be-removed list, ## then write that to outfile if (csv_domain not in domain_to_be_removed_list): outfile.write(line) ``` The text file just held the one word "external" but it didn't work.... and I don't understand why. What happens is that the program will run, and the output.txt will be generated, but nothing will change, and no lines with "external" are taken out. I'm using Windows and python 3.4 if it makes a difference. Sorry if this seems like a really simple question, but I'm new to python and any help in this area would be greatly appreciated, thanks!!
2015/05/18
[ "https://Stackoverflow.com/questions/30314368", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4573703/" ]
It looks like you are grabbing the first element after you split the line. That is going to give you the date, according to your example CSV file. What you probably want instead (again, assuming the example reflects how the file will always look) is to grab the 3rd element, so something like this: ``` csv_domain = line.split(',')[2].strip() ``` The `.strip()` matters here: there is a space after each comma in your sample data, so without it you would be comparing `' external'` to `'external'` and never get a match. But, like one of the comments said, this isn't necessarily foolproof. You are assuming none of the individual cells will have commas. Based on your example that might be a safe assumption, but in general when working with CSV files I recommend working with the [Python csv module](https://docs.python.org/3/library/csv.html).
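A minimal sketch of that `csv`-module approach, filtering on the third column as in the question; the in-memory `StringIO` files are just for the demo, with real files you would pass open file objects instead:

```python
import csv
import io

def filter_rows(infile, outfile, banned, column=2):
    """Copy CSV rows from infile to outfile, skipping any row whose
    given column (whitespace-stripped) is in the banned set."""
    writer = csv.writer(outfile)
    for row in csv.reader(infile):
        if row and row[column].strip() in banned:
            continue  # drop this row
        writer.writerow(row)

# Demo on two rows from the question, using in-memory files.
src = io.StringIO(
    "2014-6-06 08:03:19, 439105, 1053224, Front Entrance\n"
    "2014-6-06 11:34:26, 845646, external, Exit\n"
)
dst = io.StringIO()
filter_rows(src, dst, {"external"})
print(dst.getvalue())
```

The `csv.reader` handles quoting, so this also stays correct if a cell ever contains a comma.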
if you can go with something other than Python, grep would work like this: ``` grep "some regex" file.csv > newfile.csv ``` which gives you ONLY the lines that match the regex, while: ``` grep -v "some regex" file.csv > newfile.csv ``` gives everything BUT the lines matching the regex (note the pattern comes before the filename)
2,392
52,870,674
When I execute the following command I get the below error from Tensorflow "missing file or folder". I've checked all online solutions for this error, but nothing is resolving my error. `python generate_tfrecord.py --csv_input=images\train_labels.csv --image_dir=images\train --output_path=train.record` **The error:** > > > ``` > File "generate_tfrecord.py", line 110, in > tf.app.run() > File "C:\anaconda3\envs\tensorflowc\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run > _sys.exit(main(argv)) > File "generate_tfrecord.py", line 101, in main > tf_example = create_tf_example(group, path) > File "generate_tfrecord.py", line 56, in create_tf_example > encoded_jpg = fid.read() > File "C:\anaconda3\envs\tensorflowc\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 125, in read > self._preread_check() > File "C:\anaconda3\envs\tensorflowc\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 85, in _preread_check > compat.as_bytes(self.__name), 1024 * 512, status) > File "C:\anaconda3\envs\tensorflowc\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 519, in exit > c_api.TF_GetCode(self.status.status)) > tensorflow.python.framework.errors_impl.NotFoundError: NewRandomAccessFile failed to Create/Open: C:\tensorflowc\models\research\object_detection\images\train\tr1138a1a1_3_lar : The system cannot find the file specified. > ; No such file or directory > > ``` > >
2018/10/18
[ "https://Stackoverflow.com/questions/52870674", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9933958/" ]
I resolved the problem. If you are creating the `.csv` file using `xml_to_csv.py`, you have to check that the file extensions, such as .jpg, .png, .jpeg, are present in the `train_labels.csv` file. In my case, the extension names were missing! [![enter image description here](https://i.stack.imgur.com/JtuNs.png)](https://i.stack.imgur.com/JtuNs.png) **Solution:** add the extensions as in the example below and run the following command: ``` python generate_tfrecord.py --csv_input=images\train_labels.csv --image_dir=images\train --output_path=train.record ``` It will work!
My CSV file contained image names with the .jpg extension and I still had the error the OP posted. I tried solving it with: ``` python3 generate_tf_record.py --csv_input=data/train_labels.csv --output_path=train.record python3 generate_tf_record.py --csv_input=data/test_labels.csv --output_path=test.record ``` All images were in one folder, and the flags were as shown below: ``` flags.DEFINE_string('csv_input', '','data/train_labels.csv') flags.DEFINE_string('output_path','', 'train.record') flags.DEFINE_string('image_dir', '', 'images') ``` The problem was resolved when I copied the saved record file from the main folder to the data folder.
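As a quick sanity check before generating the records, something like the following could list which names in the label file lack an extension; the filenames below are made up for illustration, only `tr1138a1a1_3_lar` is taken from the error message above:

```python
import os

def names_without_extension(filenames):
    """Return the names that lack a file extension such as .jpg/.png."""
    return [name for name in filenames if not os.path.splitext(name)[1]]

# Hypothetical contents of the filename column of train_labels.csv;
# only "tr1138a1a1_3_lar" comes from the error message above.
labels = ["tr1138a1a1_3_lar", "img_001.jpg", "img_002.png"]
print(names_without_extension(labels))  # → ['tr1138a1a1_3_lar']
```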
2,394
51,505,249
``` list = [1,2,3,4,5,6,1,2,56,78,45,90,34] range = ["0-25","25-50","50-75","75-100"] ``` I am coding in Python. I want to sort a list of integers into ranges of numbers and store them in different lists. How can I do it? I have specified my ranges in the range list.
2018/07/24
[ "https://Stackoverflow.com/questions/51505249", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10033784/" ]
Create a dictionary with the max-value of each *bin* as key. Iterate through your numbers and append them to the list that's the value of each *bin-key*: ``` l = [1,2,3,4,5,6,1,2,56,78,45,90,34] # each range covers 25 - and adjacent ranges share start/end values. # I presume [0-25[ ranges def inRanges(data,maxValues): """Sorts elements of data into bins that have a max-value. Max-values are given by the list maxValues which holds the exclusive upper bound of the bins.""" d = {k:[] for k in maxValues} # init all keys to empty lists for n in data: key = min(x for x in maxValues if x>n) # get key d[key].append(n) # add number return d sortEm = inRanges(l,[25,50,75,100]) print(sortEm) print([ x for x in sortEm.values()]) ``` Output: ``` {25: [1, 2, 3, 4, 5, 6, 1, 2], 50: [45, 34], 75: [56], 100: [78, 90]} [[1, 2, 3, 4, 5, 6, 1, 2], [45, 34], [56], [78, 90]] ```
Another stable bin approach for your special case (regular-interval bins) would be to use a calculated key - this gets rid of the key search in each step. Stable here means the order of numbers in each bin is the same as in the input data: ``` def inRegularIntervals(data, interval): """Sorts elements of data into bins of regular sizes. The size of each bin is given by 'interval'.""" # init dict so keys are ordered - collections.defaultdict(list) # would be faster - but this works fine for lists of a couple # of thousand numbers # if random key order is ok, shorten this to d = {} d = {k:[] for k in range(0, max(data), interval)} for n in data: key = n // interval # get key key *= interval d.setdefault(key, []) d[key].append(n) # add number return d ``` Use on random data: ``` from random import choices data = choices(range(100), k = 50) data.append(135) # add a bigger value to see the gapped keys binned = inRegularIntervals(data, 25) print(binned) ``` Output (\n and spaces added): ``` { 0: [19, 9, 1, 0, 15, 22, 4, 9, 12, 7, 12, 9, 16, 2, 7], 25: [25, 31, 37, 45, 30, 48, 44, 44, 31, 39, 27, 36], 50: [50, 50, 58, 60, 70, 69, 53, 53, 67, 59, 52, 64], 75: [86, 93, 78, 93, 99, 98, 95, 75, 88, 82, 79], 100: [], 125: [135], } ``` --- To sort the binned lists in place, use ``` for k in binned: binned[k].sort() ``` to get: ``` { 0: [0, 1, 2, 4, 7, 7, 9, 9, 9, 12, 12, 15, 16, 19, 22], 25: [25, 27, 30, 31, 31, 36, 37, 39, 44, 44, 45, 48], 50: [50, 50, 52, 53, 53, 58, 59, 60, 64, 67, 69, 70], 75: [75, 78, 79, 82, 86, 88, 93, 93, 95, 98, 99], 100: [], 125: [135]} ```
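Both answers either scan the edges or compute a key per element; for what it's worth, when the bin edges are arbitrary rather than regularly spaced, the stdlib `bisect` module can locate each element's bin in logarithmic time. A sketch using the edges from the question (the function name is mine):

```python
import bisect

def bin_by_edges(data, edges):
    """Sort numbers into len(edges) + 1 bins: bin 0 holds n < edges[0],
    bin i holds edges[i-1] <= n < edges[i], the last bin holds the rest."""
    bins = [[] for _ in range(len(edges) + 1)]
    for n in data:
        # bisect_right gives the index of the first edge strictly > n
        bins[bisect.bisect_right(edges, n)].append(n)
    return bins

l = [1, 2, 3, 4, 5, 6, 1, 2, 56, 78, 45, 90, 34]
print(bin_by_edges(l, [25, 50, 75]))
# → [[1, 2, 3, 4, 5, 6, 1, 2], [45, 34], [56], [78, 90]]
```

Using `bisect_right` (not `bisect_left`) makes the bins half-open on the right, so a value equal to an edge lands in the higher bin, matching the `[0-25[` convention above.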
2,397
30,513,482
I'm trying to export two overloaded functions to Python. So I first define the pointers to these functions and then I use them to expose the functions to Python. ``` BOOST_PYTHON_MODULE(mylib){ // First define pointers to overloaded function double (*expt_pseudopot02_v1)(double,double,double,const VECTOR&, int,int,int,double,const VECTOR&, int,int,int,double,const VECTOR& ) = &pseudopot02; boost::python::list (*expt_pseudopot02_v2)(double, double, double, const VECTOR&, int,int,int,double, const VECTOR&, int,int,int,double, const VECTOR&, int, int ) = &pseudopot02; // Now export def("pseudopot02", expt_pseudopot02_v1); // this works fine! //def("pseudopot02", expt_pseudopot02_v2); // this one gives the problem! } ``` The first export function works fine. The second (presently commented) fails, giving the error: ``` template argument deduction/substitution failed ``` it also prints this explanation: ``` ...../boost_1_50_0/boost/python/make_function.hpp:104:59: note: mismatched types ‘RT (ClassT::*)(T0, T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14)const volatile’ and ‘boost::python::list (*)(double, double, double, const VECTOR&, int, int, int, double, const VECTOR&, int, int, int, double, const VECTOR&, int, int)’ f,default_call_policies(), detail::get_signature(f)); ^ ``` which doesn't tell me much except for the general idea that there is something with function signature. So don't have any idea on the nature of the problem and hence how to fix it. It doesn't seem there was a similar problem discussed here either. Edit: Here I provide requested minimal,complete, verifiable code: In file libX.cpp ``` #include <boost/python.hpp> #include "PP.h" using namespace boost::python; #ifdef CYGWIN BOOST_PYTHON_MODULE(cygX){ #else BOOST_PYTHON_MODULE(libX){ #endif // This set will work! 
// double (*expt_PP_v1)(const VECTOR& ) = &PP; // boost::python::list (*expt_PP_v2)(const VECTOR&,int) = &PP; // This one - only the first function (returning double) // the function returning boost::python::list object causes the error double (*expt_PP_v1)(double,double,double,const VECTOR&,int,int,int,double,const VECTOR&,int,int,int,double,const VECTOR&) = &PP; boost::python::list (*expt_PP_v2)(double, double, double, const VECTOR&, int,int,int,double, const VECTOR&, int,int,int,double, const VECTOR&, int, int ) = &PP; def("PP", expt_PP_v1); def("PP", expt_PP_v2); } ``` File PP.h ``` #ifndef PP_H #define PP_H #include <boost/python.hpp> using namespace boost::python; class VECTOR{ public: double x,y,z; VECTOR(){ x = y = z = 0.0; } ~VECTOR(){ } VECTOR& operator=(const double &v){ x=y=z=v; return *this; } }; /* This set of functions will work double PP(const VECTOR& R, int is_derivs,VECTOR& dIdR ); boost::python::list PP(const VECTOR& R,int is_derivs); double PP(const VECTOR& R ); */ // The following - will not, only the one returning double double PP(double C0, double C2, double alp, const VECTOR& R, int nxa,int nya, int nza, double alp_a, const VECTOR& Ra, int nxb,int nyb, int nzb, double alp_b, const VECTOR& Rb, int is_normalize, int is_derivs, VECTOR& dIdR, VECTOR& dIdA, VECTOR& dIdB ); boost::python::list PP(double C0, double C2, double alp, const VECTOR& R, int nxa,int nya, int nza, double alp_a, const VECTOR& Ra, int nxb,int nyb, int nzb, double alp_b, const VECTOR& Rb, int is_normalize, int is_derivs ); double PP(double C0, double C2, double alp, const VECTOR& R, int nxa,int nya, int nza, double alp_a, const VECTOR& Ra, int nxb,int nyb, int nzb, double alp_b, const VECTOR& Rb ); #endif // PP_H ``` In file PP.cpp ``` #include "PP.h" /* This set will work double PP(const VECTOR& R, int is_derivs,VECTOR& dIdR ){ dIdR = 0.0; return 0.0; } boost::python::list PP(const VECTOR& R,int is_derivs){ VECTOR dIdR; double I = PP(R, is_derivs, dIdR); boost::python::list 
res; res.append(0.0); if(is_derivs){ res.append(dIdR); } return res; } double PP(const VECTOR& R ){ VECTOR dIdR; double res = PP(R, 0, dIdR); return res; } */ // The following functions will not always work double PP(double C0, double C2, double alp, const VECTOR& R, int nxa,int nya, int nza, double alp_a, const VECTOR& Ra, int nxb,int nyb, int nzb, double alp_b, const VECTOR& Rb, int is_normalize, int is_derivs, VECTOR& dIdR, VECTOR& dIdA, VECTOR& dIdB ){ dIdR = 0.0; dIdA = 0.0; dIdB = 0.0; return 0.0; } boost::python::list PP(double C0, double C2, double alp, const VECTOR& R, int nxa,int nya, int nza, double alp_a, const VECTOR& Ra, int nxb,int nyb, int nzb, double alp_b, const VECTOR& Rb, int is_normalize, int is_derivs ){ VECTOR dIdA, dIdR, dIdB; double I = PP(C0,C2,alp,R, nxa,nya,nza,alp_a,Ra, nxb,nyb,nzb,alp_b,Rb, is_normalize,is_derivs,dIdR,dIdA,dIdB); boost::python::list res; res.append(I); if(is_derivs){ res.append(dIdR); res.append(dIdA); res.append(dIdB); } return res; } double PP(double C0, double C2, double alp, const VECTOR& R, int nxa,int nya, int nza, double alp_a, const VECTOR& Ra, int nxb,int nyb, int nzb, double alp_b, const VECTOR& Rb ){ VECTOR dIdR,dIdA,dIdB; double res = PP(C0, C2, alp, R, nxa,nya,nza,alp_a,Ra, nxb,nyb,nzb,alp_b,Rb, 1, 0, dIdR, dIdA, dIdB); return res; } ``` So, it looks to me like template argument deduction gets confused when the number of parameters is large. I checked several times that the signatures in libX.cpp, PP.cpp and PP.h match among themselves and do not overlap with those of the other overloaded functions. So I still have no clue what the source of the problem is.
2015/05/28
[ "https://Stackoverflow.com/questions/30513482", "https://Stackoverflow.com", "https://Stackoverflow.com/users/938720/" ]
In short, the functions being exposed exceed the default maximum arity of 15. As noted in the [configuration documentation](http://www.boost.org/doc/libs/1_58_0/libs/python/doc/v2/configuration.html), one can define `BOOST_PYTHON_MAX_ARITY` to control the maximum allowed arity of any function, member function, or constructor being wrapped and exposed through Boost.Python. In this particular case, one of the overloads has an arity of 16, so one could define the max arity before including `boost/python.hpp`: ```cpp #define BOOST_PYTHON_MAX_ARITY 16 #include <boost/python.hpp> ``` As of the time of this writing, Boost.Python (1.58) does not use C++11's variadic templates. Instead, if uses preprocessor macro expansions to provide template specializations and allows users to configure maximum arity through the `BOOST_PYTHON_MAX_ARITY` macro. --- Here is a complete minimal example [demonstrating](http://coliru.stacked-crooked.com/a/540f1b699890997f) increasing the max arity: ```cpp #define BOOST_PYTHON_MAX_ARITY 16 #include <boost/python.hpp> // Functions have 5 parameters per line. /// @brief Mockup spam function with 14 parameters. double spam( int, int, int, int, int, // 5 int, int, int, int, int, // 10 int, int, int, int // 14 ) { return 42; } /// @brief Mockup spam function with 16 parameters. 
boost::python::list spam( int, int, int, int, int, // 5 int, int, int, int, int, // 10 int, int, int, int, int, // 15 int // 16 ) { boost::python::list list; return list; } BOOST_PYTHON_MODULE(example) { namespace python = boost::python; double (*spam_14)( int, int, int, int, int, // 5 int, int, int, int, int, // 10 int, int, int, int // 14 ) = &spam; python::list (*spam_16)( int, int, int, int, int, // 5 int, int, int, int, int, // 10 int, int, int, int, int, // 15 int // 16 ) = &spam; python::def("spam", spam_14); python::def("spam", spam_16); } ``` Interactive usage: ```python >>> import example >>> assert 42 == example.spam(*range(14)) >>> assert isinstance(example.spam(*range(16)), list) >>> print example.spam.__doc__ spam( (int)arg1, (int)arg2, (int)arg3, (int)arg4, (int)arg5, (int)arg6, (int)arg7, (int)arg8, (int)arg9, (int)arg10, (int)arg11, (int)arg12, (int)arg13, (int)arg14) -> float : C++ signature : double spam(int,int,int,int,int, int,int,int,int,int, int,int,int,int) spam( (int)arg1, (int)arg2, (int)arg3, (int)arg4, (int)arg5, (int)arg6, (int)arg7, (int)arg8, (int)arg9, (int)arg10, (int)arg11, (int)arg12, (int)arg13, (int)arg14, (int)arg15, (int)arg16) -> list : C++ signature : boost::python::list spam(int,int,int,int,int, int,int,int,int,int, int,int,int,int,int, int) ``` Without defining the max arity, the same code [fails to compile](http://coliru.stacked-crooked.com/a/00e7b12f45cee091): ```cpp /usr/local/include/boost/python/make_function.hpp:104:36: error: no matching function for call to 'get_signature' f,default_call_policies(), detail::get_signature(f)); ^~~~~~~~~~~~~~~~~~~~~ ... failed template argument deduction ```
As @bogdan pointed out, the function returning `boost::python::list` has 16 parameters and the maximum Boost.Python arity is set to 15 by default. Use `#define BOOST_PYTHON_MAX_ARITY 16` before including `boost/python.hpp` to increase the limit, or (better) consider wrapping the parameters into a struct.
2,398
20,997,283
Does anyone know of some `Python` package or function that can upload a Pandas `DataFrame` (or simply a `.csv`) to a PostgreSQL table, **even if the table doesn't yet exist**? (i.e. it runs a CREATE TABLE with the appropriate column names and columns types based on a mapping between the python data types and closest equivalents in PostgreSQL) In `R`, I use the `ROracle` package which provides a `dbWriteTable` function that does what I've described above. (see docs [here](http://cran.r-project.org/web/packages/ROracle/ROracle.pdf))
2014/01/08
[ "https://Stackoverflow.com/questions/20997283", "https://Stackoverflow.com", "https://Stackoverflow.com/users/176995/" ]
Since pandas 0.14, the sql functions also support postgresql (via SQLAlchemy, so all database flavors supported by SQLAlchemy work). So you can simply use `to_sql` to write a pandas DataFrame to a PostgreSQL database: ``` import pandas as pd from sqlalchemy import create_engine import psycopg2 engine = create_engine('postgresql://scott:tiger@localhost:5432/mydatabase') df.to_sql("table_name", engine) ``` See the docs: <http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries> If you have an older version of pandas (< 0.14), see this question:[How to write DataFrame to postgres table?](https://stackoverflow.com/questions/23103962/how-to-write-dataframe-to-postgres-table/23104436#23104436)
They just made a package for this. <https://gist.github.com/catawbasam/3164289> Not sure how well it works.
2,399
57,476,304
I am getting the exception below while trying to use multiprocessing with Flask-SQLAlchemy. ``` sqlalchemy.exc.ResourceClosedError: This result object does not return rows. It has been closed automatically. [12/Aug/2019 18:09:52] "GET /api/resources HTTP/1.1" 500 - Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/SQLAlchemy-1.3.6-py3.7-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1244, in _execute_context cursor, statement, parameters, context File "/usr/local/lib/python3.7/site-packages/SQLAlchemy-1.3.6-py3.7-linux-x86_64.egg/sqlalchemy/engine/default.py", line 552, in do_execute cursor.execute(statement, parameters) psycopg2.DatabaseError: error with status PGRES_TUPLES_OK and no message from the libpq ``` Without multiprocessing the code works perfectly, but when I add multiprocessing as below, I run into this issue. ``` worker = multiprocessing.Process(target=<target_method_which_has_business_logic_with_DB>, args=(data,), name='PROCESS_ID', daemon=False) worker.start() return Response("Request Accepted", status=202) ``` I see an answer to a similar question on SO (<https://stackoverflow.com/a/33331954/8085047>), which suggests using engine.dispose(), but in my case I am using db.session directly, not creating the engine and scope manually. Please help to resolve the issue. Thanks!
2019/08/13
[ "https://Stackoverflow.com/questions/57476304", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8085047/" ]
I had the same issue. Following Sam's link helped me solve it. Before I had (not working): ``` from multiprocessing import Pool with Pool() as pool: pool.map(f, [arg1, arg2, ...]) ``` This works for me: ``` from multiprocessing import get_context with get_context("spawn").Pool() as pool: pool.map(f, [arg1, arg2, ...]) ```
The answer from dibrovsd@github was really useful for me. If you are using a PREFORKING server like uwsgi or gunicorn, this will also help you. I am posting his comment here for your reference. > > Found. This happens when uwsgi (or gunicorn) starts when multiple workers are forked from the first process. > > If there is a request in the first process when it starts, then this opens a database connection and the connection is forked to the next process. But in the database, of course, no new connection is opened and a broken connection occurs. > > You had to specify lazy: true, lazy-apps: true (uwsgi) or preload\_app = False (gunicorn) > > In this case, add. workers do not fork, but run themselves and open their normal connections themselves > > > Refer to this link: <https://github.com/psycopg/psycopg2/issues/281#issuecomment-985387977>
2,400
59,010,815
This is my code: I have used the find element by id RESULT\_RadioButton-7\_0, but I am getting the following error: ``` from selenium import webdriver from selenium.webdriver.common.by import By driver = webdriver.Chrome(executable_path="/home/real/Desktop/Selenium_with_python/SeleniumProjects/chromedriver_linux64/chromedriver") driver.get("https://fs2.formsite.com/meherpavan/form2/index.html?153770259640") radiostatus = driver.find_element(By.ID, "RESULT_RadioButton-7_0").click() ``` My error is this: > > elementClickInterceptedException: element click intercepted: Element is not clickable at point (40, 567). Other element would receive the click: <label for="RESULT\_RadioButton-7\_0">...</label> (Session info: chrome=78.0.3904.70) > > >
2019/11/23
[ "https://Stackoverflow.com/questions/59010815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11132456/" ]
Based on the page link you provided, it looks like your locator strategy is correct here. If you are getting an error—most likely `NoSuchElementException`, I am assuming it might have something to do with waiting for the page to load before attempting to find the element. Let's use the `ExpectedConditions` class to wait on the element to exist before locating it: ``` from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC # Add the above references to your .py file # Wait on the element to exist, and store its reference in radiostatus radiostatus = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "RESULT_RadioButton-7_0"))) # Click the element #radiostatus.click() # Click intercepted workaround: JavaScript click driver.execute_script("arguments[0].click();", radiostatus) ``` This will tick the radio button next to "Male" on the form.
Unless you need to wait on the element (which doesn't seem necessary), you should be able to do the following: ``` element_to_click_or_whatever = driver.find_element_by_id('RESULT_RadioButton-7_0') ``` If you look at the source for [`find_element_by_id`](https://github.com/SeleniumHQ/selenium/blob/master/py/selenium/webdriver/remote/webelement.py#L162), it calls `find_element` with `By.ID` as an argument: ``` def find_element_by_id(self, id_): return self.find_element(by=By.ID, value=id_) ``` IMO: `find_element_by_id` reads better, and it's one less package to import. I don't think your issue is finding the element; there's an `ElementClickInterceptedException` when trying to click on the element. For example, the radio button is located, but (strangely) Selenium doesn't think it's displayed. ``` from selenium import webdriver driver = webdriver.Chrome() driver.maximize_window() driver.get("https://fs2.formsite.com/meherpavan/form2/index.html?153770259640") radiostatus = driver.find_element_by_id('RESULT_RadioButton-7_0') if radiostatus: print('found') # Found print(radiostatus.is_displayed()) # False ```
2,401
3,631,556
I have found several topics with this title, but none of their solutions worked for me. I have two Django sites running on my server, both through Apache using different virtualhosts on two ports fed by my Nginx frontend (using for static files). One site uses MySql and runs just fine. The other uses Sqlite3 and gets the error in the title. I downloaded a copy of sqlite.exe and looked at the mysite.sqlite3 (SQLite database in this directory) file and there is indeed a django\_session table with valid data in it. I have the sqlite.exe in my system32 as well as the site-packages folder in my Python path. Here is a section of my settings.py file: ``` MANAGERS = ADMINS DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'. 'NAME': 'mysite.sqlite3', # Or path to database file if using sqlite3. 'USER': '', # Not used with sqlite3. 'PASSWORD': '', # Not used with sqlite3. 'HOST': '', # Set to empty string for localhost. Not used with sqlite3. 'PORT': '', # Set to empty string for default. Not used with sqlite3. } } ``` I did use the python manage.py syncdb with no errors and just a "No Fixtures" comment. Does anyone have any ideas what else might be going on here? I'm considering just transferring everything over to my old pal MySql and just ignoring Sqlite, as really it's always given me some kind of trouble. I was only using it for the benefit of knowing it anyway. I have no overwhelming reason why I should use it. But again, just for my edification does anyone know what this problem is? I don't like to give up.
2010/09/02
[ "https://Stackoverflow.com/questions/3631556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/438289/" ]
It could be that the server uses a different working directory than the `manage.py` command. Since you provide a relative path to the sqlite database, it is created in the working directory. Try it with an absolute path, e.g.: ``` 'NAME': '/tmp/mysite.sqlite3', ``` Remember that you have to either run `./manage.py syncdb` again or copy your current database with the existing tables to `/tmp`. If it resolves the error message, you can look for a better place than `/tmp` :-)
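A common way to find "a better place than `/tmp`" is to build the absolute path from the location of `settings.py` itself, so the server's working directory no longer matters. This is a sketch of that pattern; `BASE_DIR` is just a conventional name (newer Django project templates generate something similar):

```python
import os

# __file__ stands in for settings.py's own location; in settings.py
# this yields the project directory no matter where the server
# process was started from.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

DATABASE_NAME = os.path.join(BASE_DIR, "mysite.sqlite3")
print(os.path.isabs(DATABASE_NAME))  # → True
```

The resulting absolute string is what would go into the `'NAME'` key of the `DATABASES` setting.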
You have unapplied migrations; your app may not work properly until they are applied. Run the migrate command to apply them: ``` python manage.py migrate ``` This one worked for me.
2,403
47,249,474
I'm working on a python GUI application, using tkinter, which displays text in Hebrew. On Windows (10, python 3.6, tkinter 8.6) Hebrew strings are displayed fine. On Linux (Ubuntu 14, both python 3.4 and 3.6, tkinter 8.6) Hebrew strings are displayed incorrectly - with no BiDi awareness - **am I missing something?** I installed pybidi, and via `bidi.algorithm.get_display(hebrew_string)` - the strings are displayed correctly. But then, on Windows, `get_display(hebrew_string)` is displayed incorrectly. Is BiDi not supported on python-tkinter-Linux? Must I wrap each string with `get_display(string)`? Must I wrap `get_display(string)` with a `only_on_linux(...)` function?
2017/11/12
[ "https://Stackoverflow.com/questions/47249474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1499700/" ]
I searched a bit and it is a known issue that tk/tcl has used the Windows bidi support since about 2011, but there is apparently nothing equivalent on Linux. Example: <https://wiki.tcl.tk/3158>. One answer to [Python/Tkinter: Using Tkinter for RTL (right-to-left) languages like Arabic/Hebrew?](https://stackoverflow.com/questions/4150053/python-tkinter-using-tkinter-for-rtl-right-to-left-languages-like-arabic-hebr/7864523#7864523) has some workarounds for \*nix. I am not sure about Mac support with the latest tcl/tk. For cross-platform work you will need a function that echoes on Windows and reverses on your Ubuntu.
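A sketch of such a cross-platform helper, assuming python-bidi may or may not be installed; the function name and the fallback behaviour are my own choices:

```python
import sys

def display(text):
    """Prepare text for a Tk widget: reorder it with python-bidi on
    platforms where Tk has no bidi support (e.g. Linux), and pass it
    through unchanged on Windows, where Tk reorders it itself."""
    if sys.platform.startswith("win"):
        return text
    try:
        from bidi.algorithm import get_display
    except ImportError:
        # python-bidi not installed; return the text unmodified
        return text
    return get_display(text)

# Pure left-to-right text comes back unchanged on every platform.
print(display("hello"))  # → hello
```

Wrapping every user-visible string in one such function keeps the platform check in a single place instead of scattering `get_display` calls through the GUI code.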
As one of the main authors of FriBidi and a contributor to the bidi text support in Gtk, I strongly suggest that you don't use Tkinter for Hebrew or any text in scripts other than Latin, Greek, or Cyrillic. In theory you can rearrange the text ordering with the standalone fribidi executable on Linux, or use the fribidi binding, but bidi and complex language support go well beyond that. You might need to support text insertion, cut and paste, and shaping, just to mention a few of the pitfalls. You are much better off using the excellent Gtk or Qt bindings for Python.
2,413
24,872,243
I created an ImageField model for my blog app in my "test" Django project on my local server using SQLite. I have in my settings.py `MEDIA_ROOT = '/Users/me/Sites/python/djangotut/media/' MEDIA_ROOT_URL = 'http://127.0.0.1:8000/media/images/photos/'` and my blog/models.py ``` photo = models.ImageField(upload_to='images/photos/') ``` but the problem is my blog/urls.py: I don't know how to add the URL to make it work with my patterns; the relevant part of the documentation is <https://docs.djangoproject.com/en/1.6/howto/static-files/#serving-files-uploaded-by-a-user-during-development> ``` from django.conf.urls import url from django.conf.urls.static import static from .views import index, post urlpatterns = [ url( regex=r'^$', view=index, name='blog-index' ), url( regex=r'^(?P<slug>[\w\-]+)/$', view=post, name='blog-detail' ), ] ``` Also, I have read something about URLs being set up for a "production environment" for when you distribute apps. What would my URLs need to look like in that case?
2014/07/21
[ "https://Stackoverflow.com/questions/24872243", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3142105/" ]
Use the weight-sum technique for layouts, so that the controls in each line consume the assigned percentage of space (there won't be any need to put them in a Grid or other UI controls)
Use a nested ViewGroup: ``` <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" android:paddingBottom="@dimen/activity_vertical_margin" android:paddingLeft="@dimen/activity_horizontal_margin" android:paddingRight="@dimen/activity_horizontal_margin" android:paddingTop="@dimen/activity_vertical_margin" tools:context="com.example.helloworld.MainActivity" > <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:orientation="horizontal" android:text="@string/fractions" android:textSize="30sp" /> <RelativeLayout android:layout_width="fill_parent" android:layout_height="wrap_content"> <RadioGroup android:alignParentRight = "true" android:id="@+id/fractions" android:layout_width="wrap_content" android:layout_height="wrap_content" android:orientation="horizontal" > <RadioButton android:id="@+id/fraction_true" android:layout_width="wrap_content" android:layout_height="wrap_content" android:checked="true" android:paddingRight="15dip" android:text="@string/fraction_true" android:textSize="30sp" android:gravity="right" android:textStyle="bold" /> <RadioButton android:id="@+id/fraction_false" android:layout_width="wrap_content" android:layout_height="wrap_content" android:paddingRight="15dip" android:text="@string/fraction_false" android:textSize="30sp" /> </RadioGroup> </RelativeLayout> </LinearLayout> ```
2,414
47,074,966
I am trying to create a simple test-scorer that grades your test and gives you a response - but a simple if/else function isn't running - Python - ``` testScore = input("Please enter your test score") if testScore <= 50: print "You didn't pass... sorry!" elif testScore >=60 and <=71: print "You passed, but you can do better!" ``` The Error is - ``` Traceback (most recent call last): File "python", line 6 elif testScore >= 60 and <= 71: ^ SyntaxError: invalid syntax ```
2017/11/02
[ "https://Stackoverflow.com/questions/47074966", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You missed `testScore` in the elif statement: ``` testScore = input("Please enter your test score") if testScore <= 50: print "You didn't pass... sorry!" elif testScore >= 60 and testScore <= 71: print "You passed, but you can do better!" ```
The way shown below would be a better way of solving it: you always need to convert to an integer when you are comparing with numbers. > > input() in Python 3 returns a string > > > ``` testScore = input("Please enter your test score") if int(testScore) <= 50: print("You didn't pass... sorry!") elif int(testScore) >= 60 and int(testScore) <= 71: print("You passed, but you can do better!") ```
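Putting both answers together, a Python 3 sketch might look like this; note the question never says what should happen for scores of 51-59 or above 71, so that final branch is my own guess:

```python
def grade(score):
    """Return a message for a test score.

    The <=50 and 60-71 messages come from the question; the final
    branch for other scores is an assumption of mine.
    """
    if score <= 50:
        return "You didn't pass... sorry!"
    if 60 <= score <= 71:
        return "You passed, but you can do better!"
    return "You passed!"

# int() is the crucial step: input() returns a string in Python 3,
# so a real program would do grade(int(input("score: "))).
print(grade(int("45")))  # → You didn't pass... sorry!
```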
2,415
66,697,840
I guess once upon a time, I was able to find this information by Googling but not this time. I believe each script file (e.g. my.py, run.sh, etc) could have the path to an executable that is supposed to parse & run the script file. For example, a bash script file `run.sh` could start with: ``` #!/bin/bash ``` Then, my user will run it like: ``` $ ./run.sh ``` What if some users may not have `bash` there but has one under `/usr/sbin/`? Actually, my issue is Python3. Some users may have `python3` not as `/usr/bin/python3`. Some distros seem to install it as `/usr/bin/python37` while some other `/usr/bin/python`. Yet again, some do `$HOME/bin/virtualenv/python3`. At least, what could I do to tell any (future) user's shell that my script should be run by `which python`. Or, even better if I could tell "Try `which python3`, and if not available, try `which python`."
2021/03/18
[ "https://Stackoverflow.com/questions/66697840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7254686/" ]
If you want to pass json data with axios, you need to set `Content-Type`. Here is a demo (I use `1` to replace `${rockId}` to test):

```
var payload = "this is a test";

const request = axios.put(`/api/rocks/1/rockText`, JSON.stringify(payload), {
    headers: { 'Content-Type': 'application/json' }
});
```

Controller:

```
[HttpPut("{id}/rockText")]
public IActionResult PutRockText(int id, [FromBody] string rock)
{
    return Ok();
}
```

result: [![enter image description here](https://i.stack.imgur.com/Oyyjj.gif)](https://i.stack.imgur.com/Oyyjj.gif)
The issue is that the model binder cannot resolve the payload. The reason is that it's expecting a string, but you're actually passing a json object with a property `rockText`. I would create a class to represent the json you're sending:

```
public class Rock
{
    public string RockText { get; set; }
}

[HttpPut("{id}/rockText")]
public IActionResult PutRockText(Int32 id, [FromBody] Rock rock)
{
    ...
}
```

---

Alternatively, you could try passing the string from axios:

```
var payload = "this is a test";
const request = axios.put(`/api/rocks/${rockId}/rockText`, payload);
```
2,418
29,956,883
I am fairly new to python. I want to create a program that can generate random numbers and write them to a file, but I am curious as to whether it is possible to write the output to a `.txt` file in individual lists (*every time the program executes the script, it creates a new list*). Here is my code so far:

```
def main():
    import random
    data = open("Random.txt", "w")
    for i in range(int(input('How many random numbers?: '))):
        line = str(random.randint(1, 1000))
        data.write(line + '\n')
        print(line)
    data.close()
    print('data has been written')

main()
```
2015/04/30
[ "https://Stackoverflow.com/questions/29956883", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4848614/" ]
About append mode (`a`):

> Opens a file for appending. The file pointer is at the end of the file if the file exists. That is, the file is in the append mode. If the file does not exist, it creates a new file for writing.

```
def main():
    import random
    data = open("Random.txt", "a")  # open file in append mode
    data.write('New run\n')         # separator in file
    for i in range(int(input('How many random numbers?: '))):
        line = str(random.randint(1, 1000))
        data.write(line + '\n')
        print(line)
    data.close()
    print('data has been written')

main()
```
If you read through the documentation for [open()](https://docs.python.org/2/library/functions.html#open) you'll note:

> Modes 'r+', 'w+' and 'a+' open the file for updating (reading and writing); note that 'w+' truncates the file. Append 'b' to the mode to open the file in binary mode, on systems that differentiate between binary and text files; on systems that don't have this distinction, adding the 'b' has no effect.

So use mode `a` if you want to append to the file.

**Example:**

```
f = open("random.txt", "a")
f.write(...)
```

**Update:** If you want to separate entries from subsequent program runs you'll have to append a marker line that your program *understands*, e.g.: `f.write("!!!MARKER!!!\n")`
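A minimal demonstration of the difference between `w` and `a` (a sketch using a temporary file, so no existing data is touched):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "random.txt")

# Mode "w" truncates: only the second write survives.
with open(path, "w") as f:
    f.write("first run\n")
with open(path, "w") as f:
    f.write("second run\n")

# Mode "a" appends: the new line is added after the existing content.
with open(path, "a") as f:
    f.write("third run\n")

with open(path) as f:
    print(f.read())   # "second run" followed by "third run"
```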
2,419
70,141,901
I have a get\_Time function working fine but I would like to take the result it produces and store it in the "t" variable inside the simple\_Interest function. Here is the code I have now.

```
y = input("Enter value for year: ")
m = input("Enter value for month: ")
p = input("Enter value for principle: ")
r = input("Enter value for rate (in %): ")

def get_Time(y, m, d):
    total_time = y + m / 12 + d / 365
    return total_time

print ("The total time in years is: ", get_Time(int(y), int(m), int(d)))

def simple_Interest(t, p, r):
    simplint = p * (r / 100) * t
    return simplint
```

sorry if I sound like a dummy.. I'm still very newbish to python and programming in general but I'm learning. thanks in advance for your help.
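One way to wire the two functions together (a sketch with made-up sample values and lower-cased names; note the original code never asks the user for `d`, so a `d = input(...)` line is also needed) is simply to store the return value of the first function in a variable and pass it to the second:

```python
def get_time(y, m, d):
    # total time in years
    return y + m / 12 + d / 365

def simple_interest(t, p, r):
    # simple interest: principal * (rate / 100) * time
    return p * (r / 100) * t

# Store the result of get_time in a variable, then pass it on as t.
t = get_time(1, 6, 0)               # 1 year, 6 months, 0 days -> 1.5 years
interest = simple_interest(t, 1000, 5)
print(round(interest, 2))           # 75.0
```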
2021/11/28
[ "https://Stackoverflow.com/questions/70141901", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17529617/" ]
Try this.

```
static int indexOfLastNumber(String s) {
    int removedLength = s.replaceFirst("\\d+\\D*$", "").length();
    return s.length() == removedLength ? 0 : removedLength;
}

static void test(String s) {
    System.out.println(s + " : " + indexOfLastNumber(s));
}

public static void main(String[] args) {
    test("987abc<*(123");
    test("987abc<*(123)");
    test("123");
    test("foo");
    test("");
}
```

output:

```
987abc<*(123 : 9
987abc<*(123) : 9
123 : 0
foo : 0
 : 0
```

or

```
static final Pattern LAST_NUMBER = Pattern.compile("\\d+\\D*$");

static int indexOfLastNumber(String s) {
    Matcher m = LAST_NUMBER.matcher(s);
    return m.find() ? m.start() : 0;
}
```
Note: the '1' is at index 9 in your String. If you don't want to, it is not necessary to use RegEx for this. A method like this should do the job:

```java
public static int findLastNumbersIndex(String s) {
    boolean numberFound = false;
    boolean charBeforeNumberFound = false;
    // start at the end of the String
    int index = s.length() - 1;
    // loop from the back to the front while there are more chars
    // and no nonDigit is found before a digit
    while (index >= 0 && !charBeforeNumberFound) {
        // when the first number was found, set the boolean flag
        if (!numberFound && Character.isDigit(s.charAt(index))) {
            numberFound = true;
        }
        // when already a number was found and there is any nonDigit stop the execution
        if (numberFound && !Character.isDigit(s.charAt(index))) {
            charBeforeNumberFound = true;
            break;
        }
        index--;
    }
    return index + 1;
}
```

The execution for different Strings:

```java
public static void main(String[] args) {
    System.out.println("\"987abc<*(123\"" + " index of lastNumberSet: " + findLastNumbersIndex("987abc<*(123"));
    System.out.println("\"987abc<*(123abc\"" + " index of lastNumberSet: " + findLastNumbersIndex("987abc<*(123abc"));
    System.out.println("\"987abc\"" + " index of lastNumberSet: " + findLastNumbersIndex("987abc"));
    System.out.println("\"abc987\"" + " index of lastNumberSet: " + findLastNumbersIndex("abc987"));
    System.out.println("\"987\"" + " index of lastNumberSet: " + findLastNumbersIndex("987"));
    System.out.println("(Empty String)" + " index of lastNumberSet: " + findLastNumbersIndex(""));
    System.out.println("\"abc\"" + " index of lastNumberSet: " + findLastNumbersIndex("abc"));
}
```

returns this output:

```
"987abc<*(123" index of lastNumberSet: 9
"987abc<*(123abc" index of lastNumberSet: 9
"987abc" index of lastNumberSet: 0
"abc987" index of lastNumberSet: 3
"987" index of lastNumberSet: 0
(Empty String) index of lastNumberSet: 0
"abc" index of lastNumberSet: 0
```
2,421
38,593,309
How do I get logging from a custom authorizer lambda function in API Gateway? I do not want to enable logging for the API. I need logging from the authorizer lambda function. I use a python lambda function and have prints in the code. I want to view the prints in **Cloud Watch** logs. But logs are not seen in cloud watch. I do not get errors either. What am I missing? Lambda has execution role **role/service-role/MyLambdaRole**. This role has the policy to write to cloud watch.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:us-east-1:123456:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:us-east-1:123456:log-group:MyCustomAuthorizer:*"
            ]
        }
    ]
}
```

I also tested by adding the CloudWatchLogsFullAccess policy to the **role/service-role/MyLambdaRole** role.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "logs:*"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
```
2016/07/26
[ "https://Stackoverflow.com/questions/38593309", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2184930/" ]
I deleted the lambda function, IAM role, and custom authorizer from API Gateway. I recreated all of the above with the same settings and published the API. It started working and logging as expected. I do not know what was preventing it from logging to CloudWatch earlier. Weird!!
When I set up my authorizer, I set a Lambda Event payload for a custom header, and I had neglected to set that header in my browser session. According to the documentation at *<https://docs.aws.amazon.com/apigateway/latest/developerguide/configure-api-gateway-lambda-authorization-with-console.html>*, section 9b, the API Gateway will throw a 401 Unauthorized error without even executing the Lambda function. So that was the source of the problem.
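As a general point about Lambda logging (an aside, not specific to either fix above): the standard `logging` module tends to be more reliable than bare `print` statements, since the Python Lambda runtime installs a handler that forwards log records to CloudWatch. A minimal sketch, with a hypothetical handler name and event shape:

```python
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # These records are forwarded to CloudWatch by the Lambda runtime.
    logger.info("authorizer invoked: %s", event.get("methodArn", "<none>"))
    # Skeleton of an authorizer response; the policy document is omitted here.
    return {"principalId": "user", "policyDocument": {}}
```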
2,424
838,991
I'm using pycurl to upload a file via put and a python cgi script to receive the file on the server side. Essentially, the code on the server side is:

```
while True:
    next = sys.stdin.read(4096)
    if not next:
        break
    # .... write the buffer
```

This seems to work with text, but not binary files (I'm on windows). With binary files, the loop doing stdin.read breaks after receiving anything around 10kb to 100kb. Any ideas?
2009/05/08
[ "https://Stackoverflow.com/questions/838991", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You need to run Python in binary mode. Change your CGI script from:

```
#!C:/Python25/python.exe
```

or whatever it says to:

```
#!C:/Python25/python.exe -u
```

Or you can do it programmatically like this:

```
import msvcrt
import os
import sys

msvcrt.setmode(sys.stdin.fileno(), os.O_BINARY)
```

before starting to read from `stdin`.
Use [mod\_wsgi](http://code.google.com/p/modwsgi/) instead of cgi. It will provide you an input file for the upload that's correctly opened.
2,425
40,762,324
I want to write a function to compare two values, val1 and val2, and if val1 is larger than val2, add 1 point to a\_points (Think of it like Team A) and vice versa (add one point to b\_points if val2 is larger). If the two values are even I won't add any points to a\_points or b\_points. My problem is **test\_val will not return the values of a\_points or b\_points.**

```
a_points = 0
b_points = 0

def test_val(a_points, b_points, val1, val2):
    if val1 > val2:
        a_points += 1
        return a_points
    elif val2 > val1:
        b_points += 1
        return b_points
    elif val1 == val2:
        pass
```

[Here's a link to a visualization showing the problem.](http://pythontutor.com/visualize.html#code=a0%3D5%0Aa1%3D6%0Aa2%3D7%0Ab0%3D3%0Ab1%3D6%0Ab2%3D10%0Aa_points%3D0%0Ab_points%3D0%0A%0Adef%20test_val(a_points,b_points,val1,val2%29%3A%0A%20%20%20%20if%20val1%20%3E%20val2%3A%0A%20%20%20%20%20%20%20%20a_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20a_points%0A%20%20%20%20elif%20val2%20%3E%20val1%3A%0A%20%20%20%20%20%20%20%20b_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20b_points%0A%20%20%20%20elif%20val1%3D%3Dval2%3A%0A%20%20%20%20%20%20%20%20pass%0A%0Atest_val(a_points,b_points,a0,b0%29%0Atest_val(a_points,b_points,a1,b1%29%0Atest_val(a_points,b_points,a2,b2%29%0A%0Aprint(a_points,b_points%29&cumulative=false&curInstr=13&heapPrimitives=false&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false)
2016/11/23
[ "https://Stackoverflow.com/questions/40762324", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7017454/" ]
Global variables are generally a **bad idea**. Don't use them unless you really have to. The proper way to implement such a counter is to use a class.

```
class MyCounter(object):
    def __init__(self):
        self.a_points = 0
        self.b_points = 0

    def test_val(self, val1, val2):
        if val1 > val2:
            self.a_points += 1
        elif val2 > val1:
            self.b_points += 1
        else:
            pass

counter = MyCounter()
counter.test_val(1, 2)
counter.test_val(1, 3)
counter.test_val(5, 3)
print(counter.a_points, counter.b_points)
```

Output:

```
(1, 2)
```

Note that returning a value from `test_val` doesn't make sense, because the caller has no way to know if she gets `a_points` or `b_points`, so she can't use the return value in any meaningful way.
A name declared `global` cannot also be a function parameter, so `a_points` and `b_points` have to be dropped from the parameter list:

```
a_points = 0
b_points = 0

def test_val(val1, val2):
    global a_points
    global b_points
    if val1 > val2:
        a_points += 1
        return a_points
    elif val2 > val1:
        b_points += 1
        return b_points
    elif val1 == val2:
        # If you pass, it won't return a_points nor b_points
        return a_points  # or b_points
```
2,426
38,044,264
```
import pandas as pd
import numpy as np
from datetime import datetime, time

# history file and batch size for processing.
historyFilePath = 'EURUSD.SAMPLE.csv'
batch_size = 5000

# function for date parsing
dateparse = lambda x: pd.datetime.strptime(x, '%Y-%m-%d %H:%M:%S.%f')

# load data into a pandas iterator with all the chunks
ratesFromCSVChunks = pd.read_csv(historyFilePath, index_col=0, engine='python',
                                 parse_dates=True, date_parser=dateparse, header=None,
                                 names=["datetime", "1_Current", "2_BidPx", "3_BidSz", "4_AskPx", "5_AskSz"],
                                 iterator=True, chunksize=batch_size)

# concatenate chunks to get the final array
ratesFromCSV = pd.concat([chunk for chunk in ratesFromCSVChunks])

# save final csv file
ratesFromCSV.to_csv('EURUSD_processed.csv', date_format='%Y-%m-%d %H:%M:%S.%f',
                    columns=['1_Current', '2_BidPx', '3_BidSz', '4_AskPx', '5_AskSz'],
                    header=False, float_format='%.5f')
```

I am reading a CSV file containing forex data in the format

```
2014-08-17 17:00:01.000000,1.33910,1.33910,1.00000,1.33930,1.00000
2014-08-17 17:00:01.000000,1.33910,1.33910,1.00000,1.33950,1.00000
2014-08-17 17:00:02.000000,1.33910,1.33910,1.00000,1.33930,1.00000
2014-08-17 17:00:02.000000,1.33900,1.33900,1.00000,1.33940,1.00000
2014-08-17 17:00:04.000000,1.33910,1.33910,1.00000,1.33950,1.00000
2014-08-17 17:00:05.000000,1.33930,1.33930,1.00000,1.33950,1.00000
2014-08-17 17:00:06.000000,1.33920,1.33920,1.00000,1.33960,1.00000
2014-08-17 17:00:06.000000,1.33910,1.33910,1.00000,1.33950,1.00000
2014-08-17 17:00:08.000000,1.33900,1.33900,1.00000,1.33942,1.00000
2014-08-17 17:00:16.000000,1.33900,1.33900,1.00000,1.33940,1.00000
```

How do you convert the datetime in the CSV file (or the pandas dataframe being read) to EPOCH time in MILLISECONDS from MIDNIGHT (UTC or localized) by the time it is saved? Each file starts at midnight every day. The only thing being changed is the format of the datetime, to milliseconds from midnight every day (UTC or localized).
The format I am looking for is:

```
43264234, 1.33910,1.33910,1.00000,1.33930,1.00000
43264739, 1.33910,1.33910,1.00000,1.33950,1.00000
43265282, 1.33910,1.33910,1.00000,1.33930,1.00000
43265789, 1.33900,1.33900,1.00000,1.33940,1.00000
43266318, 1.33910,1.33910,1.00000,1.33950,1.00000
43266846, 1.33930,1.33930,1.00000,1.33950,1.00000
43267353, 1.33920,1.33920,1.00000,1.33960,1.00000
43267872, 1.33910,1.33910,1.00000,1.33950,1.00000
43268387, 1.33900,1.33900,1.00000,1.33942,1.00000
```

Any help is well appreciated (short & precise, in Python 3.5 or 3.4 and above, with Pandas 0.18.1 and numpy 1.11)
2016/06/26
[ "https://Stackoverflow.com/questions/38044264", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5310427/" ]
This snippet of code should be what you want

```
# Create some fake data, similar to yours
import pandas as pd
s = pd.Series(pd.date_range('2014-08-17 17:00:01.1230000', periods=4))
print(s)
print(type(s[0]))

# Create a new series using just the date portion of the original data.
# This effectively truncates the time portion.
# Can't use d = s.dt.date or you'll get date objects back, not datetime64.
d = pd.to_datetime(s.dt.date)
print(d)
print(type(d[0]))

# Calculate the time delta between the original datetime and
# just the date portion. This is the elapsed time since your epoch.
delta_t = s-d
print(delta_t)

# Display the elapsed time as seconds.
print(delta_t.dt.total_seconds())
```

This results in the following output

```
0   2014-08-17 17:00:01.123
1   2014-08-18 17:00:01.123
2   2014-08-19 17:00:01.123
3   2014-08-20 17:00:01.123
dtype: datetime64[ns]
<class 'pandas.tslib.Timestamp'>
0   2014-08-17
1   2014-08-18
2   2014-08-19
3   2014-08-20
dtype: datetime64[ns]
<class 'pandas.tslib.Timestamp'>
0   17:00:01.123000
1   17:00:01.123000
2   17:00:01.123000
3   17:00:01.123000
dtype: timedelta64[ns]
0    61201.123
1    61201.123
2    61201.123
3    61201.123
dtype: float64
```
Here's how I did it with my data:

```
import pandas as pd
import numpy as np

rng = pd.date_range('1/1/2011', periods=72, freq='H')
df = pd.DataFrame({"Data": np.random.randn(len(rng))}, index=rng)
df["Time_Since_Midnight"] = (df.index - pd.to_datetime(df.index.date)) / np.timedelta64(1, 'ms')
```

By converting the `DateTimeIndex` into a `date` object, we drop off the hours and seconds. Then by taking the difference of the two, you get a `timedelta64` object, which you can then format into milliseconds. Here's the output I get (the last column is the time since midnight):

```
2011-01-01 00:00:00  2.383501         0.0
2011-01-01 01:00:00  0.725419   3600000.0
2011-01-01 02:00:00 -0.361533   7200000.0
2011-01-01 03:00:00  2.311185  10800000.0
2011-01-01 04:00:00  1.596148  14400000.0
```
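For reference, the same milliseconds-since-midnight calculation can be done with the standard library alone, without pandas (a sketch, independent of the answers above):

```python
from datetime import datetime

def ms_since_midnight(dt):
    # Truncate to midnight of the same day, then take the difference.
    midnight = dt.replace(hour=0, minute=0, second=0, microsecond=0)
    return int((dt - midnight).total_seconds() * 1000)

dt = datetime.strptime("2014-08-17 17:00:01.000000", "%Y-%m-%d %H:%M:%S.%f")
print(ms_since_midnight(dt))   # 61201000
```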
2,436
32,778,316
I am a vim user and edited a large python file using vim; everything was OK and it ran properly. Now I want to build a huge project and edit this python file in IntelliJ, but the indentation in IntelliJ is completely wrong, and it's hard for me to edit it line by line. Do you know what happened? (If I edit some lines in IntelliJ to remove the indentation error, they then show the wrong indentation when I display them in vim.)
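A common cause of this symptom (an assumption, since the file itself isn't shown) is indentation that mixes tabs and spaces, which vim and IntelliJ render with different widths. From vim, the file can be normalized to spaces like this, assuming 4-space indentation is wanted:

```
:set expandtab shiftwidth=4 tabstop=4
:retab
```

After that, re-opening the file in IntelliJ (with its indent settings also set to 4 spaces) should show consistent indentation in both editors.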
2015/09/25
[ "https://Stackoverflow.com/questions/32778316", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3390810/" ]
Yes, use [perfect forwarding](https://stackoverflow.com/questions/3582001/advantages-of-using-forward):

```
template <typename P>
bool VectorList::put (P &&p) {
    // can't forward p here as it could move p and we need it later
    if (not_good_for_insert(p)) return false;

    // ...
    Node node = create_node();
    node.pair = std::forward<P>(p);
    // ...
    return true;
}
```

Another possibility is to just pass by value like in [Maxim's answer](https://stackoverflow.com/a/32778410/496161). The advantage of the perfect-forwarding version is that it requires no intermediate conversions if you pass in compatible arguments and performs better if moves are expensive. The disadvantage is that forwarding reference functions are very greedy, so other overloads might not act how you want. Note that `Pair &&p` is not a universal reference, it's just an rvalue reference. Universal (or forwarding) references require an rvalue in a deduced context, like template arguments.
The ideal solution is to accept a universal reference, as [TartanLlama](https://stackoverflow.com/a/32778379/412080) advises. The ideal solution works if you can afford having the function definition in the header file. If your function definition cannot be exposed in the header (e.g. you employ Pimpl idiom or interface-based design, or the function resides in a shared library), the second best option is to accept by value. This way the caller can choose how to construct the argument (copy, move, uniform initialization). The callee will have to pay the price of one move though. E.g. `bool VectorList::put(Pair p);`:

```
VectorList v;
Pair p { "key", "value" };

v.put(p);
v.put(std::move(p));
v.put(Pair{ "anotherkey", "anothervalue" });
v.put({ "anotherkey", "anothervalue" });
```

And in the implementation you move from the argument:

```
bool VectorList::put(Pair p) {
    container_.push_back(std::move(p));
}
```

---

Another comment is that you may like to stick with standard C++ names for container operations, like `push_back/push_front`, so that it is clear what it does. `put` is obscure and requires readers of your code to look into the source code or documentation to understand what is going on.
2,437
13,096,339
> > **Possible Duplicate:** > > [Python Question: Year and Day of Year to date?](https://stackoverflow.com/questions/2427555/python-question-year-and-day-of-year-to-date) > > > Is there a method in Python to figure out which month a certain day of the year is in, e.g. today is day 299 (October 26th). I would like to figure out, that day 299 is in month 10 (to compile the string to set the Linux system time). How can I do this?
2012/10/27
[ "https://Stackoverflow.com/questions/13096339", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1198201/" ]
```
print (datetime.datetime(2012,1,1) + datetime.timedelta(days=299)).month
```

Here's a little more usable version that returns both the month and day:

```
def get_month_day(year, day, one_based=False):
    if one_based:  # if Jan 1st is 1 instead of 0
        day -= 1
    dt = datetime.datetime(year, 1, 1) + datetime.timedelta(days=day)
    return dt.month, dt.day

>>> get_month_day(2012, 299)
(10, 26)
```
I know of no such method, but you can do it like this: ``` print datetime.datetime.strptime('2012 299', '%Y %j').month ``` The above prints `10`
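The `%j` approach can be wrapped in a small helper (the name is made up); note that `%j` is 1-based (January 1st is day 1), while the question's numbering appears to treat January 1st as day 0:

```python
import datetime

def month_of_day(year, day_of_year):
    # %j parses the 1-based day of the year (001-366)
    return datetime.datetime.strptime('%d %d' % (year, day_of_year), '%Y %j').month

print(month_of_day(2012, 300))   # 10 (day 300 of leap year 2012 is October 26th)
```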
2,440
18,897,631
Guys, I'm a newbie to socket programming. The following is a client program which requests a file from the server, but I'm getting the error shown below. My input is GET index.html and the code is - can anyone solve this error?

```
#!/usr/bin/env python
import httplib
import sys

http_server = sys.argv[0]
conn = httplib.HTTPConnection(http_server)

while 1:
    cmd = raw_input('input command (ex. GET index.html): ')
    cmd = cmd.split()
    if cmd[0] == 'exit':
        break

    conn.request(cmd[0], cmd[1])
    rsp = conn.getresponse()
    print(rsp.status, rsp.reason)

    data_received = rsp.read()
    print(data_received)

conn.close()
```

```
input command (ex. GET index.html): GET index.html
Traceback (most recent call last):
  File "./client1.py", line 19, in <module>
    conn.request(cmd[0],cmd[1])
  File "/usr/lib/python2.6/httplib.py", line 910, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib/python2.6/httplib.py", line 947, in _send_request
    self.endheaders()
  File "/usr/lib/python2.6/httplib.py", line 904, in endheaders
    self._send_output()
  File "/usr/lib/python2.6/httplib.py", line 776, in _send_output
    self.send(msg)
  File "/usr/lib/python2.6/httplib.py", line 735, in send
    self.connect()
  File "/usr/lib/python2.6/httplib.py", line 716, in connect
    self.timeout)
  File "/usr/lib/python2.6/socket.py", line 500, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
socket.gaierror: [Errno -2] Name or service not known
```
2013/09/19
[ "https://Stackoverflow.com/questions/18897631", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2795866/" ]
`sys.argv[0]` is not what you think it is. `sys.argv[0]` is the name of the program or script. The script's first argument is `sys.argv[1]`.
The problem is that the first item in `sys.argv` is the script name. So your script is actually using your filename as the hostname. Change the 5th line to: ``` http_server = sys.argv[1] ``` [More info here.](http://docs.python.org/2/library/sys.html#sys.argv)
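To make the indexing concrete, here is a tiny sketch (the helper name is made up for the example):

```python
import sys

def get_server(argv):
    # argv[0] is the script path itself; real arguments start at argv[1]
    if len(argv) < 2:
        raise SystemExit('usage: client1.py <http_server>')
    return argv[1]

print(get_server(['client1.py', 'www.example.com']))   # www.example.com
```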
2,441