qid | question | date | metadata | response_j | response_k | __index_level_0__
---|---|---|---|---|---|---
9,170,271 | I am trying to flip a picture on its vertical axis. I am doing this in Python, using the Media module,
like this:
![enter image description here](https://i.stack.imgur.com/j9woa.jpg)
I tried to find the relationship between the original and the flipped picture. Since I can't go to negative coordinates in Python, what I decided to do was use the middle of the picture as the reference.
So I split the picture in half, and this is what I am going to do:
(Note: I create a new blank picture and copy each (x,y) pixel to the corresponding (-x,y) if the original pixel is after the middle;
if it's before the middle, I copy the pixel (-x,y) to (x,y).)
![enter image description here](https://i.stack.imgur.com/YkhwK.jpg)
So I coded it in Python, and this is the result.
Original:
![enter image description here](https://i.stack.imgur.com/HoYO0.jpg)
Here is my code, and below it the result I got:
```
import media
pic=media.load_picture(media.choose_file())
height=media.get_height(pic)
width=media.get_width(pic)
new_pic=media.create_picture(width,height)
for pixel in pic:
    x_org = media.get_x(pixel)
    y_org = media.get_y(pixel)
    colour = media.get_color(pixel)
    new_pixel_0 = media.get_pixel(new_pic, x_org + mid_width, y_org)  # replace with suggested
                                                                      # answer below
    media.set_color(new_pixel_0, colour)
media.show(new_pic)
```
![enter image description here](https://i.stack.imgur.com/mEpNS.jpg)
This is not what I wanted, and I am confused. I tried to find the relationship between the original pixel location and its transform, (x,y) -> (-x,y), but I think that's wrong. If anyone could help me with this method I would be grateful.
At the end of the day I want a picture like this:
![enter image description here](https://i.stack.imgur.com/8aVdU.jpg)
<http://www.misterteacher.com/alphabetgeometry/transformations.html#Flip> | 2012/02/07 | [
"https://Stackoverflow.com/questions/9170271",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1090782/"
] | Why not just use Python Imaging Library? Flipping an image horizontally is a one-liner, and much faster to boot.
```
from PIL import Image
img = Image.open("AFLAC.jpg").transpose(Image.FLIP_LEFT_RIGHT)
``` | Your arithmetic is incorrect. Try this instead...
```
new_pixel_0 = media.get_pixel(new_pic, width - x_org, y_org)
```
There is no need to treat the two halves of the image separately.
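As a plain-Python sanity check of the index arithmetic (a nested list standing in for the picture here, since the `media` module isn't available everywhere; note that with 0-based list indices the mirror of column *x* is *width* - 1 - *x*):

```python
# Flip a "picture" (nested list) on its vertical axis in a single pass:
# the pixel at column x moves to column width - 1 - x.
pic = [[1, 2, 3, 4],
       [5, 6, 7, 8]]
width = len(pic[0])

flipped = [[row[width - 1 - x] for x in range(width)] for row in pic]
print(flipped)  # [[4, 3, 2, 1], [8, 7, 6, 5]]
```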
This is essentially negating the *x*-co-ordinate, as your first diagram illustrates, but then slides (or translates) the flipped image by *width* pixels to the right to put it back in the range (0 - *width*). | 236 |
39,030,546 | I am trying to run Example 7-11 of **High Performance Python**.
**cython\_np.pyx**
```
#cython_np.pyx
import numpy as np
cimport numpy as np
def calculate_z(int maxiter, double complex[:] zs, double complex[:] cs):
    cdef unsigned int i, n
    cdef double complex z, c
    cdef int[:] output = np.empty(len(zs), dtype=np.int32)
    for i in range(len(zs)):
        n = 0
        z = zs[i]
        c = cs[i]
        while n < maxiter and (z.real * z.real + z.imag * z.imag) < 4:
            z = z * z + c
            n += 1
        output[i] = n
    return output
```
**setup.py**
```
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
setup(
    cmdclass={'build_ext': build_ext},
    ext_modules=[Extension("calculate", ["cythonfn.pyx"])]
)
```
In the terminal (Ubuntu 16.04):
```
python3 setup.py build_ext --inplace
```
I get some warnings:
```
running build_ext
cythoning cythonfn.pyx to cythonfn.c
building 'calculate' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.5m -c cythonfn.c -o build/temp.linux-x86_64-3.5/cythonfn.o
In file included from /usr/include/python3.5m/numpy/ndarraytypes.h:1777:0,
from /usr/include/python3.5m/numpy/ndarrayobject.h:18,
from /usr/include/python3.5m/numpy/arrayobject.h:4,
from cythonfn.c:274:
/usr/include/python3.5m/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
#warning "Using deprecated NumPy API, disable it by " \
^
In file included from /usr/include/python3.5m/numpy/ndarrayobject.h:27:0,
from /usr/include/python3.5m/numpy/arrayobject.h:4,
from cythonfn.c:274:
/usr/include/python3.5m/numpy/__multiarray_api.h:1448:1: warning: ‘_import_array’ defined but not used [-Wunused-function]
_import_array(void)
^
x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.5/cythonfn.o -o MY_DIR/calculate.cpython-35m-x86_64-linux-gnu.so
```
When I try to use the function **calculate.calculate\_z** in IPython, it says
```
TypeError: a bytes-like object is required, not 'list'
```
[detail of using calculate.z](http://i.stack.imgur.com/UQfFM.png)
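(Aside, separate from the warning: the `TypeError` comes from the argument types. `double complex[:] zs` is a typed memoryview, which accepts buffer objects — e.g. numpy arrays of dtype `complex128` — but not plain Python lists, so the call presumably needs `np.array(zs_list, dtype=np.complex128)`. A stdlib sketch of the same buffer requirement:)

```python
import array

# Typed memoryviews require the buffer protocol; plain lists don't provide it.
zs_list = [1 + 2j, 3 + 4j]
try:
    memoryview(zs_list)            # raises the same "bytes-like object" TypeError
except TypeError as e:
    print(e)

xs = array.array("d", [1.0, 2.0])  # array.array *does* expose a buffer
print(memoryview(xs).tolist())     # [1.0, 2.0]
```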
Any idea about the warning? | 2016/08/19 | [
"https://Stackoverflow.com/questions/39030546",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6536252/"
] | I can't see any obvious "faults" with your SQL.
However, if student 12345 is missing data from any of (dcis, studentsdcid, guardianid, externalident, student\_number), or there is no matching data in any of the tables, then no record will be returned, since you are using inner joins.
2 suggestions:
\*Try changing the inner joins to left joins when you search for student 12345. If it returns any data you will then see what might be missing.
\*Try searching for a student who appears in the list from the first SQL statement. If this still does not return any record then you might have to restructure your SQL statement. | That's probably because no record matches all of those conditions together, since they are joined with `AND`. Try making that last condition an `OR` condition, like this:
```
WHERE pcs.SCHOOLID=9
AND pcs.FIELD_NAME='web_password'
AND s.ENROLL_STATUS=0
OR s.STUDENT_NUMBER=12345
``` | 239 |
9,434,205 | The code below streams the Twitter public timeline for a search term and outputs any matching tweets to the console. I'd like to save the same variables (status.text, status.author.screen\_name, status.created\_at, status.source) into an SQLite database. I'm getting a syntax error when my script sees a tweet, and nothing is written to the SQLite database.
the error:
```
$ python stream-v5.py @lunchboxhq
Filtering the public timeline for "@lunchboxhq"RT @LunchboxHQ: test 2 LunchboxHQ 2012-02-29 18:03:42 Echofon
Encountered Exception: near "?": syntax error
```
the code:
```
import sys
import tweepy
import webbrowser
import sqlite3 as lite
# Query terms
Q = sys.argv[1:]
sqlite3file='/var/www/twitter.lbox.com/html/stream5_log.sqlite'
CONSUMER_KEY = ''
CONSUMER_SECRET = ''
ACCESS_TOKEN = ''
ACCESS_TOKEN_SECRET = ''
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
con = lite.connect(sqlite3file)
cur = con.cursor()
cur.execute("CREATE TABLE TWEETS(txt text, author text, created int, source text)")
class CustomStreamListener(tweepy.StreamListener):
    def on_status(self, status):
        try:
            print "%s\t%s\t%s\t%s" % (status.text,
                                      status.author.screen_name,
                                      status.created_at,
                                      status.source,)
            cur.executemany("INSERT INTO TWEETS(?, ?, ?)", (status.text,
                                                            status.author.screen_name,
                                                            status.created_at,
                                                            status.source))
        except Exception, e:
            print >> sys.stderr, 'Encountered Exception:', e
            pass

    def on_error(self, status_code):
        print >> sys.stderr, 'Encountered error with status code:', status_code
        return True  # Don't kill the stream

    def on_timeout(self):
        print >> sys.stderr, 'Timeout...'
        return True  # Don't kill the stream

streaming_api = tweepy.streaming.Stream(auth, CustomStreamListener(), timeout=60)
print >> sys.stderr, 'Filtering the public timeline for "%s"' % (' '.join(sys.argv[1:]),)
streaming_api.filter(follow=None, track=Q)
``` | 2012/02/24 | [
"https://Stackoverflow.com/questions/9434205",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1039166/"
] | You are missing a closing parenthesis on the last line of the following code (lines 34–37 from what you posted):
```
cur.executemany("INSERT INTO TWEETS(?, ?, ?)", (status.text,
status.author.screen_name,
status.created_at,
status.source)
```
Just add a parenthesis to close the method call immediately after your tuple parameter. | ```
import sqlite3 as lite
con = lite.connect('test.db')
cur = con.cursor()
cur.execute("CREATE TABLE TWEETS(txt text, author text, created int, source text)")
```
then later:
```
cur.execute("INSERT INTO TWEETS VALUES (?, ?, ?, ?)", (status.text,
                                                       status.author.screen_name,
                                                       status.created_at,
                                                       status.source))
``` | 241 |
60,513,468 | I read in the Python 3 documentation that Python uses a hash table for dict(), so the search time complexity should be O(1), with O(N) as the worst case. However, in a course I recently took, the teacher said that this holds only when you use an int as the key: if you use a string of length L as the key, the search time complexity is O(L).
I wrote a code snippet to test this claim:
```py
import random
import string
from time import time
import matplotlib.pyplot as plt
def randomString(stringLength=10):
    """Generate a random string of fixed length """
    letters = string.ascii_lowercase
    return ''.join(random.choice(letters) for i in range(stringLength))

def test(L):
    # L: int length of keys
    N = 1000  # number of keys
    d = dict()
    for i in range(N):
        d[randomString(L)] = None
    tic = time()
    for key in d.keys():
        d[key]
    toc = time() - tic
    tic = time()
    for key in d.keys():
        pass
    t_idle = time() - tic
    t_total = toc - t_idle
    return t_total

L = [i * 10000 for i in range(5, 15)]
ans = [test(l) for l in L]

plt.figure()
plt.plot(L, ans)
plt.show()
```
The result is very interesting. As you can see, the x-axis is the length of the strings used as keys and the y-axis is the total time to query all 1000 keys in the dictionary.
[![enter image description here](https://i.stack.imgur.com/7tkOr.png)](https://i.stack.imgur.com/7tkOr.png)
Can anyone explain this result?
Please be gentle with me. If I am asking this basic a question, it means I don't have the ability to read the Python source code or equivalently complex insider documentation. | 2020/03/03 | [
"https://Stackoverflow.com/questions/60513468",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7037749/"
] | Since a dictionary is a hashtable, and looking up a key in a hashtable requires computing the key's hash, then the time complexity of looking up the key in the dictionary cannot be less than the time complexity of the hash function.
In current versions of CPython, a string of length L takes O(L) time to compute the hash of if it's the first time you've hashed that particular string object, and O(1) time if the hash for that string object has already been computed (since the hash is stored):
```py
>>> from timeit import timeit
>>> s = 'b' * (10**9) # string of length 1 billion
>>> timeit(lambda: hash(s), number=1)
0.48574538500002973 # half a second
>>> timeit(lambda: hash(s), number=1)
5.301000044255488e-06 # 5 microseconds
```
So that's also how long it takes when you look up the key in a dictionary:
```py
>>> s = 'c' * (10**9) # string of length 1 billion
>>> d = dict()
>>> timeit(lambda: s in d, number=1)
0.48521506899999167 # half a second
>>> timeit(lambda: s in d, number=1)
4.491000026973779e-06 # 5 microseconds
```
You also need to be aware that a key in a dictionary is not looked up *only* by its hash: when the hashes match, it still needs to test that the key you looked up is equal to the key used in the dictionary, in case the hash matching is a false positive. Testing equality of strings takes O(L) time in the worst case:
```py
>>> s1 = 'a'*(10**9)
>>> s2 = 'a'*(10**9)
>>> timeit(lambda: s1 == s2, number=1)
0.2006020820001595
```
So for a key of length L and a dictionary of length n:
* If the key is not present in the dictionary, and its hash has already been cached, then it takes O(1) average time to confirm it is absent.
* If the key is not present and its hash has not been cached, then it takes O(L) average time because of computing the hash.
* If the key is present, it takes O(L) average time to confirm it is present whether or not the hash needs to be computed, because of the equality test.
* The worst case is always O(nL) because if every hash collides and the strings are all equal except in the last places, then a slow equality test has to be done n times. | >
> only when you use int as the key. If you use a string of length L as keys the search time complexity is O(L)
>
Just to address a point not covered by kaya3's answer....
### Why people often say a hash table insertion, lookup or erase is a O(1) operation.
For many real-world applications of hash tables, the typical length of keys doesn't tend to grow regardless of how many keys you're storing. For example, if you made a hash set to store the names in a telephone book, the average name length for the first 100 people is probably very close to the average length for absolutely everyone. For that reason, the time spent to look for a name is no worse when you have a set of ten million names, versus that initial 100 (this kind of analysis normally ignores the performance impact of CPU cache sizes, and RAM vs disk speeds if your program starts swapping). You can reason about the program without thinking about the length of the names: e.g. inserting a million names is likely to take roughly a thousand times longer than inserting a thousand.
Other times, an application has a hash tables where the key may vary significantly. Imagine say a hash set where the keys are binary data encoding videos: one data set is old Standard Definition 24fps video clips, while another is 8k UHD 60fps movies. The time taken to insert these sets of keys won't simply be in the ratio of the numbers of such keys, because there's *vastly* different amounts of work involved in key hashing and comparison. In this case - if you want to reason about insertion time for different sized keys, a big-O performance analysis would be useless without a related factor. You could still describe the relative performance for data sets with similar sized keys considering only the normal hash table performance characteristics. When key hashing times could become a problem, you may well want to consider whether your application design is still a good idea, or whether e.g. you could have used a set of say filenames instead of the raw video data. | 246 |
38,798,816 | I have anaconda installed and also I have downloaded Spark 1.6.2. I am using the following instructions from this answer to configure spark for Jupyter [enter link description here](https://stackoverflow.com/questions/33064031/link-spark-with-ipython-notebook)
I have downloaded and unzipped the spark directory as
```
~/spark
```
Now when I cd into this directory and into bin I see the following
```
SFOM00618927A:spark $ cd bin
SFOM00618927A:bin $ ls
beeline pyspark run-example.cmd spark-class2.cmd spark-sql sparkR
beeline.cmd pyspark.cmd run-example2.cmd spark-shell spark-submit sparkR.cmd
load-spark-env.cmd pyspark2.cmd spark-class spark-shell.cmd spark-submit.cmd sparkR2.cmd
load-spark-env.sh run-example spark-class.cmd spark-shell2.cmd spark-submit2.cmd
```
I have also added the environment variables as mentioned in the above answer to my .bash\_profile and .profile
Now in the spark/bin directory first thing I want to check is if pyspark command works on shell first.
So I do this after doing cd spark/bin
```
SFOM00618927A:bin $ pyspark
-bash: pyspark: command not found
```
As per the answer after following all the steps I can just do
```
pyspark
```
in the terminal in any directory, and it should start a Jupyter notebook with the Spark engine. But even `pyspark` within the shell is not working, let alone making it run in a Jupyter notebook.
Please advise what is going wrong here.
Edit:
I did
```
open .profile
```
at home directory and this is what is stored in the path.
```
export PATH=/Users/854319/anaconda/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin:/Users/854319/spark/bin
export PYSPARK_DRIVER_PYTHON=ipython
export PYSPARK_DRIVER_PYTHON_OPTS='notebook' pyspark
``` | 2016/08/05 | [
"https://Stackoverflow.com/questions/38798816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2769240/"
] | 1- You need to set `JAVA_HOME` and spark paths for the shell to find them. After setting them in your `.profile` you may want to
```
source ~/.profile
```
to activate the setting in the current session. From your comment I can see you're already having the `JAVA_HOME` issue.
Note if you have `.bash_profile` or `.bash_login`, `.profile` will not work as described [here](http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_03_01.html)
2- When you are in `spark/bin` you need to run
```
./pyspark
```
to tell the shell that the target is in the current folder. | For anyone who came here during or after MacOS Catalina, make sure you're establishing/sourcing variables in **zshrc** and not **bash**.
`$ nano ~/.zshrc`
```
# Set Spark Path
export SPARK_HOME="YOUR_PATH/spark-3.0.1-bin-hadoop2.7"
export PATH="$SPARK_HOME/bin:$PATH"
# Set pyspark + jupyter commands
export PYSPARK_SUBMIT_ARGS="pyspark-shell"
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS='lab' pyspark
```
`$ source ~/.zshrc`
`$ pyspark` # Automatically opens Jupyter Lab w/ PySpark initialized. | 247 |
I'm trying to free memory allocated to a `CString` and passed to Python using ctypes. However, Python is crashing with a malloc error:
```none
python(30068,0x7fff73f79000) malloc: *** error for object 0x103be2490: pointer being freed was not allocated
```
Here are the Rust functions I'm using to pass the pointer to ctypes:
```
#[repr(C)]
pub struct Array {
    pub data: *const c_void,
    pub len: libc::size_t,
}

// Build &mut [[f64; 2]] from an Array, so it can be dropped
impl<'a> From<Array> for &'a mut [[f64; 2]] {
    fn from(arr: Array) -> Self {
        unsafe { slice::from_raw_parts_mut(arr.data as *mut [f64; 2], arr.len) }
    }
}

// Build an Array from a Vec, so it can be leaked across the FFI boundary
impl<T> From<Vec<T>> for Array {
    fn from(vec: Vec<T>) -> Self {
        let array = Array {
            data: vec.as_ptr() as *const libc::c_void,
            len: vec.len() as libc::size_t,
        };
        mem::forget(vec);
        array
    }
}

// Build a Vec from an Array, so it can be dropped
impl From<Array> for Vec<[f64; 2]> {
    fn from(arr: Array) -> Self {
        unsafe { Vec::from_raw_parts(arr.data as *mut [f64; 2], arr.len, arr.len) }
    }
}

// Decode an Array into a Polyline
impl From<Array> for String {
    fn from(incoming: Array) -> String {
        let result: String = match encode_coordinates(&incoming.into(), 5) {
            Ok(res) => res,
            // we don't need to adapt the error
            Err(res) => res,
        };
        result
    }
}

#[no_mangle]
pub extern "C" fn encode_coordinates_ffi(coords: Array) -> *mut c_char {
    let s: String = coords.into();
    CString::new(s).unwrap().into_raw()
}
}
```
And the one I'm using to free the pointer when it's returned by Python
```
pub extern "C" fn drop_cstring(p: *mut c_char) {
    unsafe { CString::from_raw(p) };
}
```
And the Python function I'm using to convert the pointer to a `str`:
```
def char_array_to_string(res, _func, _args):
    """ restype is c_void_p to prevent automatic conversion to str
    which loses pointer access
    """
    converted = cast(res, c_char_p)
    result = converted.value
    drop_cstring(converted)
    return result
```
And the Python function I'm using to generate the `Array` struct to pass into Rust:
```
class _FFIArray(Structure):
    """
    Convert sequence of float lists to a C-compatible void array
    example: [[1.0, 2.0], [3.0, 4.0]]
    """
    _fields_ = [("data", c_void_p),
                ("len", c_size_t)]

    @classmethod
    def from_param(cls, seq):
        """ Allow implicit conversions """
        return seq if isinstance(seq, cls) else cls(seq)

    def __init__(self, seq, data_type=c_double):
        arr = ((c_double * 2) * len(seq))()
        for i, member in enumerate(seq):
            arr[i][0] = member[0]
            arr[i][1] = member[1]
        self.data = cast(arr, c_void_p)
        self.len = len(seq)
```
`argtype` and `restype` definitions:
```
encode_coordinates = lib.encode_coordinates_ffi
encode_coordinates.argtypes = (_FFIArray,)
encode_coordinates.restype = c_void_p
encode_coordinates.errcheck = char_array_to_string
drop_cstring = lib.drop_cstring
drop_cstring.argtypes = (c_char_p,)
drop_cstring.restype = None
```
I'm inclined to think it's not the Rust functions, because a dylib crash would cause a segfault (and the FFI tests pass on the Rust side). I can also continue with other operations in Python after calling the FFI functions – the malloc error occurs when the process exits. | 2016/07/16 | [
"https://Stackoverflow.com/questions/38412184",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/416626/"
] | Please try this command; it resolved the issue for me:
```
sudo apt-get install libfontconfig
``` | Try adding an error handler to the pipe:
```
converter.image(req, { format: "png" , quality: 75 }).pipe(res).on('error', function(e){ console.log(e); });
``` | 249 |
26,509,222 | I have a list of Python strings.
I want to call the split method on each string in the list and store the results in another list, without writing explicit loops, because the list is very long.
**EDIT1**
Here is one example
```
input = ["a,the,an","b,b,c","people,downvoting,it,must,think,first"]
output = [["a","the","an"],["b","b","c"],["people","downvoting","it","must","think","first"]]
```
The delimiter used for splitting is ","
Any trick for this? | 2014/10/22 | [
"https://Stackoverflow.com/questions/26509222",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/623300/"
] | ```
[a.split(',') for a in list]
Sample: ['a,c,b','1,2,3']
Result: [['a','c','b'],['1','2','3']]
```
If you wanted everything in one list, you could try this (not sure of how efficient it is)
```
output = sum([a.split(',') for a in list],[])
Sample: ['a,c,b','1,2,3']
Result: ['a','c','b','1','2','3']
``` | Use list comprehensions.
```
mystrings = ["hello world", "this is", "a list", "of interesting", "strings"]
splitby = " "
mysplits = [x.split(splitby) for x in mystrings]
```
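If you do need one flat list on a very long input, `itertools.chain.from_iterable` avoids the repeated list concatenation that `sum(..., [])` performs — a sketch:

```python
from itertools import chain

# Flatten the split results lazily, concatenating once at the end.
strings = ["a,the,an", "b,b,c"]
flat = list(chain.from_iterable(s.split(",") for s in strings))
print(flat)  # ['a', 'the', 'an', 'b', 'b', 'c']
```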
No idea if it performs better than a `for` loop, but there you go. | 250 |
27,580,550 | I am developing a Python app which connects to Prolog via pyswip.
The following code is how I ask Prolog a question:
```
self.prolog = Prolog()
self.prolog.consult("Checker.pl")
self.prolog.query("playX")
```
This is a sample of my Prolog code:
```
playX :-
    init(B),
    assert(min_to_move(x/_)), assert(max_to_move(o/_)),
    play(human, x, B).
```
When I call query("playX"), there is a message:
```
Exception AttributeError: 'swipl_qid' in <bound method _QueryWrapper.__del__ of <pyswip.prolog._QueryWrapper object at 0x0000000004620288>> ignored
```
What is happening?
P.S. Everything is 64-bit: Python 2.7, SWI-Prolog, pyswip, Visual Studio 2013 | 2014/12/20 | [
"https://Stackoverflow.com/questions/27580550",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3050141/"
] | In your style.css add this code
```
#toggle-menu li {
    float: right;
    list-style-type: none;
}
```
See [here](http://i.stack.imgur.com/aznP5.png) for an example of it in action.
The reason that dot is there is that you're adding it as a list element -- it's not a full stop, necessarily, just the marker for a new element in an unordered list. `list-style-type:none` gets rid of any style for the list elements. | It's not a full stop, it's a list item bullet. You're using a list with `<li>` tags, and the default behaviour is to put a bullet in front of whatever is inside the `<li>`
The real answer here though is that your code isn't very semantically correct. Why is an icon inside of an unordered list in the first place? Consider two other options...
1) Just putting `<a>` containing your icon in front of the nav and leaving it at that
2) Incorporating the font awesome icon in to a :before or :after psuedo-element of the nav menu itself using CSS styling. Information about how to add icons via CSS can be found on the font awesome site. | 253 |
26,909,770 | I am looking for a way to print all internal decimal places of a Python Decimal. Does anyone have an idea how to achieve the following? The example code is written in Python.
```
from decimal import *
bits = 32
precision = Decimal(1) / Decimal(2**bits)
val = Decimal(1078947848)
```
Now if I multiply val \* precision I get the following result:
```
val * precision
Decimal('0.2512121219187974929809570312')
```
But I know the last digit 2 is not the end of my number, so I can do the following, which gives me this result:
```
"%.100f" % x
'0.2512121219187974929809570312500000000000000000000000000000000000000000000000000000000000000000000000'
```
However, all the trailing zeros are not desired, and the width of %.100f depends on the given bits. It should be simple math, and maybe I should give myself a break, but how can I derive the number of internal decimal places from the given number of bits (32)?
"https://Stackoverflow.com/questions/26909770",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1446071/"
] | From the [documentation](https://docs.python.org/2/library/decimal.html):
>
> the decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for a given problem
>
Your number is 29 digits long, so it's just a little too much for the default precision. Try increasing it. 29 would work for this instance, but you may wish to choose a larger number if you don't know how many decimal places you'll need later.
```
from decimal import *
getcontext().prec = 100
bits = 32
precision = Decimal(1) / Decimal(2**bits)
val = Decimal(1078947848)
print val * precision
```
Result:
```
0.25121212191879749298095703125000
```
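On the "simple math" part of the question: `2**-bits` equals `5**bits / 10**bits`, so it has exactly `bits` digits after the decimal point, and any multiple of it below 1 fits in `bits` significant digits — which means the precision can be derived from `bits` instead of guessing at 100. A sketch:

```python
from decimal import Decimal, getcontext

bits = 32
getcontext().prec = bits  # 2**-bits = 5**bits / 10**bits: `bits` digits suffice
precision = Decimal(1) / Decimal(2 ** bits)
print(Decimal(1078947848) * precision)  # 0.25121212191879749298095703125000
```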
Also, you can strip those trailing zeroes with a call to `normalize`.
```
>>> print (val * precision).normalize()
0.25121212191879749298095703125
``` | You could take your string representation & eliminate the trailing 0's; what is left are your "internal decimal places", which you can count. | 256 |
27,183,163 | Python 3.4
So maybe it's the turkey digesting, or maybe it's my lack of python wizardry, but my simplistic idea for initializing instances of a class with several members all set to None doesn't seem to be working. To wit:
dataA.txt
```
# layername purpose stmLay stmDat
topside copper 3 5
levelA trace5 6 8
```
shouldWork.py
```
#!C:/Python34
import sys
import re
class LayerDataInn:
def __init__( self, layername, purpose, stmLay, stmDat):
self.layername = layername
self.purpose = purpose
self.stmLay = stmLay
self.stmDat = stmDat
def __init__( self, list_data):
self.layername = list_data[0]
self.purpose = list_data[1]
self.stmLay = list_data[2]
self.stmDat = list_data[3]
def display( self):
print("layername"
" purpose:", self.purpose, \
" stmLay:", self.stmLay, \
" stmDat:", self.stmDat )
def toList( self):
return [ self.layername, \
self.purpose, \
self.stmLay, \
self.stmDat ]
class LayerDataOut:
def __init__( self, layername, purpose, stmLay, stmDat, maskColor):
self.layername = layername
self.purpose = purpose
self.stmLay = stmLay
self.stmDat = stmDat
self.maskColor = maskColor
def __init__( self, list_data):
self.layername = list_data[0]
self.purpose = list_data[1]
self.stmLay = list_data[2]
self.stmDat = list_data[3]
self.maskColor = list_data[4]
def display( self):
print("layername"
" purpose:", self.purpose, \
" stmLay:", self.stmLay, \
" stmDat:", self.stmDat, \
" maskColor:", self.maskColor )
def toList( self):
return [ self.layername, \
self.purpose, \
self.stmLay, \
self.stmDat, \
self.maskColor ]
class LayerDataOutOut( object):
def __init__( self):
self.layername = None
self.purpose = None
self.stmLay = None
self.stmDat = None
self.maskColor = None
def insert( self, *args):
if( len( args) == 2):
self.layername = list_data[0]
self.purpose = list_data[1]
self.stmLay = list_data[2]
self.stmDat = list_data[3]
self.maskColor = list_data[4]
if( len( args) == 6):
self.layername = layername
self.purpose = purpose
self.stmLay = stmLay
self.stmDat = stmDat
self.maskColor = maskColor
def display( self):
print("layername", self.layername, \
" purpose:", self.purpose, \
" stmLay:", self.stmLay, \
" stmDat:", self.stmDat, \
" maskColor:", self.maskColor )
def toList( self):
return [ self.layername, \
self.purpose, \
self.stmLay, \
self.stmDat, \
self.maskColor ]
# read the file
list_layerInn = []
fn_layerInn = "dataA.txt"
with open( fn_layerInn) as fp_layerInn:
for line in fp_layerInn:
list_layerInn.append( LayerDataInn( line.split()))
# list out the file
for objLayerInn in list_layerInn:
objLayerInn.display()
list_layerOut = []
for objLayerInn in list_layerInn:
list_objLayerInn = objLayerInn.toList()
list_objLayerInn.append("woohoo")
list_layerOut.append( LayerDataOut( list_objLayerInn))
# list out the file
for objLayerOut in list_layerOut:
objLayerOut.display()
list_layerOutOut = []
for objLayerInn in list_layerInn:
objLayerOutOut = LayerDataOutOut()
setattr( objLayerOutOut, layername, getattr( objLayerInn, layername)) # <-- dies here
setattr( objLayerOutOut, purpose, getattr( objLayerInn, purpose))
setattr( objLayerOutOut, stmLay, getattr( objLayerInn, stmLay))
setattr( objLayerOutOut, stmDat, getattr( objLayerInn, stmDat))
setattr( objLayerOutOut, maskColor, "wheeee" )
list_layerOutOut.append( objLayerOutOut)
# list out the file
for objLayerOutOut in list_layerOutOut:
objLayerOutOut.display()
```
I would expect that LayerDataOutOut's `__init__` would add the members with values of None, to be promptly updated with the `setattr` calls.
The overall goal here is to be able to instantiate an instance of a class with all members accounted for and set to None, with just a simple call to the class with no arguments, like in Java or C++.
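(For reference, the idiomatic way to get that "all members default to None" behaviour is a single `__init__` with optional arguments — Python keeps only the *last* `def __init__`, so defining two constructors silently discards the first — and `setattr`/`getattr` take the attribute name *as a string*, which is what the marked line is missing. A sketch:)

```python
# One __init__ covering both "all None" and "fully specified" construction,
# plus setattr/getattr driven by string attribute names.
class LayerData(object):
    FIELDS = ("layername", "purpose", "stmLay", "stmDat", "maskColor")

    def __init__(self, *args):
        # Pad missing positional arguments with None.
        for name, value in zip(self.FIELDS, args + (None,) * len(self.FIELDS)):
            setattr(self, name, value)

src = LayerData("topside", "copper", 3, 5)
dst = LayerData()
for name in LayerData.FIELDS:        # the *string* name, not a bare identifier
    setattr(dst, name, getattr(src, name))
print(dst.layername, dst.maskColor)  # topside None
```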
TIA,
Still-learning Steve | 2014/11/28 | [
"https://Stackoverflow.com/questions/27183163",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1201168/"
] | Finally, I was able to fix the problem. I am posting it for others sake.
I used ssh **-f** user@server ....
this solved my problem.
```
ssh -f root@${server} sh /home/administrator/bin/startServer.sh
``` | I ran into a similar issue using the **Publish Over SSH Plugin**. For some reason Jenkins wasn't stopping after executing the remote script. Ticking the below configuration fixed the problem.
SSH Publishers > Transfers > Advanced > Exec in pty
Hope it helps someone else. | 257 |
44,214,938 | These are the versions that I am working with
```
$ python --version
Python 2.7.10
$ pip --version
pip 9.0.1 from /Library/Python/2.7/site-packages (python 2.7)
```
Ideally I should be able to install tweepy. But that is not happening.
```
$ pip install tweepy
Collecting tweepy
Using cached tweepy-3.5.0-py2.py3-none-any.whl
Collecting six>=1.7.3 (from tweepy)
Using cached six-1.10.0-py2.py3-none-any.whl
Requirement already satisfied: requests>=2.4.3 in /Library/Python/2.7/site-packages (from tweepy)
Requirement already satisfied: requests-oauthlib>=0.4.1 in /Library/Python/2.7/site-packages (from tweepy)
Requirement already satisfied: oauthlib>=0.6.2 in /Library/Python/2.7/site-packages (from requests-oauthlib>=0.4.1->tweepy)
Installing collected packages: six, tweepy
Found existing installation: six 1.4.1
DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.
Uninstalling six-1.4.1:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip/basecommand.py", line 215, in main
status = self.run(options, args)
```
A bunch of lines deleted for brevity. It finally ends at ...
```
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat
os.chflags(dst, st.st_flags)
OSError: [Errno 1] Operation not permitted: '/tmp/pip-CBvMLu-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info'
```
Can anyone help?
**Update**: I tried the following as well, but it did not solve the problem.
```
$ sudo -H pip install tweepy
``` | 2017/05/27 | [
"https://Stackoverflow.com/questions/44214938",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6193290/"
] | Install it with:
```
sudo pip install tweepy
```
Looks like a permission problem :) | I got the same problem. The way I solved it was to download python 2.7.13 from the official website and install it. After that, I installed pip with:
```
sudo easy_install pip
```
And after that:
```
pip install tweepy
```
Hope it is still relevant :) | 259 |
32,531,858 | Assume you have a list :
```
mylist=[[1,2,3,4],[2,3,4,5],[3,4,5,6]]
```
any pythonic (2.x) way to unpack the inner lists so that the new list looks like this?
```
mylist_n=[1,2,3,4,2,3,4,5,3,4,5,6]
``` | 2015/09/11 | [
"https://Stackoverflow.com/questions/32531858",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2516297/"
] | I think that due to the fact that you are setting the text of the button as follows:
```
<asp:LinkButton ID="click_download" runat="server" OnClick="download"><%# Eval("title") %></asp:LinkButton>
```
The `Text` property is not being set correctly. Move the `<%# Eval("title") %>` into the declaration of link button and assign it's value to the `Text` property:
```
<asp:LinkButton ID="click_download" runat="server" OnClick="download" Text='<%# DataBinder.Eval (Container.DataItem, "title") %>'></asp:LinkButton>
``` | I don't see where you are setting the text property/attribute for the LinkButton. However, I do see where you have "<%# Eval("title") %>" floating in your tag. Should it say Text="<%# Eval("title") %>".
I really don't understand how it is being viewed if it's not set. Are you setting it in the Page\_Load? Hopefully these questions help chase down the problem. | 260 |
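For the list-flattening question above, a concise Python 2.x-compatible approach (a sketch) is a nested list comprehension, or `itertools.chain`:

```python
import itertools

mylist = [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]

# nested list comprehension: outer loop over sublists, inner loop over items
flat = [x for sub in mylist for x in sub]
print(flat)  # [1, 2, 3, 4, 2, 3, 4, 5, 3, 4, 5, 6]

# the same thing via itertools
flat2 = list(itertools.chain.from_iterable(mylist))
assert flat2 == flat
```

Both forms work unchanged on Python 2.x and 3.x.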
19,325,907 | I am working on my first Django website and am having a problem. Whenever I attempt to go on the admin page www.example.com/admin I encounter a 404 page. When I attempt to go on the admin site on my computer using `python manage.py runserver` it works. What info do you guys need to help me to fix my problem?
`urls.py`
```
from django.conf.urls import patterns, include, url
from django.contrib.staticfiles.urls import staticfiles_urlpatterns
from django.contrib import admin
from django.http import HttpResponseRedirect
admin.autodiscover()
urlpatterns = patterns('',
url(r'^admin/', include(admin.site.urls)),
url(r'^admin/doc/', include('django.contrib.admindocs.urls')),
....
```
| 2013/10/11 | [
"https://Stackoverflow.com/questions/19325907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2366105/"
] | You must include the Django admin in your `INSTALLED_APPS` in the settings file (it's probably already there, but commented out).
You will also need to configure the URLs for the admin site, which should be in your site-wide urls.py, again, probably commented out but there.
If you have already done both of these things, please share your urls.py from the project itself. | `python manage.py runserver` enables your application to run locally. You need to deploy your application using WSGI and Apache to access your page from other remote machines.
Refer to the configuration details <https://docs.djangoproject.com/en/1.2/howto/deployment/modwsgi/> | 261 |
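For reference, the pieces the first answer describes typically look like this in a Django 1.x project (a sketch; the surrounding app names are the stock defaults, not taken from the asker's settings):

```python
# settings.py (sketch): the admin app must be listed; in older Django
# project templates it sometimes ships commented out
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.contrib.admin',  # this entry must be present/uncommented
)

# urls.py (sketch): the matching routes, which the question already has:
#   admin.autodiscover()
#   url(r'^admin/', include(admin.site.urls)),
```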
34,092,850 | I'm trying to apply the expert portion of the tutorial to my own data but I keep running into dimension errors. Here's the code leading up to the error.
```
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
W_conv1 = weight_variable([1, 8, 1, 4])
b_conv1 = bias_variable([4])
x_image = tf.reshape(tf_in, [-1,2,8,1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
```
And then when I try to run this command:
```
W_conv2 = weight_variable([1, 4, 4, 8])
b_conv2 = bias_variable([8])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
```
I get the following errors:
```
ValueError Traceback (most recent call last)
<ipython-input-41-7ab0d7765f8c> in <module>()
3
4 h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
----> 5 h_pool2 = max_pool_2x2(h_conv2)
ValueError: ('filter must not be larger than the input: ', 'Filter: [', Dimension(2), 'x', Dimension(2), '] ', 'Input: [', Dimension(1), 'x', Dimension(4), '] ')
```
Just for some background information, the data that I'm dealing with is a CSV file where each row contains 10 features and 1 empty column that can be a 1 or a 0. What I'm trying to get is a probability in the empty column that the column will equal a 1. | 2015/12/04 | [
"https://Stackoverflow.com/questions/34092850",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3849791/"
] | You have to shape the input so it is compatible with both the training tensor and the output. If your input is length 1, your output should be length 1 (length is substituted for dimension).
When you're dealing with-
```
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 1, 1, 1],
strides=[1, 1, 1, 1], padding='SAME')
```
Notice how I changed the strides and the ksize to `[1, 1, 1, 1]`. This will match an output to a 1 dimensional input and prevent errors down the road.
When you're defining your weight variable (see code below)-
```
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
```
you're going to have to make the first 2 numbers conform to the feature tensor that you are using to train your model, the last two numbers will be the dimension of the predicted output (same as the dimension of the input).
```
W_conv1 = weight_variable([1, 10, 1, 1])
b_conv1 = bias_variable([1])
```
Notice the `[1, 10,` in the beginning which signifies that the feature tensor is going to be a 1x10 feature tensor; the last two numbers `1, 1]` correspond to the dimensions of the input and output tensors/predictors.
When you reshape your x\_foo tensor (I call it x\_ [x prime]), you, for whatever reason, have to define it like so-
```
x_ = tf.reshape(x, [-1,1,10,1])
```
Notice the 1 and 10 in the middle- `...1,10,...`. Once again, these numbers correspond to the dimension of your feature tensor.
For every bias variable, you choose the final number of the previously defined variable. For example, if `W_conv1 = weight_variable([1, 10, 1, 1])` appears like so, you take the final number and put that into your bias variable so it can match the dimensions of the input. This is done like so- `b_conv1 = bias_variable([1])`.
If you need any more explanation please comment below. | The dimensions you are using for the filter are not matching the output of the hidden layer.
Let me see if I understood you: your input is composed of 8 features, and you want to reshape it into a 2x4 matrix, right?
The weights you created with `weight_variable([1, 8, 1, 4])` expect a 1x8 input, in one channel, and produce a 1x8 output in 4 channels (or hidden units). The filter you are using sweeps the input in 2x2 squares. However, since the result of the weights is 1x8, they won't match.
You should reshape the input as
```
x_image = tf.reshape(tf_in, [-1,2,4,1])
```
Now, your input is actually 2x4 instead of 1x8. Then you need to change the weight shape to `(2, 4, 1, hidden_units)` to deal with a 2x4 output. It will also produce a 2x4 output, and the 2x2 filter now can be applied.
After that, the filter will match the output of the weights. Also note that you will have to change the shape of your second weight matrix to `weight_variable([2, 4, hidden_units, hidden2_units])` | 262 |
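A quick way to sanity-check the reshape arithmetic from the answers above, independent of TensorFlow (NumPy only; the array is a hypothetical stand-in for the CSV rows):

```python
import numpy as np

# 5 hypothetical samples, 8 features each
batch = np.arange(40).reshape(5, 8)

# NHWC layout: each sample becomes a 2x4 "image" with 1 channel,
# matching tf.reshape(tf_in, [-1, 2, 4, 1])
x_image = batch.reshape(-1, 2, 4, 1)
print(x_image.shape)  # (5, 2, 4, 1)

# a 2x2 pooling window fits this 2x4 spatial map
assert x_image.shape[1] >= 2 and x_image.shape[2] >= 2
```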
22,767,444 | I have the following xml file:
```
<root>
<article_date>09/09/2013
<article_time>1
<article_name>aaa1</article_name>
<article_link>1aaaaaaa</article_link>
</article_time>
<article_time>0
<article_name>aaa2</article_name>
<article_link>2aaaaaaa</article_link>
</article_time>
<article_time>1
<article_name>aaa3</article_name>
<article_link>3aaaaaaa</article_link>
</article_time>
<article_time>0
<article_name>aaa4</article_name>
<article_link>4aaaaaaa</article_link>
</article_time>
<article_time>1
<article_name>aaa5</article_name>
<article_link>5aaaaaaa</article_link>
</article_time>
</article_date>
</root>
```
I would like to transform it to the following file:
```
<root>
<article_date>09/09/2013
<article_time>1
<article_name>aaa1+aaa3+aaa5</article_name>
<article_link>1aaaaaaa+3aaaaaaa+5aaaaaaa</article_link>
</article_time>
<article_time>0
<article_name>aaa2+aaa4</article_name>
<article_link>2aaaaaaa+4aaaaaaa</article_link>
</article_time>
</root>
```
How can I do it in python?
My approach to do this task is the following:
1) loop through tags
2) form a dictionary: key - either 0 or 1 (the article_time text), value - the matching article_time nodes
3) for each element in this dictionary find all its child nodes (article_name and article_link) and append them
Given that, I wrote the following code to implement this (PS: I am currently struggling with adding elements to the dictionary, but I will overcome this issue):
```
def parse():
list_of_inique_timestamps=[]
text_to_merge=""
tree=et.parse("~/Documents/test1.xml")
root=tree.getroot()
for children in root:
print children.tag, children.text
for child in children:
print (child.tag,int(child.text))
if not child.text in list_of_inique_timestamps:
list_of_inique_timestamps.append(child.text)
print list_of_inique_timestamps
``` | 2014/03/31 | [
"https://Stackoverflow.com/questions/22767444",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1146365/"
] | Here's the solution using `xml.etree.ElementTree` from python standard library.
The idea is to gather items into `defaultdict(list)` per `article_time` text value:
```
from collections import defaultdict
import xml.etree.ElementTree as ET
data = """<root>
<article_date>09/09/2013
<article_time>1
<article_name>aaa1</article_name>
<article_link>1aaaaaaa</article_link>
</article_time>
<article_time>0
<article_name>aaa2</article_name>
<article_link>2aaaaaaa</article_link>
</article_time>
<article_time>1
<article_name>aaa3</article_name>
<article_link>3aaaaaaa</article_link>
</article_time>
<article_time>0
<article_name>aaa4</article_name>
<article_link>4aaaaaaa</article_link>
</article_time>
<article_time>1
<article_name>aaa5</article_name>
<article_link>5aaaaaaa</article_link>
</article_time>
</article_date>
</root>
"""
tree = ET.fromstring(data)
root = ET.Element('root')
article_date = ET.SubElement(root, 'article_date')
article_date.text = tree.find('.//article_date').text
data = defaultdict(list)
for article_time in tree.findall('.//article_time'):
text = article_time.text.strip()
name = article_time.find('./article_name').text
link = article_time.find('./article_link').text
data[text].append((name, link))
for time_value, items in data.iteritems():
article_time = ET.SubElement(article_date, 'article_time')
article_name = ET.SubElement(article_time, 'article_name')
    article_link = ET.SubElement(article_time, 'article_link')
article_time.text = time_value
article_name.text = '+'.join(name for (name, _) in items)
article_link.text = '+'.join(link for (_, link) in items)
print ET.tostring(root)
```
prints (prettified):
```
<root>
<article_date>09/09/2013
<article_time>1
<article_name>aaa1+aaa3+aaa5</article_name>
<article_link>1aaaaaaa+3aaaaaaa+5aaaaaaa</article_link>
</article_time>
<article_time>0
<article_name>aaa2+aaa4</article_name>
<article_link>2aaaaaaa+4aaaaaaa</article_link>
</article_time>
</article_date>
</root>
```
See, the result is exactly what you were aiming for. | I'll write as much as I have time (and knowledge), but I'm making this a community wiki so other folks can help.
I would suggest using [xml](https://docs.python.org/2/library/xml.etree.elementtree.html) or [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/bs4/doc/) libraries for this. I'll use BeautifulSoup because I can't get xml to work for some reason right now.
First, let's get set up:
```
>>> import bs4
>>> soup = bs4.BeautifulSoup('''<root>
... <article_date>09/09/2013
... <article_time>1
... <article_name>aaa1</article_name>
... <article_link>1aaaaaaa</article_link>
... </article_time>
... <article_time>0
... <article_name>aaa2</article_name>
... <article_link>2aaaaaaa</article_link>
... </article_time>
... <article_time>1
... <article_name>aaa3</article_name>
... <article_link>3aaaaaaa</article_link>
... </article_time>
... <article_time>0
... <article_name>aaa4</article_name>
... <article_link>4aaaaaaa</article_link>
... </article_time>
... <article_time>1
... <article_name>aaa5</article_name>
... <article_link>5aaaaaaa</article_link>
... </article_time>
... </root>''')
```
This just produces an internal representation of your xml. We can use the `find_all` method to grab all the article times.
```
>>> children = soup.find_all('article_time')
>>> children
[<article_time>1
<article_name>aaa1</article_name>
<article_link>1aaaaaaa</article_link>
</article_time>, <article_time>0
<article_name>aaa2</article_name>
<article_link>2aaaaaaa</article_link>
</article_time>, <article_time>1
<article_name>aaa3</article_name>
<article_link>3aaaaaaa</article_link>
</article_time>, <article_time>0
<article_name>aaa4</article_name>
<article_link>4aaaaaaa</article_link>
</article_time>, <article_time>1
<article_name>aaa5</article_name>
<article_link>5aaaaaaa</article_link>
</article_time>]
```
The next thing to do is define a key for how we define 'similar' parent nodes. Let's write a `key` function that specifies which part of each child to look at. We'll do some poking around to learn about the structure of each child first.
```
>>> children[0].contents
[u'1\n ', <article_name>aaa1</article_name>, u'\n', <article_link>1aaaaaaa</article_link>, u'\n']
>>> children[0].contents[0]
u'1\n '
>>> int(children[0].contents[0])
1
>>> def key(child):
... return int(child.contents[0])
...
>>> key(children[0])
1
>>> key(children[1])
0
```
Okay. Now we can take advantage of python's [itertools.groupby](https://docs.python.org/2/library/itertools.html#itertools.groupby) function, which will group together all the children with the same key (we need to sort first). We will use the newly defined `key` function to specify how to sort, and what defines a group.
```
>>> children = sorted(children, key=key)
>>> import itertools
>>> groups = itertools.groupby(children, key)
```
`groups` is a generator -- like a list, but we can only iterate through it once. Let's take a look at what makes it up, even though that will mean we have to recreate it later. (We only get a single pass for generators, so by looking at the data, we're losing it. Luckily, it's pretty easy to recreate)
```
>>> for k, g in groups:
... print k, ':\t', list(g)
...
0 : [<article_time>0
<article_name>aaa2</article_name>
<article_link>2aaaaaaa</article_link>
</article_time>, <article_time>0
<article_name>aaa4</article_name>
<article_link>4aaaaaaa</article_link>
</article_time>]
1 : [<article_time>1
<article_name>aaa1</article_name>
<article_link>1aaaaaaa</article_link>
</article_time>, <article_time>1
<article_name>aaa3</article_name>
<article_link>3aaaaaaa</article_link>
</article_time>, <article_time>1
<article_name>aaa5</article_name>
<article_link>5aaaaaaa</article_link>
</article_time>]
```
Okay, so `k` specifies what key was used to produce the group, and g is a sequence of the `article_time`s that matched `k`.
Sorry, that's all I have time for at the moment. Hopefully this is enough to get you started. | 263 |
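To round off the second (unfinished) answer: once the children are grouped, the merge itself is just a join per group. A self-contained sketch, with plain tuples standing in for the parsed `article_time` nodes:

```python
import itertools

# hypothetical (time, name, link) triples, as parsed from the XML above
records = [
    ('1', 'aaa1', '1aaaaaaa'), ('0', 'aaa2', '2aaaaaaa'),
    ('1', 'aaa3', '3aaaaaaa'), ('0', 'aaa4', '4aaaaaaa'),
    ('1', 'aaa5', '5aaaaaaa'),
]

# groupby only groups adjacent items, so sort by the same key first
records.sort(key=lambda r: r[0])

merged = {}
for time_value, group in itertools.groupby(records, key=lambda r: r[0]):
    group = list(group)
    merged[time_value] = (
        '+'.join(name for _, name, _link in group),
        '+'.join(link for _, _name, link in group),
    )

print(merged['1'])  # ('aaa1+aaa3+aaa5', '1aaaaaaa+3aaaaaaa+5aaaaaaa')
```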
60,959,871 | **The problem**:
I have a 3-D Numpy Array:
`X`
`X.shape: (1797, 2, 500)`
```
z=X[..., -1]
print(len(z))
print(z.shape)
count = 0
for bot in z:
print(bot)
count+=1
if count == 3: break
```
Above code yields following output:
```
1797
(1797, 2)
[23.293915 36.37388 ]
[21.594519 32.874397]
[27.29872 26.798382]
```
So, there are 1797 data points - each with an X and a Y coordinate
and, there are 500 iterations of these 1797 points.
I want a DataFrame such that:
```
Index Column | X-coordinate | Y-coordinate
0 | X[0][0][0] | X[0][1][0]
0 | X[1][0][0] | X[1][1][0]
0 | X[2][0][0] | X[2][1][0]
('0') 1797 times
1 | X[0][0][1] | X[0][1][1]
1 | X[1][0][1] | X[1][1][1]
1 | X[2][0][1] | X[2][1][1]
('1' 1797 times)
.
.
.
and so on
till 500
```
I tried techniques mentioned here, but numpy/pandas is really escaping me:
1. [How To Convert a 3D Array To a Dataframe](https://stackoverflow.com/questions/52195426/how-to-convert-a-3d-array-to-a-dataframe)
2. [How to transform a 3d arrays into a dataframe in python](https://stackoverflow.com/questions/35525028/how-to-transform-a-3d-arrays-into-a-dataframe-in-python)
3. [Convert numpy array to pandas dataframe](https://stackoverflow.com/questions/50624046/convert-numpy-array-to-pandas-dataframe)
4. [easy multidimensional numpy ndarray to pandas dataframe method?](https://stackoverflow.com/questions/36853594/easy-multidimensional-numpy-ndarray-to-pandas-dataframe-method)
5. [numpy rollaxis - how exactly does it work?](https://stackoverflow.com/questions/22583792/numpy-rollaxis-how-exactly-does-it-work)
Please help me out.
Hope I am adhering to the question-asking discipline. | 2020/03/31 | [
"https://Stackoverflow.com/questions/60959871",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7890913/"
] | Here's a solution with sample data:
```
a,b,c = X.shape
# in your case
# a,b,c = 1797, 2, 500
pd.DataFrame(X.transpose(1,2,0).reshape(2,-1).T,
index=np.repeat(np.arange(c),a),
columns=['X_coord','Y_coord']
)
```
Output:
```
X_coord Y_coord
0 0 3
0 6 9
0 12 15
0 18 21
1 1 4
1 7 10
1 13 16
1 19 22
2 2 5
2 8 11
2 14 17
2 20 23
``` | Try this way:
```
index = np.concatenate([np.repeat([i], 1797) for i in range(500)])
df = pd.DataFrame(index=index)
df['X-coordinate'] = X[:, 0, :].T.reshape((-1))
df['Y-coordinate'] = X[:, 1, :].T.reshape((-1))
``` | 264 |
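The sample output in the first answer appears to come from a small test array. A self-contained, runnable version of that transpose/reshape idea, with `np.arange(24).reshape(4, 2, 3)` as a stand-in for the real `(1797, 2, 500)` array:

```python
import numpy as np
import pandas as pd

X = np.arange(24).reshape(4, 2, 3)  # 4 points, (x, y), 3 "iterations"
a, b, c = X.shape

df = pd.DataFrame(
    X.transpose(1, 2, 0).reshape(b, -1).T,
    index=np.repeat(np.arange(c), a),   # 0,0,0,0, 1,1,1,1, 2,2,2,2
    columns=['X_coord', 'Y_coord'],
)
print(df.shape)  # (12, 2): every iteration index repeated once per point
```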
35,475,519 | I am facing a problem with the returned image URL, which is not correct.
My return image url is `"http://127.0.0.1:8000/showimage/6/E%3A/workspace/tutorial_2/media/Capture1.PNG"`
But I need
```
"http://127.0.0.1:8000/media/Capture1.PNG"
```
[![enter image description here](https://i.stack.imgur.com/wZSwF.png)](https://i.stack.imgur.com/wZSwF.png)
When I click on `image_url`, the image should open in a new browser tab.
But currently it shows an error:
[![enter image description here](https://i.stack.imgur.com/kMDtX.png)](https://i.stack.imgur.com/kMDtX.png)
**view.py**
```
from showimage.models import ShowImage
from showimage.serializers import ShowImageSerializer
from rest_framework import generics
# Create your views here.
class ShowImageList(generics.ListCreateAPIView):
queryset = ShowImage.objects.all()
serializer_class = ShowImageSerializer
class ShowImageDetail(generics.RetrieveUpdateDestroyAPIView):
queryset = ShowImage.objects.all()
serializer_class = ShowImageSerializer
```
**model.py**
```
from __future__ import unicode_literals
from django.db import models
from django.conf import settings
# Create your models here.
class ShowImage(models.Model):
image_name = models.CharField(max_length=255)
image_url = models.ImageField(upload_to=settings.MEDIA)
```
**serializer.py**
```
from rest_framework import serializers
from showimage.models import ShowImage
class ShowImageSerializer (serializers.ModelSerializer):
class Meta:
model = ShowImage
fields = ('id', 'image_name', 'image_url')
```
**settings.py**
```
MEDIA=os.path.join(BASE_DIR, "media")
```
**urls.py**
```
from django.conf.urls import url, include
from django.contrib import admin
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^showimage/', include('showimage.urls')),
]
```
I am new to Python and also to django-rest-framework.
Please also tell me how to extend the model or serializer class. | 2016/02/18 | [
"https://Stackoverflow.com/questions/35475519",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3526079/"
] | You might want to try this in your settings:
```
MEDIA_URL = '/media/'
MEDIA_ROOT=os.path.join(BASE_DIR, "media")
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^showimage/', include('showimage.urls')),
]
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
```
And in your models:
```
class ShowImage(models.Model):
image_name = models.CharField(max_length=255)
image_url = models.ImageField(upload_to="") # or upload_to="images", which would result in your images being at "http://127.0.0.1:8000/media/images/Capture1.PNG"
``` | Your code seems correct except for one thing: you have passed settings.MEDIA as the upload path. You don't need to pass settings.MEDIA in upload_to.
try this
```
image_url = models.ImageField(upload_to='Dir_name')
```
Dir\_name will be created when you run the script. | 265
45,317,050 | How do I find if a string has at least 3 alphanumeric characters in python. I'm using regex as `"^.*[a-zA-Z0-9]{3, }.*$"`, but it throws an error message every time.
My example string: a&b#cdg1. Please let me know. | 2017/07/26 | [
"https://Stackoverflow.com/questions/45317050",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8341662/"
] | like this
```
#include <stdio.h>
#include <stdarg.h>
typedef enum rule {
first, total
} Rule;
int fund(Rule rule, int v1, ...){
switch(rule){
case total:
{
int total = v1, value;
if(v1 == -1) return 0;
va_list ap;
va_start(ap, v1);
value = va_arg(ap, int);
while(value != -1){
total += value;
value = va_arg(ap, int);
}
va_end(ap);
return total;
}
break;
case first:
return v1;
}
return -1;
}
int main(void){
printf("first:%d\n", fund(first, 1, 2, 3, 4, -1));//first:1
printf("total:%d\n", fund(total, 7, 5, 3, 1, -1));//total:16
}
``` | You mentioned that the end of your arguments is marked by a `-1`. This means you can keep getting more arguments until you get a `-1`.
Following is the way you can do it using `va_list` -
```
if(rule == TYPE) {
int total = 0;
va_list args;
va_start(args, rule);
int j;
while(1){
j = va_arg(args, int);
if(j!=-1)
total += j;
else
break;
}
va_end(args);
return total;
}
```
You mentioned in the comments that your prototype is
```
int choose(Rule rule, int v1, ...);
```
In that case you need the modifications -
At the very top
```
if(v1 == -1)
return 0;
```
And
```
int total = v1;
va_list args;
va_start(args, v1);
```
Demo [Here](https://ideone.com/rWdgtb) | 268 |
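Coming back to the Python question this entry opens with: the space inside `{3, }` stops it from being treated as a quantifier by Python's `re`, which is one reason the posted pattern misbehaves. Simply counting matches sidesteps quantifiers entirely (a sketch; the function name is made up):

```python
import re

def has_three_alnum(s):
    # count ASCII alphanumerics anywhere in the string
    return len(re.findall(r'[a-zA-Z0-9]', s)) >= 3

print(has_three_alnum('a&b#cdg1'))  # True  (a, b, c, d, g, 1)
print(has_three_alnum('a&b'))       # False (only 2)
```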
51,411,655 | So I'm trying to Dockerize my project which looks like this:
```
project/
main.go
package1/
package2/
package3/
```
And it also requires some outside packages such as github.com/gorilla/mux
Note my project is internal on a github.company.com domain so I'm not sure if that matters.
So here's my Dockerfile and yes, my GOPATH and GOROOT are set, and PLEASE don't just tell me to read <https://golang.org/doc/code.html>. I have, and I am still having this issue.
```
### STAGE 1: Build ###
FROM golang:1.10 as builder
WORKDIR /go/src/github.company.com/project-repo/project
COPY . .
RUN go get
RUN go install <- ERROR HERE
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o executable -a -installsuffix cgo .
### STAGE 2: Setup ###
FROM python:3.6-alpine
COPY --from=builder /go/src/github.company.com/project-repo/project/executable /api/executable
CMD ["/api/executable"]
```
Then I run:
```
docker build -t myapp .
```
And get this error:
```
main.go: cannot find package github.company.com/project-repo/project/package1 in any of:
/usr/local/go/src/github.company.com/project-repo/project/package1 (from $GOROOT)
/go/src/github.company.com/project-repo/project/package1 (from $GOPATH)
```
And keep in mind those paths are correct. Why can't go install packages that are within itself?? Main.go imports package1, but for some reason "go install" doesn't install packages inside itself.. | 2018/07/18 | [
"https://Stackoverflow.com/questions/51411655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4062625/"
] | Wow, golang really is picky about paths! It was just that I had assigned my working directory to the wrong place. There was another directory level in the tree:
```
WORKDIR /go/src/github.company.com/COMPANY/project-repo/project
``` | did you make(`mkdir`) the `WORKDIR` before setting its value? | 269 |
29,988,923 | What is the best way to downgrade icu4c from 55.1 to 54.1 on Mac OS X Mavericks.
I tried `brew switch icu4c 54.1` and failed.
**Reason to switch back to 54.1**
I am trying to setup and use Mapnik.
I was able to install Mapnik from homebrew - `brew install mapnik`
But, I get the following error when I try to `import mapnik` in python
`Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/mapnik/__init__.py", line 69, in <module>
from _mapnik import *
ImportError: dlopen(/usr/local/lib/python2.7/site-packages/mapnik/_mapnik.so, 2): Library not loaded: /usr/local/opt/icu4c/lib/libicuuc.54.dylib
Referenced from: /usr/local/Cellar/mapnik/2.2.0_5/lib/libmapnik.dylib
Reason: image not found`
Python version on my Mac - Python 2.7.5 (default, Mar 9 2014, 22:15:05)
Is switching icu4c back to 54.1 the way to go?
Or, Am I missing something?
Thanks for the help in advance. | 2015/05/01 | [
"https://Stackoverflow.com/questions/29988923",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3983957/"
] | This was Homebrew's fault and should be fixed after `brew update && brew upgrade mapnik`; sorry! | I had the same problem, but on Yosemite; I guess it should be much the same. I am not sure this is the best way to do it, but it worked for me.
I tried `brew switch icu4c 54.1` but failed since I did not have that package in the Cellar.
My solution was getting icu4c 54.1 into the Cellar.
First, check whether you have the .tar file cached. To do this, look in `/Library/Caches/Homebrew` for `icu4c-54.1.mavericks.bottle.1.tar.gz`.
If it is there you can just extract it and put the 54.1 folder in `/usr/local/Cellar/icu4c/` folder then run the `brew switch icu4c 54.1` command.
If you don't have the library cached you can download it from here
<https://downloads.sf.net/project/machomebrew/Bottles/icu4c-54.1.mavericks.bottle.1.tar.gz> and then do the same: put the 54.1 folder in the correct place in the Cellar and run brew switch. | 270
5,681,330 | I typically write both unittests and doctests in my modules. I'd like to automatically run all of my doctests when running the test suite. I think this is possible, but I'm having a hard time with the syntax.
I have the test suite
```
import unittest
class ts(unittest.TestCase):
def test_null(self): self.assertTrue(True)
if __name__ == '__main__': unittest.main()
```
I'd like to add to this suite all of the doctests in module `module1`. How can I do this? I've read the [python docs](http://docs.python.org/library/doctest.html#unittest-api), but I'm not any closer to success, here. Adding the lines
```
import doctest
import module1
suite = doctest.DocTestSuite(module1)
```
doesn't work. `unittest.main()` searches through the current file scope and runs every test **case** it finds, right? But DocTestSuite produces a test **suite**. How do I get `unittest.main()` to run the additional cases in the suite? Or am I just confused and deluded??
Once again, I'd be grateful for any help anyone can offer. | 2011/04/15 | [
"https://Stackoverflow.com/questions/5681330",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/489588/"
] | An update to this old question: since Python version 2.7 there is the [load\_tests protocol](https://docs.python.org/2/library/unittest.html#load-tests-protocol) and there is no longer a need to write custom code. It allows you to add a function `load_tests()`, which a test loader will execute to update its collection of unit tests for the current module.
Put a function like this in your code module to package the module's own doctests into a test suite for `unittest`:
```
def load_tests(loader, tests, ignore):
tests.addTests(doctest.DocTestSuite())
return tests
```
Or, put a function like this into your unit test module to add the doctests from another module (for example, `package.code_module`) into the tests suite which is already there:
```
def load_tests(loader, tests, ignore):
tests.addTests(doctest.DocTestSuite(package.code_module))
return tests
```
When the `unittest.TestLoader` methods `loadTestsFromModule()`, `loadTestsFromName()` or `discover()` are used, unittest builds a test suite including both unit tests and doctests. | First I tried the accepted answer from Andrey, but, at least when running under Python 3.10 with `python -m unittest discover`, it led to the unittest test being run twice. Then I tried to simplify it and use `load_tests`, and to my surprise it worked very well:
So just write both `load_tests` and normal `unittest` tests in a single file and it works!
```py
import doctest
import unittest
import my_module_with_doctests
class ts(unittest.TestCase):
def test_null(self):
self.assertTrue(False)
# No need for any other extra code here
# Load doctests as unittest, see https://docs.python.org/3/library/doctest.html#unittest-api
def load_tests(loader, tests, ignore):
tests.addTests(doctest.DocTestSuite(my_module_with_doctests))
return tests
``` | 271 |
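A self-contained illustration of the `load_tests` protocol from both answers, runnable as a single script (the `add` function and its doctest are hypothetical stand-ins for the real module):

```python
import doctest
import sys
import unittest

def add(a, b):
    """Return a + b.

    >>> add(2, 3)
    5
    """
    return a + b

class NullTest(unittest.TestCase):
    def test_null(self):
        self.assertTrue(True)

def load_tests(loader, tests, ignore):
    # DocTestSuite() with no argument collects doctests from the calling module
    tests.addTests(doctest.DocTestSuite())
    return tests

# loadTestsFromModule() triggers load_tests(), so the suite holds both kinds
suite = unittest.TestLoader().loadTestsFromModule(sys.modules[__name__])
print(suite.countTestCases())  # 2: one unittest case plus one doctest
```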
68,640,124 | I'm super new to praat parselmouth in python and I am a big fan, as it enables analyses without Praat.
So my struggle is that I need formants at a specific sampling rate, but I can't change it here.
If I change the time\_step (and also the time window), the length of the formant list does not change. I am mainly using this code: # <http://blog.syntheticspeech.de/2021/03/10/how-to-extract-formant-tracks-with-praat-and-python/>
and it looks like this:
```
f0min= 75
f0max=300
pointProcess = praat.call(sound, "To PointProcess (periodic, cc)", f0min, f0max)
time_step = 0.01 # or 0.002 see picture
max_formant_num = 5
max_formant_freq = 5000 # men 5000, women 5500
window_length = 0.01 # or 0.002 see picture
preemphasis = 50
formants = praat.call(sound, "To Formant (burg)", time_step, max_formant_num, max_formant_freq, window_length, preemphasis)
numPoints = praat.call(pointProcess, "Get number of points")
print(numPoints)
f1_list = []
f2_list = []
f3_list = []
for point in range(0, numPoints):
point += 1
t = praat.call(pointProcess, "Get time from index", point)
f1 = praat.call(formants, "Get value at time", 1, t, 'Hertz', 'Linear')
f2 = praat.call(formants, "Get value at time", 2, t, 'Hertz', 'Linear')
f3 = praat.call(formants, "Get value at time", 3, t, 'Hertz', 'Linear')
f1_list.append(f1)
f2_list.append(f2)
f3_list.append(f3)
```
I cannot get the sample rate I'd like (e.g. 30 Hz). Can someone help?
[here I am plotting f1 for both time\_steps, but it is still the same length (323) and timepoints](https://i.stack.imgur.com/QOJia.png) | 2021/08/03 | [
"https://Stackoverflow.com/questions/68640124",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11662481/"
] | In `base R` you can use `sub` and backreference `\\1`:
```
sub("(\\d+:\\d+:\\d+\\.\\d+).*", "\\1", x)
[1] "13:30:00.827" "13:30:01.834"
```
or:
```
sub("(.*?)(: <-.*)", "\\1", x)
```
In both cases you divide the string into two capturing groups, the first of which you reference in `sub`'s replacement argument.
In `stringr` you can use `str_extract` and positive lookahead `(?=...)`:
```
library(stringr)
str_extract(x, ".*(?=: <-)")
```
Here you extract the substring that occurs right before the substring `: <-`.
Data:
```
x <- c("13:30:00.827: <- $HCHDG", "13:30:01.834: <- $HCHDG")
``` | This is what you should use:
```
sub(': <- \\$HCHDG', '', dataframe$ColName)
``` | 281 |
1,780,618 | Ok so I have the same python code locally and in the gae cloud.
When I store an entity locally, the ListProperty field with element type datetime.datetime looks like this in the Datastore Viewer:
```
2009-01-01 00:00:00,2010-03-10 00:00:00
```
When I store the same on the cloud, the viewer displays:
```
[datetime.datetime(2009, 1, 1, 0, 0), datetime.datetime(2010, 3, 9, 0, 0)]
```
why the different representation?
This wouldn't bother me, except that when I query on this field on the cloud, the query fails to find the matched entity (it should, and it does locally) - leading me to believe it's this differing representation that is causing the trouble. I should repeat - the code is identical.
Anyone think of a reason why this is happening and a solution to it?
UPDATE:
my query is as follows (using filters):
```
from x import y
from datetime import datetime
from google.appengine.ext import db
q = y.EntityType.all().filter('displayDateRange <=',datetime.now()).filter('displayDateRange >=',datetime.now())
usersResult = q.fetch(100)
print `len(usersResult)`
```
result should be 1, instead it's 0.
Actually it's just the ListProperty with specified value datetime.datetime that is the issue - queries on the StringListProperty is working as expected on the cloud.
I tried the raw filter via interactive console on both local and cloud and cloud gives me no results. So it is a datastore thing, I'm assuming it *must* have something to do with the storage format - I only have one entity value in both datastores with the ListProperty looking like:
```
2009-01-01 00:00:00,2010-03-09 00:00:00
[datetime.datetime(2009, 1, 1, 0, 0), datetime.datetime(2010, 3, 9, 0, 0)]
```
on local and cloud respectively.
Any ideas?
**Further Update**
Replaced the datetime.now() with hardcoded datetime obj - example filter now looks like:
```
y.EntityType.all().filter('displayDateRange <=',datetime(2009,11,24)).filter('displayDateRange >=',datetime(2009,11,24))
```
Note that with the above datetime ListProperty range from 1.1.2009 to 3.9.2010, this should return the above entity - I tried this identical filter on the localhost dev server and it did so. The cloud, with its different representation of the datetime.datetime ListProperty, does not.
Note this is taken from the [current best practice for filtering on date range](http://appengine-cookbook.appspot.com/attachment/?id=ahJhcHBlbmdpbmUtY29va2Jvb2tyEQsSCkF0dGFjaG1lbnQY0ygM)
Any ideas what could be wrong? | 2009/11/23 | [
"https://Stackoverflow.com/questions/1780618",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/178511/"
] | Ok long story short: it's now classed as a bug in the app engine dev server version and is no longer supported in the production cloud datastore.
Filled out a further explanation in a [blog post](http://aleatory.clientsideweb.net/2009/11/28/google-app-engine-datastore-gotchas/), check out point 3. | The problem you see is clearly a conversion to string (calling `__str__` or `__unicode__`) in the local case, while the representation (repr) of your data is displayed on the cloud. But this difference in printing out the results should not be the cause of your failed query on the cloud.
What is your exact query?
**UPDATE** after knowing the query:
I don't really understand why you use these filter conditions:
```
.filter('displayDateRange <=',datetime.now()).filter('displayDateRange >=',datetime.now())
```
There are two problems with this:
* You call `datetime.now()` twice, which can give you different results, which would result in an empty result set. It is especially true on a loaded server with multiple threads/processes of execution active at the same time.
* What you might intended to do with the above pair of filters is checking for equality. But it won't work if the precision of the datetime instance returned by `datetime.now()` and the precision of the datetime stored in the database differs. It is not a good idea to check for equality in the case of floating point numbers and sub-second precision time values in general.
What do you want to achieve with such a pair of filter conditions? | 285 |
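To illustrate the first point with plain datetimes (the commented-out query is a hypothetical rewrite of the one from the question, not tested against App Engine):

```python
from datetime import datetime

# Two separate calls capture two separate instants; under load they can
# differ, so each filter would compare against a different value.
a = datetime.now()
b = datetime.now()
assert a <= b  # b is never earlier, but equality is not guaranteed

# Safer: capture the timestamp once and reuse it in both filters.
now = datetime.now()
# q = (y.EntityType.all()
#        .filter('displayDateRange <=', now)
#        .filter('displayDateRange >=', now))
```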
55,653,169 | I am trying to write some code in Python to retrieve some data from Infoblox. To do this I need to import the Infoblox module.
Can anyone tell me how to do this? | 2019/04/12 | [
"https://Stackoverflow.com/questions/55653169",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11351903/"
] | Try this
```
import os

path_to_directory = "./"
files = [arff for arff in os.listdir(path_to_directory) if arff.endswith(".arff")]
def toCsv(content):
data = False
header = ""
newContent = []
for line in content:
if not data:
if "@attribute" in line:
attri = line.split()
columnName = attri[attri.index("@attribute")+1]
header = header + columnName + ","
elif "@data" in line:
data = True
header = header[:-1]
header += '\n'
newContent.append(header)
else:
newContent.append(line)
return newContent
# Main loop for reading and writing files
for file in files:
with open(path_to_directory+file , "r") as inFile:
content = inFile.readlines()
name,ext = os.path.splitext(inFile.name)
new = toCsv(content)
with open(name+".csv", "w") as outFile:
outFile.writelines(new)
``` | Take a look at the error trace
>
> UnicodeEncodeError: 'ascii' codec can't encode character '\xf3' in position 4: ordinal not in range(128)
>
>
>
Your error suggests you have some encoding problem with the file. Consider first opening the file with the correct encoding and then loading it to the arff loader
```
import codecs
import arff
file_ = codecs.open('LMD.rh.arff', 'r', 'utf-8') # or whatever encoding you have
arff.load(file_) # now this should be fine
```
For reference see [here](https://pythonhosted.org/liac-arff/#unicode) | 286 |
12,391,377 | Python [supports chained comparisons](http://docs.python.org/reference/expressions.html#not-in): `1 < 2 < 3` translates to `(1 < 2) and (2 < 3)`.
I am trying to make an SQL query using SQLAlchemy which looks like this:
```
results = session.query(Couple).filter(10 < Couple.NumOfResults < 20).all()
```
The results I got were not as expected. I've turned the engine's `echo=True` keyword, and indeed - the generated SQL query only included one of the two comparisons.
I can't find any documentation that explicitly says this is forbidden. I assumed that if this type of expression is supported in Python, it should be supported in SQLAlchemy as well.
Why doesn't this work? I have one possible solution in mind (shared in answers), but will be glad to hear other opinions. | 2012/09/12 | [
"https://Stackoverflow.com/questions/12391377",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/388334/"
] | The reason is that Python actually evaluates something akin to this:
```
_tmp = Couple.NumOfResults
(10 < _tmp and _tmp < 20)
```
The `and` operator is unsupported in SQLAlchemy (one should use `and_` instead), and thus chained comparisons are not allowed in SQLAlchemy.
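A pure-Python sketch of why one half of the chain silently disappears; the classes below are stand-ins for SQLAlchemy's expression objects, not its real API:

```python
class Expr:
    """Stand-in for a SQLAlchemy BinaryExpression: a truthy object, not a bool."""
    def __init__(self, text):
        self.text = text

class Column:
    """Stand-in for a column: comparisons build Expr objects instead of booleans."""
    def __lt__(self, other):
        return Expr("col < %r" % other)
    def __gt__(self, other):
        return Expr("col > %r" % other)

col = Column()
result = 10 < col < 20   # really: (10 < col) and (col < 20)
# (10 < col) yields a truthy Expr, so `and` moves on and returns only the
# second Expr -- the lower bound is silently discarded.
print(result.text)  # col < 20
```

Since `and` itself cannot be overloaded, SQLAlchemy never gets a chance to see the first comparison.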
In the original example, one should write this code instead:
```
results = session.query(Couple).filter(and_(10 < Couple.NumOfResults,
Couple.NumOfResults < 20)).all()
``` | SQLAlchemy won't support Python's chained comparisons. Here is the official reason why from author Michael Bayer:
>
> unfortunately this is likely impossible from a python perspective. The mechanism of "x < y < z" relies upon the return value of the two individual expressions. a SQLA expression such as "column < 5" returns a BinaryExpression object, which evaluates as True - therefore the second expression is never called and we are never given a chance to detect the chain of expressions. Furthermore, the chain of expressions would need to be detected and converted to BETWEEN, since SQL doesn't support the chained comparison operators.
> Not including the detection of chains->BETWEEN part, to make this work would require manipulation of the BinaryExpression object's `__nonzero__()` value based on the direction of the comparison operator, so as to force both comparisons. Adding a basic `__nonzero__()` to BinaryExpression that returns False illustrates that it's tolerated pretty poorly by the current codebase, and at the very least many dozens of "if x:" kinds of checks would need to be converted to "if x is None:", but there might be further issues that are more difficult to resolve. For the outside world it might wreak havoc.
> Given that the appropriate SQL operator here is BETWEEN which is easily accessible from the between operator, I don't think the level of bending over backwards and confusing people is worth it so this a "wontfix".
>
>
>
See details at:
<https://bitbucket.org/zzzeek/sqlalchemy/issues/1394/sql-expressions-dont-support-x-col-y> | 287 |
28,260,652 | New to python, my assignment asks me to ask the user for input and then find and print the first letter of each word in the sentence.
so far all I have is
```
phrase = raw_input("Please enter a sentence of 3 or 4 words: ")
```
^ That is all I have. So say the user enters the phrase "hey how are you", I am supposed to find and print the first letter of every word, so it would print "hhay".
I know how to index if it is a string that the programmer types but not when a user inputs the data. | 2015/02/01 | [
"https://Stackoverflow.com/questions/28260652",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4516441/"
] | This does everything that [Ming](https://stackoverflow.com/users/904117/ming) said in a single line.
You can very well understand this code if you read his explanation.
```
phrase = raw_input("Please enter a sentence of 3 or 4 words: ")
output = ''.join([x[0] for x in phrase.split()])
print output
```
Update related to comment (Considers only first 3 words):
```
output = ''.join([x[0] for x in phrase.split()])[:3]
```
Ignoring the last word (Total number of words doesn't matter)
```
output = ''.join([x[0] for x in phrase.split()])[:-1]
``` | Here is a rough outline of the steps you can take. Since this is an assignment, I will leave actually assembling them into a working program up to you.
1. `raw_input` will produce a string.
2. If you have two strings, one in `foo` and one in `bar`, then you can call [`string.split`](https://docs.python.org/2/library/stdtypes.html#str.split) as `foo.split(bar)`, and the result of that will be a list of strings resulting from splitting the contents of `foo` by the separator `bar`. For example, `'a b c'.split(' ') == ['a', 'b', 'c']`.
3. You can slice a string with brackets to retrieve particular characters from it, counting from zero in the leftmost position. For example, `'abcd'[0] == 'a'`.
4. If you have a string `foo` and a list of strings `bar`, then you can call [`string.join`](https://docs.python.org/2/library/string.html#string.join) as `foo.join(bar)` to produce a single string of the elements of `foo` glued together with `bar`. For example, `'x'.join(['a', 'b', 'c']) == 'axbxc'`.
5. You can `print` the constructed output.
This is of course only one of many approaches you could take. | 288 |
44,180,066 | I am asking if it's possible to create a DictField attribute in Django REST framework. If yes, is it possible to populate it like a normal dictionary in Python? I want to use it as a foreign key to store data. | 2017/05/25 | [
"https://Stackoverflow.com/questions/44180066",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8002200/"
] | The best way that I know is to use [momentjs](https://momentjs.com/). I have used it with angular 1.x.x with no problems. It's pretty easy to use; check this out. You can add the following line:
```
nm.pick = moment(nm.pick).format('DD-MM-YYYY');
```
This should solve your problem. | For `type="date"` binding:
```js
var app = angular.module("MyApp", []).controller("MyCtrl", function($scope, $filter) {
$scope.nm = {};
$scope.nm.pick = new Date($filter('date')(new Date(), "yyyy-MM-dd"));
});
```
```html
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.4/angular.min.js"></script>
<body ng-app="MyApp" ng-controller="MyCtrl">
<input type="date" required ng-model="nm.pick" id="dpicker ">
</body>
```
Reading other answers, I'd advise the same: go with `moment.js`, as it is an expert library for playing around with date/time and different timezones. | 290 |
1,183,420 | I am a .NET / SQL Server developer in my day job, and on the side I do some Objective-C development for the iPhone. I would like to develop a web service, and since Dreamhost supports MySQL, Python, Ruby on Rails and PHP5, I would like to create it using one of those languages. If you had no experience in either Python, Ruby on Rails or PHP, which would you go with and why? The service basically just takes a request and talks to a MySQL database.
Note: I was planning on using the SOAP protocol, though I am open to suggestions since I have a clean slate with these languages. | 2009/07/26 | [
"https://Stackoverflow.com/questions/1183420",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/77393/"
] | Ruby-on-rails, Python and PHP would all be excellent choices for developing a web service in. All the languages are capable (with of course Ruby being the language that Ruby on Rails is written in), have strong frameworks if that is your fancy (Django being a good python example, and something like Drupal or CakePHP being good PHP examples) and can play nicely with MySql.
I'd say that it would depend mostly on your past experience and what you'd be the most comfortable with. Assuming that you're developing C# on .NET and have experience with Objective-C PHP may be a good choice because it is most certainly from the C family of languages. So the syntax might be more familiar and a bit easier to deal with.
I'm a PHP developer so I'll give you that slant and let more knowledgeable developers with the others give theirs as well. PHP is tightly integrated with Apache, which can make some of the more mundane tasks that you'd have to handle with the others a bit more trivial (though when working with a framework those are usually removed). The [PHP documentation](http://www.php.net/manual/en/) is second to none and is a great resource for getting up and going easily. It has decent speed and there are good caching mechanisms out there to get more performance out of it. I know that getting up and running with PHP on Dreamhost is trivial. I haven't done it in the other instances although it wouldn't surprise me if those were just as easy as well.
I'd suggest digging a bit more into the documentation and frameworks for each language to find out what suits you best. | I have developed in Python and PHP and my personal preference would be Python.
Django is a great, easy to understand, light-weight framework for Python. [Django Site](http://www.djangoproject.com/)
If you went the PHP route, I would recommend Kohana. [Kohana Site](http://www.kohanaphp.com/) | 291 |
49,496,096 | I'm learning Python and I'm not sure why the output of the code below is only one "False" and not many, given that I created a loop and the list of dicts has 5 elements.
I was expecting an output like
```
"False"
"False"
"False"
"False"
"False"
```
```
movies = [{
"name": "Usual Suspects"
}, {
"name": "Hitman",
}, {
"name": "Dark Knight",
},{
"name": "The Choice",
}, {
"name": "Colonia",}
]
def peliMayor(p):
index= -1
for n in movies:
index= index + 1
if (movies[index]['name'] == p):
return print("True")
else:
return print("False")
peli = "Thriller"
peliMayor(peli)
``` | 2018/03/26 | [
"https://Stackoverflow.com/questions/49496096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5544653/"
] | Try using the Remote VSCode plugin as explained here: [Using Remote VSCode](https://spin.atomicobject.com/2017/12/18/remote-vscode-file-editing/)
This discussion is exactly about your problem: [VSCode 13643 issue Github](https://github.com/Microsoft/vscode/issues/13643)
EDIT: I have recently found a new VSCode plugin on Github: [vs-deploy](https://github.com/mkloubert/vs-deploy). It was designed to deploy files and folders remotely very quickly. It seems to be working and I haven't found any bugs so far. It works with FTP, SFTP (SSH) and many other protocols. | The [SSH.NET nuget Package](https://www.nuget.org/packages/SSH.NET) can be used quite nicely to copy files and folders.
Here is an example:
```
var host = "YourServerIpAddress";
var port = 22;
var user = "root"; // TODO: fix
var yourPathToAPrivateKeyFile = @"C:\Users\Bob\mykey"; // Use certificate for login
var authMethod = new PrivateKeyAuthenticationMethod(user, new PrivateKeyFile(yourPathToAPrivateKeyFile));
var connectionInfo = new ConnectionInfo(host, port, user, authMethod);
using (var client = new SftpClient(connectionInfo))
{
client.Connect();
if (client.IsConnected)
{
//TODO: Copy folders recursivly etc.
DirectoryInfo source = new DirectoryInfo(@"C:\your\probject\publish\path");
foreach (var file in source.GetFiles())
{
client.UploadFile(File.OpenRead(file.FullName), $"/home/yourUploadPath/{file.Name}", true);
}
}
}
```
When you create an upload console application using the code above, you should be able to trigger an upload automatically by using post-build events, adding a section like this to your project file.
```
<Target Name="PostBuild" AfterTargets="PostBuildEvent">
<Exec Command="path to execute your script or application" />
</Target>
```
If you prefer to do the same more manually, you can run
```
dotnet build --configuration Release
```
followed by a
```
dotnet publish ~/projects/app1/app1.csproj
```
and then use the code above to perform an upload. | 301 |
637,399 | I admit the Linux network system is somewhat foreign to me; I know enough of it to configure routes manually and assign a static IP if necessary.
So, quick question: in the ifconfig configuration files, is it possible to add a post-connect hook to a Python script, then use that script to reassign a hostname in /etc/hosts and spawn off a new process to do other things once the hostname has been updated?
This is a "fun" project for me to solve an annoyance in my daily routine. When I boot up my workstation in the morning, the DHCP service assigns it a IP at random. So I usually stop what I'm doing, lookup my new IP, type that IP into my laptop and get synergy running so I can share the two machines. I figure I lose 10-15 minutes a day doing this everyday of the week and I've never really messed with linux's networking system so it would ultimately pan out.
I already figured my Python script would have to run as root, therefore I'd store it in /root or somewhere else that's safe. I found a similar question on Stack Overflow that pointed me in the direction of <http://www.secdev.org/projects/scapy/index.html>, a raw packet toolset to work with ARP. Editing the host file is a snap... just wondering what side effects trying to put this hook into a core service might cause. | 2009/03/12 | [
"https://Stackoverflow.com/questions/637399",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9908/"
] | Just make sure Avahi / Bonjour's running, then type *hostname*.local (or also try *hostname*.localdomain) - it resolves using mDNS, so you don't have to care what your IP is or rig /etc/hosts. | You could also use **arp-scan** (a Debian package of the name exists, not sure about other distributions) to scan your whole network. Have a script parse its output and you'll be all set. | 303 |
52,331,595 | I took a look at [this question](https://stackoverflow.com/questions/1098549/proper-way-to-use-kwargs-in-python) but it doesn't exactly answer my question.
As an example, I've taken a simple method to print my name.
```
def call_me_by_name(first_name):
print("Your name is {}".format(first_name))
```
Later on, I realized that, optionally, I would also like to be able to print the middle name and last name. I made the following changes to accommodate that using \*\*kwargs, fearing that in the future I might have to add more fields for the name itself (such as a 3rd, 4th, 5th name etc.)
I decided to use \*\*kwargs
```
def call_me_by_name(first_name,**kwargs):
middle_name = kwargs['middle_name'] if kwargs.get('middle_name') else ""
last_name = kwargs['last_name'] if kwargs.get('last_name') else ""
print("Your name is {} {} {}".format(first_name,middle_name,last_name))
```
My only concern here is that as I continue to implement support for more names, I end up writing one line of code for every single keyword argument that may or may not come my way. I'd like to find a solution that is as pythonic as possible. Is there a better way to achieve this ?
**EDIT 1**
I want to use keyword arguments since this is just an example program. The actual use case is to parse through a file. The keyword arguments as of now would support parsing a file from
1) A particular byte in the file.
2) A particular line number in the file.
Only one of these two conditions can be set at any given point in time (since it's not possible to read from a particular byte offset in the file and from a line number at the same time.) but there could be more such conditions in the future such as parse a file from the first occurrence of a character etc. There could be 10-20 different such conditions my method should support BUT only one of those conditions would ever be set at any time by the caller. I don't want to have 20-30 different IF conditions unless there's no other option. | 2018/09/14 | [
"https://Stackoverflow.com/questions/52331595",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4020238/"
] | You have two separate questions with two separate pythonic ways of answering those questions.
1- Your first concern was that you don't want to keep adding a new line for every argument you start supporting when formatting a string. The way to work around that is to use a `defaultdict`, so a missing keyword argument comes back as an empty string, together with `str.format_map`, which accepts a dict as the source of keyword arguments for formatting. This way, you only have to update your string and the keyword arguments you want to print:
```
from collections import defaultdict
def call_me_by_name(**kwargs):
default_kwargs = defaultdict(str, kwargs)
print("Your name is {first_name} {second_name} {third_name}".format_map(default_kwargs))
```
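A quick self-contained demo of this behavior (repeating the function above; the names passed in are arbitrary):

```python
from collections import defaultdict

def call_me_by_name(**kwargs):
    # defaultdict(str, ...) returns '' for any name not supplied
    default_kwargs = defaultdict(str, kwargs)
    print("Your name is {first_name} {second_name} {third_name}".format_map(default_kwargs))

call_me_by_name(first_name="Ada", third_name="Lovelace")
# -> Your name is Ada  Lovelace   (the missing second_name renders as '')
```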
2- If, on the other hand (answering your second question), you want to provide different behavior depending on the keyword arguments, like changing the way a string looks or providing different file lookup functionalities, without using if statements, you have to add different functions/methods and call them from this common function/method. Here are two ways of doing that:
OOP:
```
class FileLookup:
def parse(self, **kwargs):
return getattr(self, next(iter(kwargs)))(**kwargs)
def line_number(self, line_number):
print('parsing with a line number: {}'.format(line_number))
def byte_position(self, byte_position):
print('parsing with a byte position: {}'.format(byte_position))
fl = FileLookup()
fl.parse(byte_position=10)
fl.parse(line_number=10)
```
Module:
```
def line_number(line_number):
print('parsing with a line number: {}'.format(line_number))
def byte_position(byte_position):
print('parsing with a byte position: {}'.format(byte_position))
def parse(**kwargs):
return globals()[next(iter(kwargs))](**kwargs)
parse(byte_position=29)
parse(line_number=29)
``` | You can simplify it by:
```
middle_name = kwargs.get('middle_name', '')
``` | 308 |
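Applied to the function from the question, the whole body then collapses to (a quick sketch):

```python
def call_me_by_name(first_name, **kwargs):
    middle_name = kwargs.get('middle_name', '')  # '' when the key is absent
    last_name = kwargs.get('last_name', '')
    print("Your name is {} {} {}".format(first_name, middle_name, last_name))

call_me_by_name("John", last_name="Doe")
# -> Your name is John  Doe
```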
12,830,838 | Hello, I have what I hope is an easy problem to solve. I am attempting to read a csv file and write a portion into a list. I need to determine the index and the value in each row and then summarize.
So the row will have 32 values... each value is a classification (class 0, class 1, etc.) with a number associated with it. I need a pythonic solution to make this work.
```
import os,sys,csv
csvfile=sys.argv[1]
f=open(csvfile,'rt')
reader=csv.reader(f)
classes=[]
for row in reader:
classes.append(row[60:92])
f.close()
classes = [' ', '1234', '645', '9897'], [' ', '76541', ' ', '8888']
```
How would I extract the index values from each list to get a sum for each?
for example: 0=(' ', ' ') 1=('1234', '76541') 2=('645', ' ') 3=('9897', '8888')
then find the sum of each
```
class 0 = 0
class 1 = 77775
class 2 = 645
class3 = 18785
```
Any assistance would be greatly appreciated | 2012/10/11 | [
"https://Stackoverflow.com/questions/12830838",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1736554/"
] | I find your use case a bit difficult to understand, but does this list comprehension give you some new ideas about how to solve your problem?
```
>>> classes = [' ', '1234', '645', '9897'], [' ', '76541', ' ', '8888']
>>> [sum(int(n) for n in x if n != ' ') for x in zip(*classes)]
[0, 77775, 645, 18785]
``` | ```
>>> classes = [[' ', '1234', '645', '9897'], [' ', '76541', ' ', '8888']]
>>> my_int = lambda s: int(s) if s.isdigit() else 0
>>> class_groups = dict(zip(range(32), zip(*classes)))
>>> class_groups[1]
('1234', '76541')
>>> class_sums = {}
>>> for class_ in class_groups:
...     group_sum = sum(map(my_int, class_groups[class_]))
...     class_sums[class_] = group_sum
...
>>> class_sums[1]
77775
>>> class_sums[3]
18785
>>>
``` | 318 |
3,484,976 | I'm trying to write a Python client for a WSDL service. I'm using the [Suds](https://fedorahosted.org/suds/wiki/Documentation) library to handle the SOAP messages.
When I try to call the service, I get a Suds exception: `<rval />` not mapped to message part. If I set the `retxml` Suds option I get XML which looks OK to me.
Is the problem with the client code? Am I missing some flag which will allow Suds to correctly parse the XML? Alternatively, the problem could be with the server. Is the XML not structured correctly?
My code is a follows (method names changed):
```
c = Client(url)
p = c.factory.create('MyParam')
p.value = 100
c.service.run(p)
```
This results in a Suds exception:
```
File "/home/.../test.py", line 38, in test
res = self.client.service.run(p)
File "/usr/local/lib/python2.6/dist-packages/suds-0.3.9-py2.6.egg/suds/client.py", line 539, in __call__
return client.invoke(args, kwargs)
File "/usr/local/lib/python2.6/dist-packages/suds-0.3.9-py2.6.egg/suds/client.py", line 598, in invoke
result = self.send(msg)
File "/usr/local/lib/python2.6/dist-packages/suds-0.3.9-py2.6.egg/suds/client.py", line 627, in send
result = self.succeeded(binding, reply.message)
File "/usr/local/lib/python2.6/dist-packages/suds-0.3.9-py2.6.egg/suds/client.py", line 659, in succeeded
r, p = binding.get_reply(self.method, reply)
File "/usr/local/lib/python2.6/dist-packages/suds-0.3.9-py2.6.egg/suds/bindings/binding.py", line 151, in get_reply
result = self.replycomposite(rtypes, nodes)
File "/usr/local/lib/python2.6/dist-packages/suds-0.3.9- py2.6.egg/suds/bindings/binding.py", line 204, in replycomposite
raise Exception('<%s/> not mapped to message part' % tag)
Exception: <rval/> not mapped to message part
```
The returned XML (modified to remove customer identifiers)
```
<S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/">
<S:Body>
<ns2:getResponse xmlns:ns2="http://api.xxx.xxx.com/api/">
<rval xmlns="http://xxx.xxx.xxx.com/api/">
<ns2:totalNumEntries>
2
</ns2:totalNumEntries>
<ns2:entries>
<ns2:id>
1
</ns2:id>
</ns2:entries>
<ns2:entries>
<ns2:id>
2
</ns2:id>
</ns2:entries>
</rval>
</ns2:getResponse>
</S:Body>
</S:Envelope>
``` | 2010/08/14 | [
"https://Stackoverflow.com/questions/3484976",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4697/"
] | Possible dup of [What does suds mean by "<faultcode/> not mapped to message part"?](https://stackoverflow.com/questions/2963094/what-does-suds-mean-by-faultcode-not-mapped-to-message-part/18948575#18948575)
Here is my answer from that question:
I had a similar issue where the call was successful, and Suds crashed on parsing the response from the client. The workaround I used was to use the [Suds option to return raw XML](http://jortel.fedorapeople.org/suds/doc/suds.options.Options-class.html) and then use [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/) to parse the response.
Example:
```
client = Client(url)
client.set_options(retxml=True)
soapresp_raw_xml = client.service.submit_func(data)
soup = BeautifulStoneSoup(soapresp_raw_xml)
value_i_want = soup.find('ns:NewSRId')
``` | This exception actually means that the response from the SOAP service contains the tag `<rval>`, which doesn't exist in the WSDL schema of the service.
Keep in mind that the Suds library caches the WSDL schema; that is why the problem may occur if the WSDL schema was changed recently. Then the responses match the new schema, but are validated by the Suds client against the old one. In this case `rm /tmp/suds/*` will help you. | 320 |
35,122,185 | I have a Profiles app that has a model called Profile; I use that model to extend the Django built-in user model without subclassing it.
**models.py**
```
class BaseProfile(models.Model):
user = models.OneToOneField(settings.AUTH_USER_MODEL, related_name='owner',primary_key=True)
supervisor = models.ForeignKey(settings.AUTH_USER_MODEL, related_name='supervisor', null=True, blank=True)
@python_2_unicode_compatible
class Profile(BaseProfile):
def __str__(self):
return "{}'s profile". format(self.user)
```
**admin.py**
```
class UserProfileInline(admin.StackedInline):
model = Profile
class NewUserAdmin(NamedUserAdmin):
inlines = [UserProfileInline ]
admin.site.unregister(User)
admin.site.register(User, NewUserAdmin)
```
**the error is**
```
<class 'profiles.admin.UserProfileInline'>: (admin.E202) 'profiles.Profile' has more than one ForeignKey to 'authtools.User'.
```
Obviously I want to select a user to be a supervisor of another user. I think the relationship in the model is OK; the one that's complaining is the admin.py file. Any idea? | 2016/02/01 | [
"https://Stackoverflow.com/questions/35122185",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2031794/"
] | You need to use multiple inline admins.
When you have a model with multiple ForeignKeys to the same parent model, you'll need to specify the `fk_name` attribute in your inline admin:
```
class UserProfileInline(admin.StackedInline):
model = Profile
fk_name = "user"
class SupervisorProfileInline(admin.StackedInline):
model = Profile
fk_name = "supervisor"
class NewUserAdmin(NamedUserAdmin):
inlines = [UserProfileInline, SupervisorProfileInline]
```
Django has some documentation on dealing with this: <https://docs.djangoproject.com/en/1.9/ref/contrib/admin/#working-with-a-model-with-two-or-more-foreign-keys-to-the-same-parent-model> | Here is an example that I have just tested to be working
```
class Task(models.Model):
owner = models.ForeignKey(User, related_name='task_owner')
assignee = models.ForeignKey(User, related_name='task_assigned_to')
```
In admin.py
```
class TaskInLine(admin.TabularInline):
    model = Task
    fk_name = 'owner'
class UserAdmin(admin.ModelAdmin):
    inlines = [TaskInLine]
@admin.register(Task)
class MyModelAdmin(admin.ModelAdmin):
    list_display = ['owner', 'assignee']
``` | 321 |
15,351,081 | For example let's say I have a file called myscript.py
This file contains the following code.
```
def foo(var):
return var
```
How would I call the function foo with argument var on the command line?
I know that I can go to the directory myscript.py is placed in and type:
```
>>> python myscript.py
```
Which will run myscript.py. The only problem is myscript.py doesn't automatically call foo when it is run.
I have tried using
```
if __name__ == "__main__":
foo( )
```
Which does not work for me. For some reason when I do that nothing happens. I get no error message and nothing is called. | 2013/03/12 | [
"https://Stackoverflow.com/questions/15351081",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2158898/"
] | You don't get any output because you don't generate any. Try calling [`print`](http://docs.python.org/3/library/functions.html#print):
```
def foo(var):
print(var)
if __name__ == '__main__':
foo('Hello, world')
``` | You have to use the `sys` module to pass arguments from the command line.
You can do this:
```
import sys
def foo(var):
return var
if __name__ == '__main__':
# arguments check
if len(sys.argv) != 2:
print "USAGE: %s <value>" % sys.argv[0]
sys.exit(1)
# get the agument so as to use it to the function call
arg = sys.argv[1]
# call the function using this argument
val = foo(arg)
# print the returned value
print val
```
Then you can run your python script by this command:
`python myscript.py 3`
giving as argument e.g. the number 3 | 322 |
22,214,463 | The title is self explanatory. What is going on here? How can I get this not to happen? Do I really have to change all of my units (it's a physics problem) just so that I can get a big enough answer that python doesn't round 1-x to 1?
code:
```
import numpy as np
import math
vel=np.array([5e-30,5e-30,5e-30])
c=9.7156e-12
def mag(V):
return math.sqrt(V[0]**2+V[1]**2+V[2]**2)
gam=(1-(mag(vel)/c)**2)**(-1/2)
print(mag(vel))
print(mag(vel)**2)
print(mag(vel)**2/(c**2))
print(1-mag(vel)**2/(c**2))
print(gam)
```
output:
```
>>> (executing lines 1 to 17 of "<tmp 1>")
8.660254037844386e-30
7.499999999999998e-59
7.945514251743055e-37
1.0
1.0
>>>
``` | 2014/03/06 | [
"https://Stackoverflow.com/questions/22214463",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3347826/"
] | In python [decimal](http://docs.python.org/library/decimal.html#module-decimal) may work and maybe [mpmath](https://code.google.com/p/mpmath/).
as is discussed in this SO [article](https://stackoverflow.com/questions/11522933/python-floating-point-arbitrary-precision-available)
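A minimal sketch of the `decimal` route, reusing the numbers from the question (the 50-digit precision is an arbitrary choice):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # work with 50 significant digits instead of float's ~16

ratio = Decimal("7.945514251743055e-37")  # (|v|/c)**2 from the question
gam = (1 - ratio) ** Decimal("-0.5")      # the Lorentz factor

print(1 - ratio)  # a long string of 9s now, not 1.0
print(gam)        # slightly above 1, as physics expects
```

(`mpmath` has an equivalent knob, `mp.dps`, if you prefer that library.)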
If you are willing to use Java (instead of python), you might be able to use BigDecimal, or [apfloat](http://www.apfloat.org/) or [JScience](http://jscience.org/).
8.66e-30 only uses 3 sigs, but to illustrate 1 minus that would require more than 30. With any more than 16 significant figures you will need to represent digits using something else, like very long strings. But it's difficult to do math with long strings. You could also perform binary computations on very long arrays of byte values. The byte values could be made to represent a very large integer value modified by a scale factor of your choice. So if you can support an integer larger than 1E60, then you can alternately scale the value so that you can represent 1E-60 with a maximum value of 1. You can probably do that with about 200 bits or 25 bytes, and with 400 bits you should be able to precisely represent the entire range of 1E60 to 1E-60. There may already be utilities out there that can perform calculations of this type, used by people who work in math or security, as they may want to represent pi to a thousand places for instance, which you can't do with a double.
The other useful trick is to use scale factors. That is, in your original coordinate space you cannot do the subtraction because the digits will not be able to represent the values. But, if you make the assumption that if you are making small adjustments you do not simultaneously care about large adjustments, then you can perform a transform on the data. So for instance you subtract 1 from your numbers. Then you could represent 1-1E-60 as -1E-60. You could do as many operations very precisely in your transform space, but knowing full well that if you attempt to convert them back from your transform space they will be lost as irrelevant. This sort of tactic is useful when zooming in on a map. Making adjustments on the scale of micrometers in units of latitude and longitude for your single precision floating point DirectX calculations won't work. But you could temporarily change your scale while you are zoomed in so that the operations will work normally.
So complicated numbers can then be represented by a big number plus a second number that represents the small scale adjustment. So for instance, if you have 16 digits in a double, you can use the first number to represent the large portion of the value, like from 1 to 1E16, and the second double to represent the additional small portion. Except that using 16 digits might be flirting with errors from the double's ability to represent the big value accurately so you might use only 15 or 14 or so just to be safe.
```
1234567890.1234567890
```
becomes
```
1.234567890E9 + 1.23456789E-1.
```
and basically the bigger your precision the more terms your complex number gets. But while this sort of thing works pretty well when each term is more or less mathematically independent, in cases where you have to do lots of rigorous calculations that operate across the scales, doing the book-keeping between these values would likely be more of a pain than it would be worth. | I think you won't get the result you are expecting because you are dealing with computer math limits. The thing about this kind of calculations is that nobody can avoid this error, unless you make/find some models that has infinite (theoretically) decimals and you can operate with them. If that is too much for the problem you are trying to solve, maybe you just have to be careful and try to do whatever you need but trying to handle these errors in calculations.
There are a lot of bibliography out there with many different approaches to handle the errors in calculations that helps not to avoid but to minimize these errors.
Hope my answer helps and don't disappoint you.. | 323 |
73,007,506 | Hi I want to clean up my code where I am converting a list of items to integers whenever possible in the python programming language.
```
example_list = ["4", "string1", "9", "string2", "10", "string3"]
```
So my goal (which is probably very simple) is to convert all numeric items in the list from strings to integers and keep the actual strings as strings. The desired output of the program should be:
```
example_list = [4, "string1", 9, "string2", 10, "string3"]
```
I am looking for a nice clean method as I am sure that it is possible. I am curious about what nice methods there are. | 2022/07/16 | [
"https://Stackoverflow.com/questions/73007506",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13840524/"
] | Perhaps add this somewhere:
```
<style>
div {
max-width: 306px !important;
}
</style>
``` | u can try and use max width set to 306px for that part | 324 |
61,412,481 | I am trying to access Google Sheet (read only mode) from Python (runs in GKE).
I am able to get application default creds, but getting scopes issue (as I am missing `https://www.googleapis.com/auth/spreadsheets.readonly` scope). See code below:
```
from googleapiclient.discovery import build
from oauth2client import client
creds=client.GoogleCredentials.get_application_default()
service = build('sheets', 'v4', credentials=creds)
sheet = service.spreadsheets()
sheet.values().get(spreadsheetId='XXXXXXXXXX', range='Sheet1!A:C').execute()
```
The output is:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.5/dist-packages/oauth2client/_helpers.py", line 133, in positional_wrapper
return wrapped(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/googleapiclient/http.py", line 840, in execute
raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 403 when requesting https://sheets.googleapis.com/v4/spreadsheets/XXXXXX/values/Sheet1%21A%3AC?alt=json returned "Request had insufficient authentication scopes.">
```
I tried to read any documentation available, but all of the relevant scope-related material uses an external service account JSON file.
Is there a way to use Application Default and add the required scope? | 2020/04/24 | [
"https://Stackoverflow.com/questions/61412481",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/408628/"
] | You need to separate tick values and tick labels.
```
ax.set_xticks([]) # values
ax.set_xticklabels([]) # labels
``` | Just change it to:
```
self.ax.set_xticks([])
self.ax.set_yticks([])
```
The error says that the second parameter cannot be given positionally, meaning that you need to explicitly give the parameter name minor=False for the second parameter or remove the second parameter in your case. | 325 |
17,713,692 | I am trying to simplify an expression using z3py but am unable to find any documentation on what different tactics do. The best resource I have found is a [stack overflow question](https://stackoverflow.com/questions/16167088/z3-tactics-are-not-available-via-online-interface) that lists all the tactics by name.
Is someone able to link me to detailed documentation on the tactics available?
The on line python tutorials are not sufficient.
Or can someone recommend a better way to accomplish this.
An example of the problems is an expression such as:
`x < 5, x < 4, x < 3, x = 1` I would like this to simplify down to `x = 1`.
Using the tactic `unit-subsume-simplify` appears works for this example.
But when I try a more complicated example such as `x > 1, x < 5, x != 3, x != 4` I get `x > 1, x < 5, x ≠ 3, x ≠ 4` as the result. When I would like `x = 2`.
What is the best approach to achieve this type of simplification using z3py?
[My current solution](http://rise4fun.com/Z3Py/QLRG).
Thanks Matt | 2013/07/18 | [
"https://Stackoverflow.com/questions/17713692",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2016669/"
] | What's going on is that you're returning right after the first line of the file doesn't match the id you're looking for. You have to do this:
```
def query(id):
for line in file:
table = {}
(table["ID"],table["name"],table["city"]) = line.split(";")
if id == int(table["ID"]):
file.close()
return table
# ID not found; close file and return empty dict
file.close()
return {}
``` | I followed approach as shown in code below to return a dictionary. Created a class and declared dictionary as global and created a function to add value corresponding to some keys in dictionary.
\*\*Note have used Python 2.7 so some minor modification might be required for Python 3+
```
class a:
global d
d={}
def get_config(self,x):
if x=='GENESYS':
d['host'] = 'host name'
d['port'] = '15222'
return d
```
Calling get\_config method using class instance in a separate python file:
```
from constant import a
class b:
a().get_config('GENESYS')
print a().get_config('GENESYS').get('host')
print a().get_config('GENESYS').get('port')
``` | 326 |
73,381,445 | I need to have bash shell commands run through python in order to be universal with pc and mac/linux. `./bin/production` doesn't work in powershell and putting 'bash' in front would give an error that it doesn't recognize 'docker' command
./bin/production contents:
```
#!/bin/bash
docker run --rm -it \
--volume ${PWD}/prime:/app \
$(docker build -q docker/prime) \
npm run build
```
This is the python script:
```
import subprocess
from python_on_whales import docker
cmd = docker.run('docker run --rm -it --volume ${PWD}/prime:/app $(docker build -q docker/prime) npm run build')
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
out, err = p.communicate()
print(out)
```
This is the error I get when running the python script:
python\_on\_whales.exceptions.NoSuchImage: The docker command executed was `C:\Program Files\Docker\Docker\resources\bin\docker.EXE image inspect docker run --rm -it --volume ${PWD}/prime:/app $(docker build -q docker/prime) npm run build`.
It returned with code 1
The content of stdout is '[]
'
The content of stderr is 'Error response from daemon: no such image: docker run --rm -it --volume ${PWD}/prime:/app $(docker build -q docker/prime) npm run build: invalid reference format: repository name must be lowercase
'
Running the command, `docker run --rm -it --volume ${PWD}/prime:/app $(docker build -q docker/prime) npm run build`, in one long line in powershell works but we want a universal standard command for both pc and mac/linux
"https://Stackoverflow.com/questions/73381445",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19779700/"
] | ### Modification points:
* In your showing script, it seems that `payload` is not used.
* When `getValue()` is used in a loop, the process cost becomes high. [Ref](https://gist.github.com/tanaikech/d102c9600ba12a162c667287d2f20fe4)
When these points are reflected in a sample script for achieving your goal, it becomes as follows.
### Sample script:
When your showing script is modified, how about the following modification?
```js
function testRun() {
var ss = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
var LastRow = ss.getLastRow();
var LastCol = ss.getLastColumn();
var [header, ...values] = ss.getRange(1, 1, LastRow, LastCol).getValues();
var arr = [];
for (var i = 0; i < LastRow - 1; i++) {
var temp = {};
for (var j = 0; j < LastCol; j++) {
temp[header[j]] = values[i][j];
}
arr.push(temp);
}
console.log(JSON.stringify(arr))
}
```
* When this script is run, `arr` is `[{"FirstName":"Kim","LastName":"hana","Phone":"010-1234-5648","Email":"abc@gmail.com"}]`.
* As another apporach, I thought that you might be able to also use the following sample script.
```
var ss = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
var [header, ...values] = ss.getRange(1, 1, ss.getLastRow(), ss.getLastColumn()).getValues();
var arr = values.map(r => header.reduce((o, h, j) => (o[h] = r[j], o), {}));
console.log(JSON.stringify(arr))
``` | Your code creates separate objects in each iteration. Create an empty object for every Person, then add the properties you need.
This should look something like this:
```js
for(var i=2; i<=LastRow; i++){
var payload = {}
for(var j=1; j<=LastCol; j++){
    payload[arr[j-1]] = ss.getRange(i,j).getValue();
}
}
``` | 332 |
6,499,839 | >
> **Possible Duplicate:**
>
> [Flatten (an irregular) list of lists in Python](https://stackoverflow.com/questions/2158395/flatten-an-irregular-list-of-lists-in-python)
>
>
>
I have the following list --
```
[1,[2,3],4,[5,[6,7]]]
```
And I need to make it flat --
```
[1,2,3,4,5,6,7]
```
To do this, I am currently using a `for` loop with `isinstance`, with the number of loops being `#nests - 1`.
What woud be the simplest way to make the nested list flat? Thank you.
A similar question which deals with making a flat list out of nested lists (only) can be found here: [Making a flat list out of list of lists in Python](https://stackoverflow.com/questions/952914/making-a-flat-list-out-of-list-of-lists-in-python). | 2011/06/27 | [
"https://Stackoverflow.com/questions/6499839",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/651174/"
] | Hauled from [webhelpers.misc.flatten](http://sluggo.scrapping.cc/python/WebHelpers/modules/misc.html?highlight=flatten#webhelpers.misc.flatten)
```
def flatten(iterable):
"""Recursively iterate lists and tuples.
"""
for elm in iterable:
if isinstance(elm, (list, tuple)):
for relm in flatten(elm):
yield relm
else:
yield elm
```
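Applied to the question's input this gives (the function is restated so the snippet runs on its own):

```python
def flatten(iterable):
    # same generator as above, restated for a self-contained example
    for elm in iterable:
        if isinstance(elm, (list, tuple)):
            for relm in flatten(elm):
                yield relm
        else:
            yield elm

print(list(flatten([1, [2, 3], 4, [5, [6, 7]]])))  # [1, 2, 3, 4, 5, 6, 7]
```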
**EDIT:** The iterable test here is quite naive and can be improved by checking for the presence of `__iter__` or an instance of the `collections.Iterable` abstract base class.
**EDIT 2:** @ChristopheD is absolutely correct, this is a dupe so head over to the linked question. | Try this:
```
def flat(mlist):
result = []
for i in mlist:
if type(i) is list:
result.extend(flat(i))
else:
result.append(i)
return result
``` | 333 |
72,081,872 | Hi, I'm trying to make RBM model code using the pytorch module but got an issue going from the visible layer to the hidden layer. Here is the problem part of the code.
```py
h_bias = (self.h_bias.clone()).expand(10)
v = v.clone().expand(10)
p_h = F.sigmoid(
F.linear(v, self.W, bias=h_bias)
)
sample_h = self.sample_from_p(p_h)
return p_h, sample_h
```
and each parameters size is here.
```
h_bias v self.W
torch.Size([10]) torch.Size([10]) torch.Size([1, 10])
1 1 2
```
```
Traceback (most recent call last):
File "/Users/bahk_insung/Documents/Github/ecg-dbn/model.py", line 68, in <module>
v, v1 = rbm(sample_data)
File "/Users/bahk_insung/miniforge3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/bahk_insung/Documents/Github/ecg-dbn/RBM.py", line 54, in forward
pre_h1, h1 = self.v_to_h(v)
File "/Users/bahk_insung/Documents/Github/ecg-dbn/RBM.py", line 36, in v_to_h
F.linear(v, self.W, bias=h_bias)
File "/Users/bahk_insung/miniforge3/lib/python3.9/site-packages/torch/nn/functional.py", line 1849, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: output with shape [1] doesn't match the broadcast shape [10]
```
I think the dimensions and sizes are not matched; that's why this happened. But I can't find any solutions. Please help me, guys. Thank you. | 2022/05/02 | [
"https://Stackoverflow.com/questions/72081872",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9713429/"
] | If you look at the pytorch functional.linear documentation it shows the weight parameter can be either 1D or 2D: "Weight: (out\_features, in\_features) or (in\_features)". Since your weight is 2D ([1, 10]) it indicates that you are trying to create an output of size "1" with an input size of "10". The linear transform does not know how to change your inputs of size 10 into an output of size 1. If your weight will always be [1, N] then you can use squeeze to change it to 1D like so:
```
F.linear(v, self.W.squeeze(), bias=h_bias)
```
This would create an output of size 10. | I solved with torch.repeat() function. As mandias said...
>
> you are trying to create an output of size "1" with an input size of "10". The linear transform does not know how to change your inputs of size 10 into an output of size 1.
>
>
>
That was my problem. So I changed the weight input like this.
```py
w = self.W.clone().repeat(10, 1)
```
Originally, self.W size is [1, 10]. After used repeat function then changed as [10, 10]. Input is 10 size, and Output is 10.
Tbh not sure this code is right thing but I HAVE TO run this code quickly... anyway thank you guys. | 334 |
49,259,985 | I am reading data from a python dictionary and trying to add more book elements in the below tree. Below is just an example; i need to copy an element with its child(s) but replace the content, in this case i need to copy the book element but replace title and author.
```
<store>
<bookstore>
<book>
<title lang="en">IT book</title>
<author>Some IT Guy</author>
</book>
</bookstore>
</store>
```
I use this code:
```
root = et.parse('Template.xml').getroot()
bookstore = root.find('bookstore')
book = root.find('bookstore').find('book')
```
Then i run the loop through a dictionary and trying to add new book elements under the bookstore:
```
for bk in bks:
book.find('title').text = bk
bookstore.append(book)
```
The result is that book elements are added to the bookstore, however they all contain title from the last iteration of the loop. I know i am doing something wrong here, but i can't understand what. I tried:
```
book[0].append(book) and book[-1].append(book)
```
But it did not help. | 2018/03/13 | [
"https://Stackoverflow.com/questions/49259985",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4977702/"
] | You are changing the same object.
You need to actually copy the object with copy.deepcopy.
Example:
```
import xml.etree.ElementTree as et
import copy
root = et.parse('Template.xml').getroot()
bookstore = root.find('bookstore')
book = root.find('bookstore').find('book')
bks = ["book_title_1", "book_title_2", "book_title_3"]
for bk in bks:
new_book = copy.deepcopy(book)
new_book.find('title').text = bk
bookstore.append(new_book)
print et.tostring(root)
``` | I'm guessing instead of `books.append(book)` you mean `bookstore.append(book)`.
Basically here you have a structure:
```
- store
- bookstore
- book
- book infos
```
with `book = root.find('bookstore').find('book')` you are actually getting a reference to the (only) one you already have, and in the loop you keep updating its title and re-appending it to the store (so basically you are only overwriting the title). What you must do is to create every time a new [`Element`](https://docs.python.org/2/library/xml.etree.elementtree.html#element-objects) (or clone it as [Chertkov Pavel](https://stackoverflow.com/a/49261244/1029516) suggested, but you must remember to overwrite *all* the fields, or you may end up inheriting the wrong author) and append it to the bookstore:
```
for bk in bks:
new_book = et.Element('book')
# create and append title
new_title = et.Element('title', attrib={'lang':'eng'})
new_title.text = bk
new_book.append(new_title)
# add also author and any other info
# ...
# append to the bookstore
bookstore.append(new_book)
print et.tostring(root)
``` | 335 |
2,741,986 | I'm developing a small python like language using flex, byacc (for lexical and parsing) and C++, but i have a few questions regarding scope control.
just as python it uses white spaces (or tabs) for indentation, not only that but i want to implement index breaking like for instance if you type "break 2" inside a while loop that's inside another while loop it would not only break from the last one but from the first loop as well (hence the number 2 after break) and so on.
example:
```
while 1
while 1
break 2
'hello world'!! #will never reach this. "!!" outputs with a newline
end
'hello world again'!! #also will never reach this. again "!!" used for cout
end
#after break 2 it would jump right here
```
but since I don't have an "anti" tab character to check when a scope ends (like C for example i would just use the '}' char) i was wondering if this method would be the best:
I would define a global variable, like "int tabIndex" on my yacc file that i would access in my lex file using extern. then every time i find a tab character on my lex file i would increment that variable by 1. when parsing on my yacc file if i find a "break" keyword i would decrement by the amount typed after it from the tabIndex variable, and when i reach and EOF after compiling and i get a tabIndex != 0 i would output compilation error.
now the problem is, whats the best way to see if the indentation got reduced, should i read \b (backspace) chars from lex and then reduce the tabIndex variable (when the user doesn't use break)?
another method to achieve this?
also just another small question, i want every executable to have its starting point on the function called start() should i hardcode this onto my yacc file?
sorry for the long question any help is greatly appreciated. also if someone can provide an yacc file for python would be nice as a guideline (tried looking on Google and had no luck).
thanks in advance. | 2010/04/30 | [
"https://Stackoverflow.com/questions/2741986",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/208153/"
] | I am currently implementing a programming language rather similar to this (including the multilevel break oddly enough). My solution was to have the tokenizer emit indent and dedent tokens based on indentation. Eg:
```
while 1: # colons help :)
print('foo')
break 1
```
becomes:
```
["while", "1", ":",
indent,
"print", "(", "'foo'", ")",
"break", "1",
dedent]
```
It makes the tokenizer's handling of '\n' somewhat complicated though. Also, i wrote the tokenizer and parser from scratch, so i'm not sure whether this is feasible in lex and yacc.
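A runnable sketch of that idea in plain Python (an illustration, not the answerer's actual tokenizer; it is deliberately naive about tabs and inconsistent dedents):

```python
INDENT, DEDENT = "<INDENT>", "<DEDENT>"

def tokenize(src):
    levels = [0]   # stack of indentation widths currently open
    tokens = []
    for line in src.splitlines():
        if not line.strip():
            continue  # blank lines don't affect indentation
        width = len(line) - len(line.lstrip(" "))
        if width > levels[-1]:
            levels.append(width)
            tokens.append(INDENT)
        while width < levels[-1]:
            levels.pop()
            tokens.append(DEDENT)
        tokens.extend(line.split())
    while levels[-1] > 0:  # close blocks still open at end of input
        levels.pop()
        tokens.append(DEDENT)
    return tokens

print(tokenize("while 1\n    while 1\n        break 2\n    end\nend"))
```

A `break 2` then simply tells the parser to pop two of those open blocks at once.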
### Edit:
Semi-working pseudocode example:
```
level = 0
levels = []
for c = getc():
if c=='\n':
emit('\n')
n = 0
while (c=getc())==' ':
n += 1
if n > level:
emit(indent)
push(levels,n)
while n < level:
emit(dedent)
level = pop(levels)
if level < n:
error tokenize
# fall through
emit(c) #lazy example
``` | Very interesting exercise. Can't you use the `end` keyword to check when the scope ends?
On a different note, I have never seen a language that allows you to break out of several nested loops at once. There may be a good reason for that... | 336 |
53,882,241 | I've seen a few similar questions on SO regarding detecting changes to a dictionary and calling a function when the dictionary changes, such as:
* [How to trigger function on value change?](https://stackoverflow.com/questions/6190468/how-to-trigger-function-on-value-change)
* [python detect if any element in a dictionary changes](https://stackoverflow.com/questions/26189090/python-detect-if-any-element-in-a-dictionary-changes)
These examples use variations of the Observer pattern or overloading `__setitem__`, but all these examples don't detect changes on nested dictionary values.
For example, if I have:
```
my_dict = {'a': {'b': 1}}
my_dict['a']['b'] = 2
```
The assignment of `2` to the element `['a']['b']` will not be detected.
I'm wondering if there is an elegant way of detecting changes not only to the base elements of a dictionary but all the child elements of a nested dictionary as well. | 2018/12/21 | [
"https://Stackoverflow.com/questions/53882241",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/512965/"
] | Building on the answer given in [here](https://stackoverflow.com/questions/26189090/python-detect-if-any-element-in-a-dictionary-changes), just do the following:
```
class MyDict(dict):
def __setitem__(self, item, value):
print("You are changing the value of {} to {}!!".format(item, value))
super(MyDict, self).__setitem__(item, value)
```
and then:
```
my_dict = MyDict({'a': MyDict({'b': 1})})
my_dict['a']['b'] = 2
```
>
> You are changing the value of b to 2!!
>
>
>
```
my_dict['a'] = 5
```
>
> You are changing the value of a to 5!!
>
>
>
If you want to avoid manual calls to MyDict at each nesting level, one way of doing it is to fully overload the *dict* class. For example:
```
class MyDict(dict):
def __init__(self,initialDict):
for k,v in initialDict.items():
if isinstance(v,dict):
initialDict[k] = MyDict(v)
super().__init__(initialDict)
def __setitem__(self, item, value):
if isinstance(value,dict):
_value = MyDict(value)
else:
_value = value
print("You are changing the value of {} to {}!!".format(item, _value))
super().__setitem__(item, _value)
```
You can then do the following:
```
# Simple initialization using a normal dict synthax
my_dict = MyDict({'a': {'b': 1}})
# update example
my_dict['c'] = {'d':{'e':4}}
```
>
> You are changing the value of c to {'d': {'e': 4}}!!
>
>
>
```
my_dict['a']['b'] = 2
my_dict['c']['d']['e'] = 6
```
>
> You are changing the value of b to 2!!
>
>
> You are changing the value of e to 6!!
>
>
> | Complete solution borrowing from the [this](https://stackoverflow.com/questions/26189090/python-detect-if-any-element-in-a-dictionary-changes) link(the second one given by OP)
```
class MyDict(dict):
def __setitem__(self, item, value):
print("You are changing the value of {key} to {value}!!".format(key=item, value=value))
super(MyDict, self).__setitem__(item, convert_to_MyDict_nested(value))
def convert_to_MyDict_nested(d):
if not(isinstance(d, dict)):
return d
for k, v in d.items():
if isinstance(v, dict):
d[k] = convert_to_MyDict_nested(v)
return MyDict(d)
```
So that if
```
d = {'a': {'b': 1}}
```
then,
```
d = convert_to_MyDict_nested(d)
d['a']['b'] = 2 # prints You are changing the value of b to 2!!
d['a']= 5 # prints You are changing the value of a to 5!!
```
Also, edited according to comment by OP. So,
```
d["c"] = {"e" : 7} # prints You are changing the value of c to {'e': 7}!!
d["c"]["e"] = 9 # prints You are changing the value of e to 9!!
``` | 337 |
8,662,887 | I've read that subprocess should be used but all the examples i've seen on it shows that it runs only command-line commands. I want my program to run a python command along with another command. The command i want to run is to send an email to a user while a user plays a game i created. i have to have the python commands run at the same time because without doing so nothing else in the game can happen before the email is finished sending so it lags the game. Please help and any input is appreciated. | 2011/12/29 | [
"https://Stackoverflow.com/questions/8662887",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1082837/"
] | It sounds like you are looking for threading, which is a relatively deep topic, but this should help you get started: <http://www.devshed.com/c/a/Python/Basic-Threading-in-Python/> | Threading is talked about in another answer, but you can get basically what you want by using subprocess's Popen command: <http://docs.python.org/library/subprocess.html#subprocess.Popen>
What you'll basically want is this (assuming proc is initialized somewhere in the game loop):
```
#...game code here...
args = [command_name_as_string, arg_1_to_command, arg_2_to_command, etc.]
proc = subprocess.Popen(args)
```
Then, you'll go back to your game loop. At some point in the game loop, you can put in something like this:
```
if proc:
proc.poll()
    if proc.returncode is not None:
        #...the process has finished: do whatever you want with its output
        # here, which can be accessed with proc.stdout, proc.stderr, and so on...
proc = None
``` | 339 |
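For completeness, a self-contained version of that polling pattern — the child command here is just a stand-in for whatever actually sends the email:

```python
import subprocess
import sys

# stand-in child process; in the game this would be the email-sending command
proc = subprocess.Popen(
    [sys.executable, "-c", "print('email sent')"],
    stdout=subprocess.PIPE,
)

while proc.poll() is None:  # non-blocking check, e.g. once per game-loop tick
    pass                    # ...the rest of the game loop keeps running here...

print(proc.stdout.read())   # the child's output, e.g. b'email sent\n'
```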
23,434,748 | I hate recursion right now. Anyway, does anyone know how to find a square root using a recursive solution in python? Or at least break it down to simpler problems. All examples I found are linear, only using one argument in the function. My function needs squareRoot(Number, lowGuess, highGuess, Accuracy). I think the accuracy is supposed to be the base case but I just can't figure out the recursive part.
This is what I have tried:
```
L = 1
H = 3
def s(n,L,H):
if (((L + H)/2)**2) <= 0.05:
print(((L + H)/2))
else:
if (((L + H)/2)**2) > 2:
L = (L + H)/2
return s(n,L,H)
else:
H = (L + H)/2
return s(n,J,H)
``` | 2014/05/02 | [
"https://Stackoverflow.com/questions/23434748",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3597291/"
] | Remove the timeout setting:
```
rm config/initializers/timeout.rb
```
Heroku times-out all requests at 30 seconds but the process will continue running in the background.
If you want to avoid that, re-add the line above but [put rack-timeout in your Gemfile](https://github.com/heroku/rack-timeout). | I would suggest trying the following:
```
heroku labs:enable user-env-compile
```
If this fails, you could always precompile your production assets, add them to your codebase and push them to heroku yourself.
```
RAILS_ENV=production rake assets:precompile
git add .
git commit -m 'serving up my precompiled assets'
git push origin master
git push heroku master
``` | 340 |
28,551,263 | ```
import pygame
import time
pygame.mixer.init()
pygame.mixer.music.load('/home/bahara.mp3')
time.sleep(2)
pygame.mixer.music.play()
```
While compiling this code from terminal, no error is thrown, but I am unable to hear any music. But when executed line by line, the code is working fine.
Can you suggest a way to debug this? I am using Ubuntu 14.04 and python 2.7.6 | 2015/02/16 | [
"https://Stackoverflow.com/questions/28551263",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4573517/"
] | Pygame requires an active display which you have not initialized. I suggest you try installing and using `mpg123` command line tool.
Install:
```
$ sudo apt-get install mpg123
```
Program:
```
import os, time
os.system('mpg123 /home/bahara.mp3')
``` | I'm going to post my earlier comment as an answer because I think it's worth trying if you want to retain pygame's ability to control the music player.
I suspect you're getting no sound because pygame is exiting as your script ends, whereas when you run line by line in a python terminal session pygame remains active. One way you could test this is by adding a loop after you start playing the file e.g. by checking the [get\_busy](http://www.pygame.org/docs/ref/music.html#pygame.mixer.music.get_busy) status:
```
import pygame
import time
pygame.mixer.init()
pygame.mixer.music.load('/home/bahara.mp3')
time.sleep(2)
pygame.mixer.music.play()
while pygame.mixer.music.get_busy():
print "Song is playing"
time.sleep(1)
print "Song has finished"
```
Assuming this works, you'll still be able to use pygame's controls to play, pause etc.
Also, please note, as Malik and I have both pointed out, MP3 support is quite limited in pygame so you may want to try converting your files to ogg. | 342 |
40,739,504 | My project consists of a Python script (.py file) which has the following dependencies:
1) numpy
2) scipy
3) sklearn
4) opencv (cv2)
5) dlib
6) torch
and many more ...
That is, the Python script imports all of the above.
In order to run this script I need to manually install all of the dependencies by running 'pip install' or 'sudo apt-get install' commands on bash.
For dependencies like dlib, opencv and torch I need to curl the respective repositories, build them using cmake, and then install them. (Here again I need to apt-get install cmake.)
As a result I run a lot of commands just to get the setup ready to run one Python .py script.
Is there any way I can build all these dependencies, package them, and just install everything using one command?
PS: I am a beginner in Python, so please forgive me if my question seems silly.
Thanks !!
Manasi | 2016/11/22 | [
"https://Stackoverflow.com/questions/40739504",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6864242/"
] | I know that this Response may be a bit late. However, even if you can't benefit from this information now, perhaps someone else who may be looking for a similar answer will stumble onto this posting one day.
You can use [py2exe](https://pypi.org/project/py2exe/) or [pyinstaller](https://pypi.org/project/PyInstaller/) Modules, along w/ the [conda](https://pypi.org/project/conda/) Package Manager to Package and Compile an Executable. You will also need to install [pywin32](https://github.com/mhammond/pywin32/releases), if you're working on the Windows Platform.
If your project includes Non-Python Dependencies, you may also want to take a look at [NSIS](http://nsis.sourceforge.net/Download) (Nullsoft Scriptable Install System). If you plan on running Python Scripts during the Unpacking/Installation process, the NSIS Website also has [NsPython Plugins](http://nsis.sourceforge.net/NsPython_plug-in) available, for that purpose.
I hope this helps to get you started! | In case of only python dependencies, use [virtualenv](http://docs.python-guide.org/en/latest/dev/virtualenvs/).
In case of others, write a shell script which has all the installation commands. | 343 |
15,161,843 | I noticed some seemingly strange behaviour when trying to import a Python module named rmod2 in different ways. If I start Python from the directory where the *rmod2.py* file is located, it works fine. However, if I move the file to another folder where other modules are located, it doesn't work as expected anymore.
The module/package folder is */usr/lib/pymodules/python2.7* and it is also contained in *sys.path*. So I've created the folder */usr/lib/pymodules/python2.7/rmod2* and put an empty *\_\_init\_\_.py* and the *rmod2.py* in there. If I don't have the *\_\_init\_\_.py* I get:
```
>>> import rmod2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named rmod2
```
with the *\_\_init\_\_.py* file, the import seems to work, but the package is empty:
```
>>> import rmod2
>>> dir()
['__builtins__', '__doc__', '__name__', '__package__', 'rmod2']
>>> dir(rmod2)
['__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__']
>>> rmod2.__path__
['/usr/lib/pymodules/python2.7/rmod2']
>>> rmod2.__file__
'/usr/lib/pymodules/python2.7/rmod2/__init__.py'
```
Can someone tell me what's going on, and how to fix it so that the module contents actually load when importing? | 2013/03/01 | [
"https://Stackoverflow.com/questions/15161843",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2124120/"
] | APK will be generated under \bin\ folder once you run your app for the first time.
Connect your Android device to dev machine via USB cable (assuming you got Android SDK etc installed), right click on Android project and do Run as->Android app.
The app will be installed and started on Android device. APK will be generated in \bin folder.
In general, you don't need the device; the same thing can be done with the emulator.
Keep in mind, Worklight doesn't build your .ipa, .apk, or etc. It builds you compile ready resources. It will build you the folder structure and the project layers needed to build in the corresponding Native Environment (for apple, you would export to xcode, build it, and run). | 344 |
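Circling back to the rmod2 question itself: importing a package runs only its `__init__.py`; a submodule — even one with the same name as the package — is not loaded automatically. A self-contained sketch (it builds a throwaway package in a temporary directory to mirror the layout from the question):

```python
import os
import sys
import tempfile

pkg_root = tempfile.mkdtemp()
pkg_dir = os.path.join(pkg_root, 'rmod2')
os.makedirs(pkg_dir)
open(os.path.join(pkg_dir, '__init__.py'), 'w').close()  # empty, as in the question
with open(os.path.join(pkg_dir, 'rmod2.py'), 'w') as f:
    f.write('VALUE = 42\n')

sys.path.insert(0, pkg_root)
import rmod2                        # executes only __init__.py
print(hasattr(rmod2, 'rmod2'))      # False: the submodule is not loaded yet

from rmod2 import rmod2 as inner    # load the submodule explicitly
print(inner.VALUE)                  # 42
```

The usual fix is to import the submodule (`import rmod2.rmod2`) or to re-export its names from `__init__.py`.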
44,204,937 | I am trying to change the behavior of Python's `int` class, but I'm not sure if it can be done using pure Python. Here is what I tried so far:
```
import builtins
class int_new(builtins.int):
def __eq__(self, other):
return True
int = int_new
print(5 == 6) # the result is False, but I'm anticipating True
``` | 2017/05/26 | [
"https://Stackoverflow.com/questions/44204937",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7416363/"
] | You should replace last line with:
```
print(int(5) == int(6))
```
to force/ask Python to use your new class for integer numbers. | A year later I finally learned what I was wondering. When I was learning Python, I was not exposed to the idea of what a primitive type is. After learning C++ I realized that I was actually wondering if it is possible to replace primitive type with other custom types. The answer is obviously no. | 345 |
38,909,362 | I'm working with International Phonetic Alphabet (IPA) symbols in my Python program, a rather strange set of characters whose UTF-8 codes can range anywhere from 1 to 3 bytes long. [This thread](https://stackoverflow.com/questions/7291120/python-and-unicode-code-point-extraction) from several years ago basically asked the reverse question and it seems that `ord(character)` can retrieve a decimal number that I could convert to hex and thereafter to a code point, but the input for `ord()` seems to be limited to one byte. If I try `ord()` on any non-ASCII character, like `ɨ` for example, it outputs:
```
TypeError: ord() expected a character, but a string of length 2 found
```
With that no longer an option, is there any way in Python 2.7 to find the Unicode code point of a given character? (And does that character then have to be a `unicode` type?) I don't mean by just manually looking it up on a Unicode table, either. | 2016/08/12 | [
"https://Stackoverflow.com/questions/38909362",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6707199/"
] | >
> With that no longer an option, is there any way in Python 2.7 to find the Unicode code point of a given character? (And does that character then have to be a unicode type?) I don't mean by just manually looking it up on a Unicode table, either.
>
>
>
You can only find the unicode code point of a unicode object. To convert your byte string to a unicode object, decode it using `mystr.decode(encoding)`, where `encoding` is the encoding of your string. (You know the encoding of your string, right? It's probably UTF-8. :-) Then you can use `ord` according to the instructions you already found.
```
>>> ord(b"ɨ".decode('utf-8'))
616
```
As an aside, from your question it sounds like you're working with the strings in their UTF-8 encoded bytes form. That's probably going to be a pain. You should decode the strings to unicode objects as soon as you get them, and only encode them if you need to output them somewhere. | ```
>>> u'ɨ'
u'\u0268'
>>> u'i'
u'i'
>>> 'ɨ'.decode('utf-8')
u'\u0268'
``` | 346 |
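A runnable illustration of the decode-then-`ord` pattern (shown here in Python 3 syntax; in Python 2 the raw value would be a plain `str` and the decoded value a `unicode` object):

```python
raw = b'\xc9\xa8'             # the UTF-8 bytes for the IPA character ɨ
text = raw.decode('utf-8')    # bytes -> text; do this as soon as input arrives
code_point = ord(text)        # works now that it is a one-character string
print(code_point)             # 616, i.e. U+0268
print(text == u'\u0268')      # True
```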
61,902,162 | I am working with Python version 2.7 and boto3, but cannot import the boto3 library.
My Python path is
```
/Library/Frameworks/Python.framework/Versions/2.7/bin/python
```
When I look under
```
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages
```
I see boto3 installed . But I keep getting this error
```
import boto3
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto3/__init__.py", line 16, in <module>
from boto3.session import Session
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto3/session.py", line 14, in <module>
import copy
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 60, in <module>
from org.python.core import PyStringMap
File "/Users/user/git_repos/aws-boto3/org.py", line 7, in <module>
client = boto3.client('organizations')
AttributeError: 'module' object has no attribute 'client'
```
Python version: 2.7
boto3 Version: 1.13.13
botocore Version: 1.16.13
What am I missing?
Here is the code
```
import boto3
print('hello')
```
Note that from a Python command line I can import boto3; this fails only when I run `python hello.py` | 2020/05/19 | [
"https://Stackoverflow.com/questions/61902162",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3987161/"
] | You may be more familiar and comfortable with the `map` function from its common use in `Iterator`s but using `map` to work with `Result`s and `Option`s is also considered idiomatic in Rust. If you'd like to make your code more concise you can use [`map_or`](https://doc.rust-lang.org/std/result/enum.Result.html#method.map_or) like so:
```rust
let is_dir = entry.file_type().map_or(false, |t| t.is_dir());
``` | Alternatively, if you find the `map` unclear, you could use an `if` or `match` to be more explicit (and verbose):
```
let is_dir = if let Ok(file_type) = entry.file_type() {
file_type.is_dir()
} else {
false
};
```
or
```
let is_dir = match entry.file_type() {
Ok(file_type) => file_type.is_dir(),
_ => false,
};
```
Not necessarily better or worse, but an option available to you :) | 349 |
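Back to the boto3 traceback above: note that it passes through `/Users/user/git_repos/aws-boto3/org.py`. The likely culprit is that this local `org.py` shadows the `org` package that Python 2's `copy` module tries to import (a Jython-compatibility import), and the file itself calls `boto3.client` while boto3 is still half-initialized. Renaming or moving `org.py` should fix it. A self-contained sketch of the shadowing mechanism (the module name here is illustrative):

```python
import os
import sys
import tempfile

workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, 'org.py'), 'w') as f:
    f.write('SHADOWED = True\n')

# Running `python hello.py` puts the script's own directory first on sys.path:
sys.path.insert(0, workdir)

import org                   # picks up the local org.py, not any installed package
print(org.SHADOWED)          # True
```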
72,289,828 | I have a source of truth stored in the YAML file sot-inventory.yaml
```
- Hostname: NY-SW1
IP mgmt address: 10.1.1.1
OS: IOS
Operational status: Install
Role of work: Switch
Site: New_York
- Hostname: MI-SW1
IP mgmt address: 11.1.1.1
OS: NX-OS
Operational status: Install
Role of work: Switch
Site: Maiami
- Hostname: PA-SW1
IP mgmt address: 12.1.1.1
OS: Arista
Operational status: Install
Role of work: Witch
Site: Paris
```
I would like to get a YAML Ansible hosts inventory from the file above with a Python script, like this:
hosts.yaml
```
---
new_york:
hosts:
ny-sw1:
ansible_host: 10.1.1.1
os: ios
'role of work': switch
maiami:
hosts:
mi-sw1:
ansible_host: 11.1.1.1
os: nxos
'role of work': switch
paris:
hosts:
pa-sw1:
ansible_host: 12.1.1.1
os: arista
'role of work': switch
```
Could someone give advice on which Python structure or sample script may help to automate this task? | 2022/05/18 | [
"https://Stackoverflow.com/questions/72289828",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19145073/"
] | The **X.509 Client Certificate** option, which is part of the docker plugin, has recently changed its name (it used to be named **Docker Certificate Directory**; the behavior itself has not changed), therefore it is tricky to find it in the `withCredentials` [Documentation](https://www.jenkins.io/doc/pipeline/steps/credentials-binding/).
The option you are looking for is called `dockerCert` (named after the old option) and it includes two parameter inputs `variable` and `credentialsId`:
>
> *dockerCert*
>
> **variable**
> Name of an environment variable to be set during the build.
>
> Its value will be the absolute path of the directory where the {ca,cert,key}.pem files will be created.
> You probably want to call this variable DOCKER\_CERT\_PATH, which will be understood by the docker client binary.
>
> *Type: String*
>
> **credentialsId**
> Credentials of an appropriate type to be set to the variable.
>
> *Type: String*
>
>
>
Pipeline usage example:
```groovy
withCredentials([dockerCert(credentialsId: 'myClientCert', variable: 'DOCKER_CERT_PATH')]) {
// code that uses the certificate files
}
``` | On my Jenkins, it's
```
withCredentials([certificate(aliasVariable: 'ALIAS_VAR',
credentialsId: 'myClientCert',
keystoreVariable: 'KEYSTORE_VAR',
passwordVariable: 'PASSWORD_VAR')]) {
...
}
```
Hint: If you add `/pipeline-syntax/` to your Jenkins URL, it will take you to a snippet generator that will generate some snippets for you based on your input. That's what I used to generate the above snippet. | 350 |
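As for the inventory question above: once the YAML is loaded — e.g. with PyYAML's `yaml.safe_load` — the conversion is just a regrouping of dicts by site. A hedged sketch with the parsed data inlined so it runs stand-alone; writing hosts.yaml back out would then be `yaml.safe_dump(inventory, default_flow_style=False)`:

```python
# What yaml.safe_load would return for sot-inventory.yaml (inlined here
# so the sketch is self-contained).
sot = [
    {'Hostname': 'NY-SW1', 'IP mgmt address': '10.1.1.1', 'OS': 'IOS',
     'Role of work': 'Switch', 'Site': 'New_York'},
    {'Hostname': 'MI-SW1', 'IP mgmt address': '11.1.1.1', 'OS': 'NX-OS',
     'Role of work': 'Switch', 'Site': 'Maiami'},
    {'Hostname': 'PA-SW1', 'IP mgmt address': '12.1.1.1', 'OS': 'Arista',
     'Role of work': 'Switch', 'Site': 'Paris'},
]

inventory = {}
for dev in sot:
    group = dev['Site'].lower()
    host = dev['Hostname'].lower()
    inventory.setdefault(group, {'hosts': {}})['hosts'][host] = {
        'ansible_host': dev['IP mgmt address'],
        'os': dev['OS'].lower().replace('-', ''),       # 'NX-OS' -> 'nxos'
        'role of work': dev['Role of work'].lower(),
    }

print(inventory['paris']['hosts']['pa-sw1']['os'])      # arista
```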
57,218,302 | I have an Excel sheet with one column; the header is Name and the row below it says Jerry. All I want to do is append to this using Python, with the header Age and then a row below that saying e.g. 14.
How do I do this?
```
with open('simpleexcel.csv', 'r') as f_input: # Declared variable f_input to open and read the input file
input_reader = csv.reader(f_input) # this will iterate over lines from the input file
with open('Outputfile.csv', "w", newline='') as f_output: # opens a file for writing, in this case the output file
line_writer = csv.writer(f_output) #If csvfile is a file object, it should be opened with newline=''
for line in input_reader: #prints every row
line_writer.writerow(line+['14'])
```
instead i get 14 and 14 i do not know how i get another header
What i have to start with is
```
Name
Jerry
```
what i would like is:
```
Name Age
Jerry 14
```
Instead i get:
```
Name 14
Jerry 14
```
How can I amend my above code? | 2019/07/26 | [
"https://Stackoverflow.com/questions/57218302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6714667/"
] | Use `next(input_reader)` to get the header and then append the new column name and write it back to csv.
**Ex:**
```
with open('simpleexcel.csv', 'r') as f_input: # Declared variable f_input to open and read the input file
input_reader = csv.reader(f_input) # this will iterate over lines from the input file
with open('Outputfile.csv', "w", newline='') as f_output: # opens a file for writing, in this case the output file
line_writer = csv.writer(f_output) #If csvfile is a file object, it should be opened with newline=''
line_writer.writerow(next(input_reader) + ["Age"]) #Write Header
for line in input_reader: #prints every row
line_writer.writerow(line+['14'])
``` | I don't know what you are trying to accomplish here but for your sample case this can be used.
```
import csv
with open('simpleexcel.csv', 'r') as f_input:
input_reader = list(csv.reader(f_input))
input_reader[0].append('Age')
for row in input_reader[1:]:
row.append(14)
with open('Outputfile.csv', "w", newline='') as f_output:
csv.writer(f_output).writerows(input_reader)
```
**Input:**
```
Name
Jerry
```
**Output:**
```
Name,Age
Jerry,14
``` | 351 |
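For reference, the same append can be written with `csv.DictReader`/`csv.DictWriter`, which addresses columns by name instead of position (the sample input from the question is created inline so the sketch runs stand-alone):

```python
import csv

# Recreate the sample input from the question.
with open('simpleexcel.csv', 'w', newline='') as f:
    f.write('Name\nJerry\n')

with open('simpleexcel.csv', newline='') as f_input, \
     open('Outputfile.csv', 'w', newline='') as f_output:
    reader = csv.DictReader(f_input)
    writer = csv.DictWriter(f_output, fieldnames=reader.fieldnames + ['Age'])
    writer.writeheader()
    for row in reader:
        row['Age'] = 14          # hard-coded value, as in the answers above
        writer.writerow(row)

# Outputfile.csv now contains:
# Name,Age
# Jerry,14
```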
66,948,944 | I managed to find Python code that defines a LinkedList class and all the de facto methods in it, but I can't quite figure out what each line of code does. Can someone comment on it, explaining what each line does, so I can get a better understanding of linked lists in Python?
```
class Node:#what is the significance of a node
def __init__(self, data, next):#why these parameters
self.data = data
self.next = next
class LinkedList:
def __init__(self):#what is a head
self.head = None
def add_at_front(self, data):
self.head = Node(data, self.head)
def add_at_end(self, data):
if not self.head:#what is it checking
self.head = Node(data, None)
return#what is it returning
curr = self.head
while curr.next:
curr = curr.next
curr.next = Node(data, None)
def get_last_node(self):
n = self.head
while(n.next != None):
n = n.next
return n.data
def is_empty(self):#i understand this method
return self.head == None
def print_list(self):#i also undertsnad this one
n = self.head
while n != None:#what is this loop doing
print(n.data, end = " => ")
n = n.next
print()
s = LinkedList()
s.add_at_front(5)
s.add_at_end(8)
s.add_at_front(9)
s.print_list()
print(s.get_last_node())
``` | 2021/04/05 | [
"https://Stackoverflow.com/questions/66948944",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | Usually people use macros to append `__LINE__` to the declaration to allow for multiple declarations in one scope block.
Prior to C++20 this was impossible without macros. C++20 and later, with a little work, can use [`std::source_location`](https://en.cppreference.com/w/cpp/utility/source_location).
Jason Turner has a video on it in his C++ weekly video series [here](https://www.youtube.com/watch?v=TAS85xmNDEc) | I was initially not sure to grasp the advantage of that macro, apart hiding the timer instance name (and cause possible conflicts).
But I think that the intent could be to have the possibility to do this:
```
#ifdef _DEBUG
#define SCOPED_TIMER(slot) ScopedTimer __scopedTimer( slot );
#else
#define SCOPED_TIMER(slot) ;
#endif
```
That would indeed save some keystrokes; otherwise, if the timing also takes place in release builds, I would simply write the declaration directly instead of using the macro. Either way, I would get rid of the leading underscores in the object name (they are conventionally reserved for compiler implementers):
```
ScopedTimer scoped_timer( some_magic_slot_number );
``` | 352 |
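Back to the linked-list question itself: a trimmed, commented version of the same code answers the inline questions (what a node is, what `head` means, and what the bare `return` does):

```python
class Node:
    # A node stores one value ('data') plus a reference to the next node;
    # chaining nodes through 'next' is what makes the list "linked".
    def __init__(self, data, next):
        self.data = data
        self.next = next

class LinkedList:
    def __init__(self):
        self.head = None              # 'head' is the first node; None = empty list

    def add_at_front(self, data):
        self.head = Node(data, self.head)   # new node points at the old head

    def add_at_end(self, data):
        if not self.head:             # checks whether the list is empty
            self.head = Node(data, None)
            return                    # bare return: just exit the method early
        curr = self.head
        while curr.next:              # walk node by node to the last one
            curr = curr.next
        curr.next = Node(data, None)

s = LinkedList()
s.add_at_front(5)
s.add_at_end(8)
s.add_at_front(9)
# the list is now 9 -> 5 -> 8
```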
7,456,630 | I have a python REST API server running on my laptop. I am trying to write a rest client in Android (using Eclipse ADT etc) to contact it using Apache's client (org.apache.http.client) libraries.
The code is really simple, and basically does the following -
```
HttpGet httpget = new HttpGet(new URI("http://10.0.2.2:8000/user?username=tim"));
HttpResponse response = httpclient.execute(httpget);
```
However at execute, it exceptions out with a time out exception. I cannot hit the URL even from the browser in the emulator.
Details of the exception
```
org.apache.http.conn.ConnectTimeoutException: Connect to /10.0.2.2:8000 timed out
```
However, I tried using the cREST client on Chrome on my laptop, and I am able to query the REST server fine. | 2011/09/17 | [
"https://Stackoverflow.com/questions/7456630",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/856919/"
] | I was gonna post this code sample:
```
Process rsyncProc = Runtime.getRuntime().exec("rsync");
OutputStream rsyncStdIn = rsyncProc.getOutputStream();
rsyncStdIn.write ("password".getBytes ());
```
But [Vineet Reynolds](https://stackoverflow.com/users/3916/vineet-reynolds) was ahead of me.
As Vineet Reynolds pointed out using such approach will require an additional piece of code to detect when rsync requires a password. So using an external password file seems to be an easier way.
P.S.: There may be a problem related to the encoding; it can be solved by converting the string to a byte array using an appropriate encoding, as described [here](http://download.oracle.com/javase/6/docs/api/java/lang/String.html#getBytes(java.lang.String)).
P.P.S.: It seems that I can't yet comment an answer, so I had to post a new one. | You can write to the output stream of the `Process`, to pass in any inputs. However, this will require you to have knowledge of `rsync`'s behavior, for you must write the password to the outputstream only when the password prompt is detected (by reading the input stream of the `Process`).
You may however, create a non-world readable password file, and pass the location of this password file using the `--password-file` option when you launch the `rsync` process from Java. | 353 |
10,971,649 | >
> **Possible Duplicate:**
>
> [How to read specific characters from lines in a text file using python?](https://stackoverflow.com/questions/10968973/how-to-read-specific-characters-from-lines-in-a-text-file-using-python)
>
>
>
I have a .txt file with lines looking like this
>
> Water 16:-30.4674 1:-30.4759 17:-30.5373 7:-30.6892 8:-31.128
> 13:-31.393 2:-31.4036 9:-32.0214 5:-32.4387 12:-32.6972 14:-32.8345
> 4:-33.1583 3:-34.1308 15:-34.9566 11:-38.799 10:-51.471 6:-211.086
>
>
> Water 13:-33.3397 9:-33.511 12:-33.6573 17:-33.7629 5:-33.9539
> 3:-34.1326 7:-34.3554 15:-34.7484 8:-35.0615 2:-35.4279 11:-37.0607
> 16:-37.2666 1:-38.4928 14:-41.2152 4:-43.3593 10:-80.4689 6:-208.802
>
>
> Yawn 13:-36.4616 9:-37.1025 15:-37.2519 17:-38.8885 8:-39.1585
> 14:-39.8553 2:-40.2131 12:-41.2615 1:-41.6317 7:-41.8205 3:-41.9883
> 11:-43.8492 16:-46.8158 5:-49.8107 4:-52.5595 10:-70.4841 6:-220.906
>
>
>
What I need to do is store the numbers that come before '`:`' in an array.
What is the iterative way, or the easiest way, to do it?
```
f=open('path','r')
lines=f.readlines()
for line in lines:
...
```
and from here on I do not know the splitting and storing procedure... please help. | 2012/06/10 | [
"https://Stackoverflow.com/questions/10971649",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1447337/"
] | If the format is always the same you could do this for each line:
```
items = line.split()[1:]
items = [item.split(':')[0] for item in items]
```
And then if you want them as integers:
```
items = map(int, items)
```
As for storing them, create a list before iterating over each line `rows = []` and then you can add the items like this:
```
rows.append(items)
```
So all together it would look something like this:
```
f = open('path','r')
lines = f.readlines()
rows = []
for line in lines:
items = line.split()[1:]
items = [item.split(':')[0] for item in items]
items = map(int, items)
rows.append(items)
f.close()
print rows
``` | Use split method and append it to your array:
```
myArray=line.split(':-')
``` | 358 |
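A stand-alone version of the first answer's approach, with two sample lines from the question inlined (note that the leading word on each line has to be skipped):

```python
lines = [
    "Water 16:-30.4674 1:-30.4759 17:-30.5373",
    "Yawn 13:-36.4616 9:-37.1025 15:-37.2519",
]

rows = []
for line in lines:
    # drop the first token, then keep what comes before each ':'
    nums = [int(tok.split(':')[0]) for tok in line.split()[1:]]
    rows.append(nums)

print(rows)   # [[16, 1, 17], [13, 9, 15]]
```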
14,574,595 | What I'm trying to do is make a Gaussian function graph, then pick random numbers anywhere in a space, say y=[0,1] (because it's normalized) and x=[0,200]. Then, I want it to ignore all values above the curve and only keep the values underneath it.
```
import numpy
import random
import math
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
from math import sqrt
from numpy import zeros
from numpy import numarray
variance = input("Input variance of the star:")
mean = input("Input mean of the star:")
x=numpy.linspace(0,200,1000)
sigma = sqrt(variance)
z = max(mlab.normpdf(x,mean,sigma))
foo = (mlab.normpdf(x,mean,sigma))/z
plt.plot(x,foo)
zing = random.random()
random = random.uniform(0,200)
import random
def method2(size):
ret = set()
while len(ret) < size:
ret.add((random.random(), random.uniform(0,200)))
return ret
size = input("Input number of simulations:")
foos = set(foo)
xx = set(x)
method = method2(size)
def undercurve(xx,foos,method):
Upper = numpy.where(foos<(method))
Lower = numpy.where(foos[Upper]>(method[Upper]))
return (xx[Upper])[Lower],(foos[Upper])[Lower]
```
When I try to print undercurve, I get an error:
```
TypeError: 'set' object has no attribute '__getitem__'
```
and I have no idea how to fix it.
As you can all see, I'm quite new at python and programming in general, but any help is appreciated and if there are any questions I'll do my best to answer them. | 2013/01/29 | [
"https://Stackoverflow.com/questions/14574595",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1884319/"
] | The immediate cause of the error you're seeing is presumably this line (which should be identified by the full traceback -- it's generally quite helpful to post that):
```
Lower = numpy.where(foos[Upper]>(method[Upper]))
```
because the confusingly-named variable `method` is actually a `set`, as returned by your function `method2`. Actually, on second thought, `foos` is also a `set`, so it's probably failing on that first. Sets don't support indexing with something like `the_set[index]`; that's what the complaint about `__getitem__` means.
I'm not entirely sure what all the parts of your code are intended to do; variable names like "foos" don't really help like that. So here's how I might do what you're trying to do:
```
# generate sample points
num_pts = 500
sample_xs = np.random.uniform(0, 200, size=num_pts)
sample_ys = np.random.uniform(0, 1, size=num_pts)
# define distribution
mean = 50
sigma = 10
# figure out "normalized" pdf vals at sample points
max_pdf = mlab.normpdf(mean, mean, sigma)
sample_pdf_vals = mlab.normpdf(sample_xs, mean, sigma) / max_pdf
# which ones are under the curve?
under_curve = sample_ys < sample_pdf_vals
# get pdf vals to plot
x = np.linspace(0, 200, 1000)
pdf_vals = mlab.normpdf(x, mean, sigma) / max_pdf
# plot the samples and the curve
colors = np.array(['cyan' if b else 'red' for b in under_curve])
scatter(sample_xs, sample_ys, c=colors)
plot(x, pdf_vals)
```
![](https://i.imgur.com/DNDz6Kk.png)
Of course, you should also realize that if you only want the points under the curve, this is equivalent to (but much less efficient than) just sampling from the normal distribution and then randomly selecting a `y` for each sample uniformly from 0 to the pdf value there:
```
sample_xs = np.random.normal(mean, sigma, size=num_pts)
max_pdf = mlab.normpdf(mean, mean, sigma)
sample_pdf_vals = mlab.normpdf(sample_xs, mean, sigma) / max_pdf
sample_ys = np.array([np.random.uniform(0, pdf_val) for pdf_val in sample_pdf_vals])
``` | It's hard to read your code.. Anyway, you can't access a set using `[]`, that is, `foos[Upper]`, `method[Upper]`, etc are all illegal. I don't see why you convert `foo`, `x` into set. In addition, for a point produced by `method2`, say (x0, y0), it is very likely that x0 is not present in `x`.
I'm not familiar with numpy, but this is what I'll do for the purpose you specified:
```
import scipy.stats
from random import random, uniform

def undercurve(size):
    result = []
    for i in xrange(size):
        x = uniform(0, 200)   # x range from the question
        y = random()
        if y < scipy.stats.norm(0, 200).pdf(x): # here's the 'undercurve' test
            result.append((x, y))
    return result
``` | 361 |
64,155,517 | I am trying to create a CSS grid, but it gets scattered when using width.
I want 3 posts on a row. I believe the problem might be with my border box. Only the desktop view is affected, mobile view looks perfectly normal.
I am using `width: 33.333%` to achieve the grid.
What is wrong with the CSS code?
```css
/*-----------------------------------------------------------------------*/
/* 1. Common Style */
/*-----------------------------------------------------------------------*/
* {
box-sizing: border-box;
margin: 0;
padding: 0;
}
body {
font: 400 14px / 1.8 "Whitney SSm A", "Whitney SSm B", "Helvetica Neue", "Helvetica", "Arial", sans-serif;
background: #FFFFFF;
color: #000;
padding-bottom: 10%;
}
@media screen and (min-width: 801px) and (max-width: 2000px) {
a {
color: #FFFFFF;
text-decoration: none;
}
.list-item-header {
letter-spacing: 0.15em;
text-transform: uppercase;
font-size: 11px;
font-weight: 400;
margin: 0 0 4px;
color: #555A72;
}
.list-item {
box-sizing: border-box;
padding-top: 50px;
padding-bottom: 50px;
text-align: left;
vertical-align: top;
position: relative;
z-index: 1;
width: 33.333%;
}
.list-item-body {
font-size: 14px;
font-weight: 300;
margin: 10px 0 0;
color: rgba(85, 90, 114, 0.9);
overflow: hidden;
margin-top: 10px;
margin-bottom: 20px;
}
.list-item-link {
font-weight: 300;
font-size: 13px;
display: inline-block;
position: relative;
margin-top: 5px;
color: blue;
}
.greed {
display: block;
padding: 60px;
}
li {
list-style: none;
width: 33.33%;
float: left;
padding: 30px;
}
.cont {
margin-left: auto;
margin-right: auto;
padding: auto;
}
.contt {
box-sizing: border-box;
max-width: 1064px;
}
l::after {
content: "";
position: absolute;
top: 0;
left: 35px;
right: 0;
border-top: solid 1px rgba(35, 31, 32, 0.25);
}
}
@media screen and (min-width: 400px) and (max-width: 800px) {
a {
color: #FFFFFF;
text-decoration: none;
}
.list-item-header {
letter-spacing: 0.15em;
text-transform: uppercase;
font-size: 11px;
font-weight: 400;
margin: 0 0 4px;
color: #555A72;
}
.list-item {
box-sizing: border-box;
padding-top: 50px;
padding-bottom: 50px;
text-align: left;
vertical-align: top;
position: relative;
z-index: 1;
width: 33.333%;
}
.list-item-body {
font-size: 14px;
font-weight: 300;
margin: 10px 0 0;
color: rgba(85, 90, 114, 0.9);
overflow: hidden;
margin-top: 10px;
margin-bottom: 20px;
}
.list-item-link {
font-weight: 300;
font-size: 13px;
display: inline-block;
position: relative;
margin-top: 5px;
color: blue;
}
.greed {
display: block;
padding: 60px;
}
li {
list-style: none;
float: left;
padding: 10px;
}
.cont {
margin-left: auto;
margin-right: auto;
padding: auto;
}
.contt {
box-sizing: border-box;
max-width: 1064px;
}
l::after {
content: "";
position: absolute;
top: 0;
left: 35px;
right: 0;
border-top: solid 1px rgba(35, 31, 32, 0.25);
}
}
```
```html
<!doctype html>
<html xmlns="http://www.w3.org/1999/html">
<head>
<!-- Required meta tags -->
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="icon" href="img/favicon.png" type="image/png">
<title>Blog posts</title>
<!-- Bootstrap CSS -->
<link rel="stylesheet" href="static/css/blog.css">
</head>
<body>
<div class="cont contt">
<ul class="greed">
<li>
<a href="/blog/django-database-relationship">
<h3 class="list-item-header">Django database relationship</h3>
<p class="list-item-body">
ForeignKey is only one-to-one if you specify ForeignKey(Dude, unique=True), so with the above code you will get a Dude with multiple PhoneNumbers. <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</li>
</ul>
</div>
<div class="cont contt">
<ul class="greed">
<li>
<a href="/blog/database-models-for-tables-reservation-and-customer">
<h3 class="list-item-header">Database Models for tables, reservation and customer</h3>
<p class="list-item-body">
Class Customer(models.Model): email = models.EmailField() # And whatever other custom fields here; maybe make a ForeignKey link to User? <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</li>
</ul>
</div>
<div class="cont contt">
<ul class="greed">
<li>
<a href="/blog/python-package-basics">
<h3 class="list-item-header">Python package basics</h3>
<p class="list-item-body">
https://dzone.com/articles/executable-package-pip-install <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</li>
</ul>
</div>
<div class="cont contt">
<ul class="greed">
<li>
<a href="/blog/flask-sitemap">
<h3 class="list-item-header">Flask sitemap</h3>
<p class="list-item-body">
Sitemap route @app . route ( '/sitemap.xml' , methods =[ 'GET' ]) def sitemap (): try : """Generate sitemap.xml. <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</li>
</ul>
</div>
<div class="cont contt">
<ul class="greed">
<li>
<a href="/blog/git-commands">
<h3 class="list-item-header">Git commands</h3>
<p class="list-item-body">
The following are git commands thm <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</li>
</ul>
</div>
<div class="cont contt">
<ul class="greed">
<li>
<a href="/blog/creating-a-cicd-pipeline-with-jenkins-and-also-eks-clusters-through-aws-cloudformation-and-deploys-a">
<h3 class="list-item-header">Creating a CI/CD Pipeline with Jenkins and also EKS Clusters through AWS CloudFormation and deploys a Nginx image</h3>
<p class="list-item-body">
GitHub repo notes. Creating a CI/CD Pipeline with Jenkins and also EKS Clusters through AWS CloudFormation and deploys a Nginx image. <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</li>
</ul>
</div>
<div class="cont contt">
<ul class="greed">
<li>
<a href="/blog/navigation-menu-css-from-codepen">
<h3 class="list-item-header">Navigation menu CSS from codepen</h3>
<p class="list-item-body">
Navigation menu https://codepen.io/kirstenhumphreys/pen/vgaKmG Nav Bar https://codepen.io/MilanMilosev/pen/GJbGJq <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</li>
</ul>
</div>
<div class="cont contt">
<ul class="greed">
<li>
<a href="/blog/creating-virtual-environments-for-python">
<h3 class="list-item-header">Creating virtual environments for python</h3>
<p class="list-item-body">
Steps in creating a virtual environment for a python project <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</li>
</ul>
</div>
<div class="cont contt">
<ul class="greed">
<li>
<a href="/blog/push-empty-git-to-check-ci">
<h3 class="list-item-header">Push empty git to check ci</h3>
<p class="list-item-body">
Sometimes, you need to push a commit to Git purely to check if some CI thing is working. The allow-empty flag lets you push a <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</li>
</ul>
</div>
<div class="cont contt">
<ul class="greed">
<li>
<a href="/blog/webscraping-with-python-for-data-collection">
<h3 class="list-item-header">Webscraping with Python for data collection</h3>
<p class="list-item-body">
What is webscraping <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</li>
</ul>
</div>
</body>
</html>
```
[![enter image description here](https://i.stack.imgur.com/Zna8r.png)](https://i.stack.imgur.com/Zna8r.png) | 2020/10/01 | [
"https://Stackoverflow.com/questions/64155517",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6499765/"
] | You are better off using `flex-box` or `grid` for this. There are a few things with your code that needed to be changed:
1. You have `float` and `width` set on your inner `li` item. That doesn't work when it's a child element, so, the `li` was floating in relation to its parent `ul`.
2. You can move the padding on the `ul.greed` to the `.cont` element instead.
3. I wrapped all your code in a `wrapper` element that has `display: flex` so the `.cont` elements become flex children.
4. I adjusted the media query to make it a bit more readable.
5. You could simplify your HTML a ton as well, but I left it since it may be auto-generated.
```css
/*-----------------------------------------------------------------------*/
/* 1. Common Style */
/*-----------------------------------------------------------------------*/
* {
box-sizing: border-box;
margin: 0;
padding: 0;
}
body {
font: 400 14px / 1.8 "Whitney SSm A", "Whitney SSm B", "Helvetica Neue", "Helvetica", "Arial", sans-serif;
background: #FFFFFF;
color: #000;
padding-bottom: 10%;
}
.wrapper {
display: flex;
flex-wrap: wrap;
}
.cont {
flex: 0 0 100%;
padding: 20px;
}
a {
color: #FFFFFF;
text-decoration: none;
}
.list-item-header {
letter-spacing: 0.15em;
text-transform: uppercase;
font-size: 11px;
font-weight: 400;
margin: 0 0 4px;
color: #555A72;
}
.list-item-body {
font-size: 14px;
font-weight: 300;
margin: 10px 0 0;
color: rgba(85, 90, 114, 0.9);
overflow: hidden;
margin-top: 10px;
margin-bottom: 20px;
}
.list-item-link {
font-weight: 300;
font-size: 13px;
display: inline-block;
position: relative;
margin-top: 5px;
color: blue;
}
.greed {
display: block;
}
li {
padding: 30px;
list-style: none;
position: relative;
}
li::after {
content: "";
position: absolute;
top: 0;
left: 35px;
right: 0;
border-top: solid 1px rgba(35, 31, 32, 0.25);
}
@media (min-width: 801px) {
.cont {
flex: 0 0 33.33333%;
}
}
```
```html
<!doctype html>
<html xmlns="http://www.w3.org/1999/html">
<head>
<!-- Required meta tags -->
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="icon" href="img/favicon.png" type="image/png">
<title>Blog posts</title>
<!-- Bootstrap CSS -->
<link rel="stylesheet" href="static/css/blog.css">
</head>
<body>
<div class="wrapper">
<div class="cont contt">
<ul class="greed">
<li>
<a href="/blog/django-database-relationship">
<h3 class="list-item-header">Django database relationship</h3>
<p class="list-item-body">
ForeignKey is only one-to-one if you specify ForeignKey(Dude, unique=True), so with the above code you will get a Dude with multiple PhoneNumbers. <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</li>
</ul>
</div>
<div class="cont contt">
<ul class="greed">
<li>
<a href="/blog/database-models-for-tables-reservation-and-customer">
<h3 class="list-item-header">Database Models for tables, reservation and customer</h3>
<p class="list-item-body">
Class Customer(models.Model): email = models.EmailField() # And whatever other custom fields here; maybe make a ForeignKey link to User? <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</li>
</ul>
</div>
<div class="cont contt">
<ul class="greed">
<li>
<a href="/blog/python-package-basics">
<h3 class="list-item-header">Python package basics</h3>
<p class="list-item-body">
https://dzone.com/articles/executable-package-pip-install <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</li>
</ul>
</div>
<div class="cont contt">
<ul class="greed">
<li>
<a href="/blog/flask-sitemap">
<h3 class="list-item-header">Flask sitemap</h3>
<p class="list-item-body">
Sitemap route @app . route ( '/sitemap.xml' , methods =[ 'GET' ]) def sitemap (): try : """Generate sitemap.xml. <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</li>
</ul>
</div>
<div class="cont contt">
<ul class="greed">
<li>
<a href="/blog/git-commands">
<h3 class="list-item-header">Git commands</h3>
<p class="list-item-body">
The following are git commands thm <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</li>
</ul>
</div>
<div class="cont contt">
<ul class="greed">
<li>
<a href="/blog/creating-a-cicd-pipeline-with-jenkins-and-also-eks-clusters-through-aws-cloudformation-and-deploys-a">
<h3 class="list-item-header">Creating a CI/CD Pipeline with Jenkins and also EKS Clusters through AWS CloudFormation and deploys a Nginx image</h3>
<p class="list-item-body">
GitHub repo notes. Creating a CI/CD Pipeline with Jenkins and also EKS Clusters through AWS CloudFormation and deploys a Nginx image. <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</li>
</ul>
</div>
<div class="cont contt">
<ul class="greed">
<li>
<a href="/blog/navigation-menu-css-from-codepen">
<h3 class="list-item-header">Navigation menu CSS from codepen</h3>
<p class="list-item-body">
Navigation menu https://codepen.io/kirstenhumphreys/pen/vgaKmG Nav Bar https://codepen.io/MilanMilosev/pen/GJbGJq <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</li>
</ul>
</div>
<div class="cont contt">
<ul class="greed">
<li>
<a href="/blog/creating-virtual-environments-for-python">
<h3 class="list-item-header">Creating virtual environments for python</h3>
<p class="list-item-body">
Steps in creating a virtual environment for a python project <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</li>
</ul>
</div>
<div class="cont contt">
<ul class="greed">
<li>
<a href="/blog/push-empty-git-to-check-ci">
<h3 class="list-item-header">Push empty git to check ci</h3>
<p class="list-item-body">
Sometimes, you need to push a commit to Git purely to check if some CI thing is working. The allow-empty flag lets you push a <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</li>
</ul>
</div>
<div class="cont contt">
<ul class="greed">
<li>
<a href="/blog/webscraping-with-python-for-data-collection">
<h3 class="list-item-header">Webscraping with Python for data collection</h3>
<p class="list-item-body">
What is webscraping <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</li>
</ul>
</div>
</div>
</body>
</html>
```
Simplified HTML
---------------
```css
/*-----------------------------------------------------------------------*/
/* 1. Common Style */
/*-----------------------------------------------------------------------*/
* {
box-sizing: border-box;
margin: 0;
padding: 0;
}
body {
font: 400 14px / 1.8 "Whitney SSm A", "Whitney SSm B", "Helvetica Neue", "Helvetica", "Arial", sans-serif;
background: #FFFFFF;
color: #000;
padding-bottom: 10%;
}
.wrapper {
display: flex;
flex-wrap: wrap;
}
.cont {
flex: 0 0 100%;
padding: 20px;
position: relative;
}
a {
color: #FFFFFF;
text-decoration: none;
}
.list-item-header {
letter-spacing: 0.15em;
text-transform: uppercase;
font-size: 11px;
font-weight: 400;
margin: 0 0 4px;
color: #555A72;
}
.list-item-body {
font-size: 14px;
font-weight: 300;
margin: 10px 0 0;
color: rgba(85, 90, 114, 0.9);
overflow: hidden;
margin-top: 10px;
margin-bottom: 20px;
}
.list-item-link {
font-weight: 300;
font-size: 13px;
display: inline-block;
position: relative;
margin-top: 5px;
color: blue;
}
.cont::after {
content: "";
position: absolute;
top: 0;
left: 35px;
right: 0;
border-top: solid 1px rgba(35, 31, 32, 0.25);
}
@media (min-width: 801px) {
.cont {
flex: 0 0 33.33333%;
}
}
```
```html
<!doctype html>
<html xmlns="http://www.w3.org/1999/html">
<head>
<!-- Required meta tags -->
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="icon" href="img/favicon.png" type="image/png">
<title>Blog posts</title>
<!-- Bootstrap CSS -->
<link rel="stylesheet" href="static/css/blog.css">
</head>
<body>
<div class="wrapper">
<div class="cont contt">
<a href="/blog/django-database-relationship">
<h3 class="list-item-header">Django database relationship</h3>
<p class="list-item-body">
ForeignKey is only one-to-one if you specify ForeignKey(Dude, unique=True), so with the above code you will get a Dude with multiple PhoneNumbers. <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</div>
<div class="cont contt">
<a href="/blog/django-database-relationship">
<h3 class="list-item-header">Django database relationship</h3>
<p class="list-item-body">
ForeignKey is only one-to-one if you specify ForeignKey(Dude, unique=True), so with the above code you will get a Dude with multiple PhoneNumbers. <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</div>
<div class="cont contt">
<a href="/blog/django-database-relationship">
<h3 class="list-item-header">Django database relationship</h3>
<p class="list-item-body">
ForeignKey is only one-to-one if you specify ForeignKey(Dude, unique=True), so with the above code you will get a Dude with multiple PhoneNumbers. <span class="list-item-link">
Read the post.
</span>
</p>
</a>
</div>
</div>
</body>
</html>
``` | I don't get why you're using that HTML structure; it looks unnecessarily complicated!
Your problem is that the divs (`.cont.contt`) have a height and interact with each other, misaligning everything else; I don't think this is a correct approach.
A partial solution might be to force the height to 0, but that is not clean at all.
I'd suggest revisiting the structure, maybe without using lists where not necessary, or going directly with the [CSS Grid Layout](https://developer.mozilla.org/en-US/docs/Web/CSS/grid).
If I missed something let me know!
Have a nice day | 362 |
18,519,217 | When creating a string out of many substrings what is more pythonic - + or %?
```
big_string = string1 + string2 + ... + stringN
big_string = ''
for i in range(n):
    big_string += str(i)
```
or
```
big_string = "%s%s...%s" % (string1, string2, ... , stringN)
big_string = ''
for i in range(n):
    big_string = "%s%s" % (big_string, str(i))
``` | 2013/08/29 | [
"https://Stackoverflow.com/questions/18519217",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1445070/"
] | ```
big_string = ''.join([string1, string2, ..., stringN])
``` | `big_string = reduce(lambda x, y: x + y, [string1, string2, ..., stringN], "")` | 363 |
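As an illustration of why `''.join` is usually preferred for the loop case in the question, a small runnable sketch (hypothetical, not from either answer):

```python
# Runnable sketch: ''.join for the loop case from the question,
# compared against repeated +=, which both give the same result.
parts = [str(i) for i in range(5)]
joined = ''.join(parts)

concatenated = ''
for i in range(5):
    concatenated += str(i)

print(joined)
```

`''.join` builds the result in a single pass, while repeated `+=` may copy the growing string on each iteration.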
37,002,134 | I have just started to use TensorFlow and I have done "hello world" with my test.py file. Moving on to the next step, I started to do the tutorial (<https://www.tensorflow.org/versions/master/tutorials/mnist/beginners/index.html>).
This is what I have done
>
> $ git clone <https://github.com/tensorflow/tensorflow>
>
>
>
and run the file of "fully\_connected\_feed.py "
>
> python tensorflow/examples/tutorials/mnist/fully\_connected\_feed.py
>
>
>
I got the error like
>
> Traceback (most recent call last):
>
>
> File "tensorflow/examples/tutorials/mnist/fully\_connected\_feed.py",
>
>
> line 27, in
>
>
> from tensorflow.examples.tutorials.mnist import input\_data
>
>
> ImportError: No module named examples.tutorials.mnist
>
>
>
so I changed code from
>
> from tensorflow.examples.tutorials.mnist import input\_data
>
>
> from tensorflow.examples.tutorials.mnist import mnist
>
>
>
to
>
> import input\_data
>
>
> import mnist
>
>
>
but I got error again.
>
> Traceback (most recent call last):
>
>
> File "tensorflow/examples/tutorials/mnist/fully\_connected\_feed.py", line 27, in
>
>
> import input\_data
> File
>
>
> "/Users/naggi/Documents/ML/tensorflow/tensorflow/examples/tutorials/mnist/input\_data.py", line 29, in
>
>
> from tensorflow.contrib.learn.python.learn.datasets.mnist import read\_data\_sets
>
>
> ImportError: No module named contrib.learn.python.learn.datasets.mnist
>
>
>
Could someone help me?
Thanks | 2016/05/03 | [
"https://Stackoverflow.com/questions/37002134",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6285272/"
] | You were pretty close with the `drop` function, but I suggest you take a look at its documentation. It drops the given number of elements from the beginning of the list.
What you actually want is `take` the first one and `takeRight` the last one:
```
mp.mapValues(list => list.take(1) ++ list.takeRight(1))
```
This is pretty ugly, however. If you are certain that your values are always a 3-element list, I suggest pattern matching just as I showed with tuples:
```
mp.mapValues {
  case List(first, _, third) => List(first, third)
}
``` | It looks like your map has lists of tuples, not lists of strings. Something like this should work:
```
m.mapValues { case List((a,b,c)) => (a,c) }
```
or
```
m.mapValues { case List((a,b,c)) => List((a,c)) }
```
or
```
m.mapValues { case List((a,b,c)) => List(a,c) }
```
... depending on what type of output you want to end up with. | 364 |
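For comparison with the Scala `mapValues` snippets above, a hypothetical Python sketch of the same first-and-last extraction (illustrative data, not from either answer):

```python
# Hypothetical map of 3-element tuples, shaped like the data discussed above
m = {'k': [('a', 'b', 'c')]}

# Keep only the first and third elements of each tuple
result = {key: [(a, c) for (a, b, c) in value] for key, value in m.items()}
print(result)
```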
10,713,966 | I am trying to migrate over to Python from Matlab and can't figure out how to get interactive(?) plotting working within the Spyder IDE. My test code is shown below. With the .ion() nothing happens, I get a quick flash of a figure being drawn then the window instantly closes and spits out my Hello. Without the .ion() the figure is drawn correctly but the script hangs and doesn't spit out Hello until I manually close the figure window. I would like the script to run like a matlab script would and plot the various figures I ask it to while chugging along any computations and putting the output on the terminal(?) window.
I tried typing out the lines one at a time in ipython and it seemed to work but I would much rather work in a script sheet format where I can go back and forth between lines tweaking code.
I'm working in windows 7 if that helps. I installed python(x,y) and am launching spyder from there (spyder version 2.1.9). I have seen some similar-ish questions asked but I wasn't able to solve this problem. It seemed to me that someone said ipythons latest version is not compatible with spyder but then I saw another post that said interactive plotting should be supported regardless. Thanks for the help! If anyone has alternative environments I could use to mimick matlab behaviour that would work too, I'm really new to Python.
```
import matplotlib.pylab as plt
plt.ion()
plt.plot([1,2,3])
plt.show()
plt.ylabel('This is an axis')
print ("Hello")
``` | 2012/05/23 | [
"https://Stackoverflow.com/questions/10713966",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1411736/"
] | The run configuration should be set to *Execute in current Python or IPython interpreter* which by default allows for interactive plotting. If the interpreter is set to *Execute in a new dedicated Python interpreter* then *Interact with the Python interpreter after execution* must be selected. | In my case, these were the default settings in Spyder; however, it still didn't show the plot until I typed **%matplotlib inline**.
Not sure if this is helpful but thought of sharing here. | 365 |
25,858,331 | Is there a meaningful difference between
```
if self.pk is not None:
```
and
```
if self.pk:
```
when checking a model field in python django?
Other languages have all kinds of differing 'correct' ways to check for a variable being null, empty, nonexistant, whatever.
a) I don't know how python handles the check
b) I don't know if this is important and / or meaningful in the context of django model fields | 2014/09/15 | [
"https://Stackoverflow.com/questions/25858331",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3539965/"
] | The first check is checking that the primary key is not [`None`](https://docs.python.org/3/library/constants.html#None). The second is checking that the primary key is [truthy](https://docs.python.org/3/library/stdtypes.html#truth-value-testing). So yes, there is a difference. | `Pk` is a [property](https://github.com/django/django/blob/1.7/django/db/models/base.py#L515) that usually resolves to `id`. There is no magic other than that.
So the only difference between the two statements is how Python treats them. The first one explicitely tests if `pk` is None, whereas the second one will pass for any ["falsy" value](https://docs.python.org/release/2.5.2/lib/truth.html) of `pk`.
Note that `pk` shouldn't usually evaluate to `False` unless the model instance is not saved to the database, so in practice the two statements should be pretty much equivalent. | 366 |
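A concrete sketch of the difference (illustrative values, not from either answer): only `None` fails the `is not None` test, while `0` and `''` are falsy yet still pass it.

```python
# `if pk:` rejects every falsy value; `if pk is not None:` only rejects None.
for pk in (None, 0, '', 4):
    print(repr(pk), pk is not None, bool(pk))
```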
4,290,399 | In other languages
```
for(i=0; i<10; i++){
if(...){
i = 4;
}
}
```
the loop will go up,
but in python,it doesn't work
```
for i in range(1, 11):
if ...:
i = 4
```
So can I go up in a loop with 'for'? | 2010/11/27 | [
"https://Stackoverflow.com/questions/4290399",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/522057/"
] | The problem here is that `range(1, 11)` returns a list and `for...in` iterates over the list elements hence changing `i` to something else doesn't work as expected. Using a `while` loop should solve your problem. | Just some food for thought.
The for loop loops over an iterable. Create your own iterable that you can move forward yourself.
```
iterator = iter(range(11))
for i in iterator:
    print 'for i = ', i
    try:
        print 'next()', iterator.next()
    except StopIteration:
        continue
>>> foo()
for i = 0
next() 1
for i = 2
next() 3
for i = 4
next() 5
for i = 6
next() 7
for i = 8
next() 9
for i = 10
next()
>>>
```
xrange() is an iterating version of range()
iterable = xrange(11) would behave as an iterator.
itertools provides nice functions like dropwhile <http://docs.python.org/library/itertools.html#itertools.dropwhile>
This can advance your iterator for you.
```
from itertools import dropwhile
iterator = iter(range(11))
for i in iterator:
    if i == 3:
        i = dropwhile(lambda x: x<8, iterator).next()
    print 'i = ', i
>>> foo()
i = 0
i = 1
i = 2
i = 8
i = 9
i = 10
>>>
```
dropwhile could be called outside your loop to create the iterator over your iterator.
Then you can simply call next() on it. Since the for loop and the dropwhile are both calling next() on the same iterator you have some control over it.
You could also implement your own iterator that uses send() to allow you to manipulate the iterator.
<http://onlamp.com/pub/a/python/2006/10/26/python-25.html?page=2> | 367 |
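A runnable sketch of the `while`-loop workaround suggested in the first answer (illustrative, Python 3; the jump condition is made concrete):

```python
# Unlike `for i in range(...)`, reassigning i in a while loop really
# moves the loop forward, as the C-style `i = 4;` would.
visited = []
i = 0
while i < 10:
    visited.append(i)
    if i == 2:
        i = 4  # jump ahead
    i += 1
print(visited)
```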
53,158,284 | While trying to input my API key python is giving me a line too long code
```
E501: line too long
```
What I have is
```
notifications_client = NotificationsAPIClient(aaaaaaa_aaaaaaaa-11aa1a1a-aa11-111a-aaaa-11111aaa1a1a-aa11a1a1-0aa1-11a1-1111-1aa111a0a111)
```
For obvious reasons I have changed the API key to have only a's 1's and 0's but how can I break up this line of code so I no longer get this error? | 2018/11/05 | [
"https://Stackoverflow.com/questions/53158284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5348714/"
] | E501 is a linter error, not a Python interpreter error. Your code, in theory, should work just fine. If you want to prevent this error, simply break the value up (assuming it's a string ... you don't make that clear):
```
my_key = ('aaaaaaa_aaaaaaaa-11aa1a1a-aa11-111a-aaaa-'
'11111aaa1a1a-aa11a1a1-0aa1-11a1-1111-'
'1aa111a0a111')
notifications_client = NotificationsAPIClient(my_key)
``` | Use \ to break your line, like:
```
notifications_client = NotificationsAPIClient(aaaaaaa_aaaaaaaa-11aa1a1a-\
    aa11-111a-aaaa-11111aaa1a1a-\
    aa11a1a1-0aa1-11a1-1111-1aa111a0a111)
``` | 377
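Both answers rely on Python joining the pieces back into one value; the implicit concatenation of adjacent string literals can be checked directly (hypothetical short key, not the real one):

```python
# Adjacent string literals inside parentheses are concatenated
# into a single literal at compile time.
key = ('part-one-'
       'part-two-'
       'part-three')
print(key)
```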
43,519,906 | I wrote a program that does the job, however it is not very pythonic, not pythonic and definitly not beautiful.
The program must concatenate two numpy arrays in the following manner:
As an example list0 and list1 are the input
```
list0 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
list1 = [ 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
```
The output should look like the following:
```
[0, 2, 1, 3, 2, 4, 3, 5, 4, 6, 5, 7, 6, 8, 7, 9, 8, 10, 9, 11]
```
So basically put in the number of `list0` at every even point of the output, and put in the number of `list1` at every uneven point.
I am fairly new to python so I wrote it in a C-Style:
```
import numpy as np
list0 = np.arange(10)
list1 = np.arange(2,12)
new = []
cnt0 = 0
cnt1 = 0
for i in range(0,2*len(list0)):
    if i % 2 == 0:
        new.append(list0[cnt0])
        cnt0 = cnt0 +1;
    else:
        new.append(list1[cnt1])
        cnt1 = cnt1 +1;
```
Now I want to know if there is a more fancy, pythonic, faster way to achieve the same goal? | 2017/04/20 | [
"https://Stackoverflow.com/questions/43519906",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6786718/"
] | Being NumPy tagged, here's one with it -
```
np.vstack((list0, list1)).ravel('F').tolist()
```
[`ravel()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel.html) here flattens in `fortran` order with the `F` specifier.
A shorter version with [`np.c_`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.c_.html) that basically stacks the elements in columns -
```
np.c_[list0,list1].ravel().tolist()
```
`ravel()` here flattens in the default `C` order, so skipped here.
If the final output is to be kept as an array, skip the `.tolist()` from the approaches. | Nice one liner with itertools
```
from itertools import chain
list(chain(*zip(list0, list1)))
[0, 2, 1, 3, 2, 4, 3, 5, 4, 6, 5, 7, 6, 8, 7, 9, 8, 10, 9, 11]
``` | 382 |
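A plain-Python alternative to both answers, for comparison (a sketch using the question's inputs, no NumPy required):

```python
list0 = list(range(10))
list1 = list(range(2, 12))

# Interleave pairwise: one element from list0, then one from list1.
interleaved = [x for pair in zip(list0, list1) for x in pair]
print(interleaved)
```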
11,400,590 | I have a python dictionary consisting of JSON results. The dictionary contains a nested dictionary, which contains a nested list which contains a nested dictionary. Still with me? Here's an example:
```
{'hits': {'results': [{'key1': 'value1',
                       'key2': 'value2',
                       'key3': {'sub_key': 'sub_value'}},
                      {'key1': 'value3',
                       'key2': 'value4',
                       'key3': {'sub_key': 'sub_value2'}}]}}
```
What I want to get from the dictionary is the `sub_value` of each `sub_key` and store it in a different list. No matter what I try I keep getting errors.
This was my last attempt at it:
```
inner_list=mydict['hits']['results']#This is the list of the inner_dicts
index = 0
for x in inner_list:
    new_dict[index] = x[u'sub_key']
    index = index + 1
print new_dict
```
It printed the first few results then started to return everything in the original dictionary. I can't get my head around it. If I replace the `new_dict[index]` line with a `print` statement it prints to the screen perfectly. Really need some input on this!
```
for x in inner_list:
    print x[u'sub_key']
``` | 2012/07/09 | [
"https://Stackoverflow.com/questions/11400590",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1046501/"
] | The most appropriate way is to override the `clean` method of your model:
```
from django.template import defaultfilters
class Article(models.Model):
    ...
    def clean(self):
        if self.slug.strip() == '':
            self.slug = defaultfilters.slugify(self.title)
        super(Article, self).clean()
```
This method will be called before the model is saved, and before any uniqueness checks are done, so if there's an issue, it will still be caught.
You can read about model's clean method [here](https://docs.djangoproject.com/en/dev/ref/models/instances/#django.db.models.Model.clean) | I would build it into the input form and use a ModelAdmin or ModelForm.
Admin Form:
```
from django.contrib import admin
class ArticleAdmin(admin.ModelAdmin):
    prepopulated_fields = {'slug': ('title', )}
```
ModelForm:
```
class ArticleForm(forms.ModelForm):
    class Meta:
        model = Article

    def clean_slug(self):
        if not self.cleaned_data['slug']:
            self.cleaned_data['slug'] = slugify(self.cleaned_data['title'])
        return self.cleaned_data['slug']
```
again in that clean\_slug you may want to check to see if its unique first... and modify the slug to be unique if not. | 384 |
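For illustration only, a minimal plain-Python slugify fallback (a hypothetical simplification; Django's `defaultfilters.slugify` handles many more cases, such as Unicode):

```python
import re

def simple_slugify(title):
    # Lowercase, replace runs of non-alphanumerics with '-', trim dashes
    slug = title.lower().strip()
    slug = re.sub(r'[^a-z0-9]+', '-', slug)
    return slug.strip('-')

print(simple_slugify('Hello, World!'))
```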
11,928,277 | I can't seem to install Rpy2 for Python. Initially I ran across the problem where it displayed the following error.
```
Tried to guess R's HOME but no R command in the PATH.
```
But then I followed instructions in the following thread: [trouble installing rpy2 on win7 (R 2.12, Python 2.5)](https://stackoverflow.com/questions/4924917/trouble-installing-rpy2-on-win7-r-2-12-python-2-5)
whereby I placed and copied all the files in R\R-2.12.1\bin\i386 to the R\R-2.12.1\bin and then set my environment path to point to R\R-2.12.1. Now trying to install it from source again:
```
python setup.py run
```
I get the same error. If I set the path variable to R\R-2.12.1\bin\ then I get the following error as showed by the person who gave the second answer
```
ValueError: Invalid substring in string
```
That thread went out of ideas so I thought a year from now if there are new ways to work around this.
EDIT = once
Thanks in advance | 2012/08/13 | [
"https://Stackoverflow.com/questions/11928277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1610626/"
] | Me too, I had many difficulties getting rpy2 up and running, even after following the crucial link in the answer from lgauthier. But, the final help came from one of the replies on that mailing list.
Summarized, these were the 4 steps needed to get rpy2 up and running on my Windows7 computer:
1. Install rpy2 from this link: <https://bitbucket.org/breisfeld/rpy2_w32_fix/issue/1/binary-installer-for-win32>
2. Add C:\Program Files\R\R-2.12.1\bin\i386 (the path to R.dll) to the environment variable PATH
3. Add an environment variable R\_HOME with C:\Program Files\R\R-2.12.1
4. Add an environment variable R\_USER with your Windows username
In case you don't know how to add/change environment variables, look e.g. here: <http://www.computerhope.com/issues/ch000549.htm> | Check the [rpy-mailing list](http://www.mail-archive.com/rpy-list@lists.sourceforge.net/msg03340.html) on July 18th. There is slight progress on the Windows front for rpy2, and people are reporting some success running it. | 387 |
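As an aside, steps 2–4 can also be mirrored per-process from Python before importing rpy2 (a sketch; the paths are the ones from the answer and are assumptions about the install location):

```python
import os

# Hypothetical per-process setup mirroring the environment-variable steps
os.environ['PATH'] = r'C:\Program Files\R\R-2.12.1\bin\i386;' + os.environ.get('PATH', '')
os.environ['R_HOME'] = r'C:\Program Files\R\R-2.12.1'
os.environ['R_USER'] = os.environ.get('USERNAME', 'your-windows-username')
print(os.environ['R_HOME'])
```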
62,686,320 | How do I stop from printing an extra input line? I'm new with python/coding
```
class1 = "Math"
class2 = "English"
class3 = "PE"
class4 = "Science"
class5 = "Art"
def get_input(className):
    classInput = raw_input("Enter the score you received for " + className + ": ")
    while int(classInput) >= 101 or int(classInput) <= -1:
        print "Needs to be in the range 0 to 100"
        classInput = raw_input("Enter the score you received for " + className + ": ")
    return int(classInput)
def get_letter_grade(grade):
    if grade >= 93:
        return "A"
    elif grade >= 90:
        return "A-"
    elif grade >= 87:
        return "B+"
    elif grade >= 83:
        return "B"
    elif grade >= 80:
        return "B-"
    elif grade >= 77:
        return "C+"
    elif grade >= 73:
        return "C"
    elif grade >= 70:
        return "C-"
    elif grade >= 67:
        return "D+"
    elif grade >= 63:
        return "D"
    elif grade >= 60:
        return "D-"
    else:
        return "F"
print "Your " + class1 + " score is " + str(get_input(class1)) + ", you got a " +
get_letter_grade(get_input(class1))
```
Prints out:
```none
Enter the score you received for Math: 85
Enter the score you received for Math: 85
Your Math score is 85, you got a B
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62686320",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13850019/"
] | Inside your print, you call the `get_input()` method twice:
```
print "Your " + class1 + " score is " + str(get_input(class1)) + ", you got a " +
get_letter_grade(get_input(class1))
```
What you need to do is store your score by calling the `get_input()` method once and use the stored value in the print statement:
```
score = get_input(class1)
print("Your " + class1 + " score is " + str(score) + ", you got a " +
get_letter_grade(score))
``` | I would separate out your calls to `get_input` from your print statement, not just here, but generally.
```
score = get_input(class1)
print "Your " + class1 + " score is " + str(score) + ", you got a " + get_letter_grade(score)
```
As a rule of thumb, any user input should almost always be immediately stored in a variable to be manipulated and/or used later. | 390 |
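A Python 3 sketch of the call-once pattern both answers describe (a fixed score stands in for the `raw_input` call, and the grading scale is abbreviated):

```python
def get_letter_grade(grade):
    # Abbreviated scale for the sketch
    if grade >= 90:
        return 'A'
    elif grade >= 83:
        return 'B'
    elif grade >= 70:
        return 'C'
    return 'F'

score = 85  # imagine this came from a single input() call
message = 'Your Math score is ' + str(score) + ', you got a ' + get_letter_grade(score)
print(message)
```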
48,326,721 | I'm trying to mock **elasticsearch.Elasticsearch.indices.exists** function in my Python test case, but I'm getting the following import error. However, mock just **elasticsearch.Elasticsearch** was working fine.
```
@ddt
class TestElasticSearchConnector(unittest.TestCase):
    @patch('elasticsearch.Elasticsearch.indices.exists')
    @patch('connectors.elastic_search_connector.ElasticSearchConnector._get_local_conn')
    def test_check_index(self, mock_es, _get_local_conn):
        mock_es = Mock()
        mock_es._index_exists = False
        mock_es.indices.exists.return_value = True
        mock_es.create.return_value = {'result': 'created'}
```
Getting the mock import error here:
```
======================================================================
ERROR: test_check_index (tests.base.TestESConnector)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/user/.virtualenvs/my-prjlib/python3.6/site-packages/mock/mock.py", line 1197, in _dot_lookup
    return getattr(thing, comp)
AttributeError: type object 'Elasticsearch' has no attribute 'indices'
```
```
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/user/.virtualenvs/my-prjlib/python3.6/site-packages/mock/mock.py", line 1297, in patched
arg = patching.__enter__()
File "/Users/user/.virtualenvs/my-prjlib/python3.6/site-packages/mock/mock.py", line 1353, in __enter__
self.target = self.getter()
File "/Users/user/.virtualenvs/my-prjlib/python3.6/site-packages/mock/mock.py", line 1523, in <lambda>
getter = lambda: _importer(target)
File "/Users/user/.virtualenvs/my-prjlib/python3.6/site-packages/mock/mock.py", line 1210, in _importer
thing = _dot_lookup(thing, comp, import_path)
File "/Users/user/.virtualenvs/my-prjlib/python3.6/site-packages/mock/mock.py", line 1199, in _dot_lookup
__import__(import_path)
ModuleNotFoundError: No module named 'elasticsearch.Elasticsearch'
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (errors=1)
```
Test import
```
>> user$ python
Python 3.6.1 (default, May 10 2017, 09:46:05)
[GCC 4.2.1 Compatible Apple LLVM 8.1.0 (clang-802.0.42)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from elasticsearch import Elasticsearch
>>>
>>>
``` | 2018/01/18 | [
"https://Stackoverflow.com/questions/48326721",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1187968/"
] | This is legitimately an error. When specifying an attribute/method to mock, it must exist on the object (in this case a class). Perhaps you were expecting this attribute to exist, but it only present on the instantiated object.
```
In [1]: from elasticsearch import Elasticsearch
In [2]: Elasticsearch.indices
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-2-313eaaedb2f6> in <module>()
----> 1 Elasticsearch.indices
```
Indeed, it exists on an instantiated object:
```
In [3]: Elasticsearch().indices
Out[3]: <elasticsearch.client.indices.IndicesClient at 0x102db0a90>
``` | The Elasticsearch library generates the `indices` attribute when you instantiate an `Elasticsearch()` object. It does so using a class of the library called `IndicesClient`, and it is that class that has the `exists` method. Therefore, if you mock the response of that method of the `IndicesClient` class, your test should work.
Also, the input params of the function should be in reverse order with respect to the decorators. If you put the `indices.exists` patch first, it should go second in the input to the function.
```
from elasticsearch.client import IndicesClient
@mock.patch.object(IndicesClient, 'exists')
@mock.patch('connectors.elastic_search_connector.ElasticSearchConnector._get_local_conn')
def test_check_index(self, mock_get_local_conn, mock_exists):
mock_exists.return_value = True
...
``` | 391 |
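To make the decorator-ordering point concrete, here is a self-contained sketch using a toy class (the `Service`/`fetch`/`save` names are made up for illustration; no elasticsearch is needed):

```python
from unittest import mock

class Service:
    def fetch(self):
        return "real fetch"
    def save(self):
        return "real save"

# Decorators apply bottom-up, so the mock arguments arrive in reverse:
# the decorator closest to the function provides the first argument.
@mock.patch.object(Service, "save")
@mock.patch.object(Service, "fetch")
def run(mock_fetch, mock_save):
    mock_fetch.return_value = "fake fetch"
    mock_save.return_value = "fake save"
    s = Service()
    return s.fetch(), s.save()

print(run())  # ('fake fetch', 'fake save')
```

If the two arguments were swapped, each test would silently configure the wrong mock, which is a common source of confusing failures.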
13,617,019 | I have code with heavy symbolic calculations (many multiple symbolic integrals). I also have access to both an 8-core CPU computer (with 18 GB RAM) and a small 32-CPU cluster. I would prefer to stay on my professor's 8-core PC rather than go to another professor's lab to use his cluster for a more limited time; however, I'm not sure it will work on the SMP system, so I am looking for a *parallel tool* in **Python** that can be used on both **SMP** systems and **clusters**, and of course I would prefer code written for one system to be **easily, and with the least effort,** modifiable for use on the other system.
So far, I have found Parallel Python (PP) promising for my needs, but I have recently been told that MPI does the same (pyMPI or MPI4py). I couldn't verify this, as seemingly very little is discussed about it on the web; only [here](http://wiki.python.org/moin/ParallelProcessing) is it stated that MPI (both pyMPI and MPI4py) is usable for **clusters** only, if I am right about that "only"!
Is "Parallel Python" my only choice, or can I also happily use MPI-based solutions? Which one is more promising for my needs?
**PS**. It seems none of them has very comprehensive documentation, so if you know of links other than their official websites that can help a newbie in parallel computation, I would be grateful if you would also mention them in your answer :)
---
**Edit**.
My code has two loops, one inside the other. The **outer loop** cannot be parallelized, as it is an iterative method (*a recursive solution*), each step depending on the values calculated in the previous step. The outer loop contains the *inner loop* alongside *3 extra equations* whose calculations depend on the complete results of the inner loop. However, the **inner loop** (which contains 9 of the 12 equations computable at each step) can be safely parallelized: all 3\*3 equations are independent of each other, depending only on the previous step. All my equations are computationally heavy, as each contains many multiple symbolic integrals. Seemingly I can parallelize both the **inner loop's 9 equations** and the **integration calculations in each of these 9 equations** separately, and also parallelize all the **integrations in the other 3 equations alongside the inner loop**. You can find my code [**here**](http://ask.sagemath.org/question/1661/how-to-speed-up-a-code-containing-several-symbolic) if it helps you better understand my need; it is written in *SageMath*.
"https://Stackoverflow.com/questions/13617019",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1631618/"
] | I would look into `multiprocessing` [(doc)](http://docs.python.org/2/library/multiprocessing.html), which provides a bunch of nice tools for spawning and working with sub-processes.
To quote the documentation:
>
> multiprocessing is a package that supports spawning processes using an
> API similar to the threading module. The multiprocessing package
> offers both local and remote concurrency, effectively side-stepping
> the Global Interpreter Lock by using subprocesses instead of threads.
>
>
>
From the comments I think the `Pool` and its `map` would serve your purposes [(doc)](http://docs.python.org/2/library/multiprocessing.html#module-multiprocessing.pool).
```
def work_done_in_inner_loop(arg):
# put your work code here
pass
p = Pool(9)
for o in outer_loop:
# what ever else you do
list_of_args = [...] # what your inner loop currently loops over
    res = p.map(work_done_in_inner_loop, list_of_args)
# rest of code
``` | I recently ran into a similar problem. However, the following solution is only valid if (1) you wish to run the python script individually on a group of files, AND (2) each invocation of the script is independent of the others.
If the above applies to you, the simplest solution is to write a wrapper in bash along the lines of:
```
for a_file in $list_of_files
do
    python python_script.py "$a_file" &
done
```
The '&' will run the preceding command as a sub-process. The advantage is that bash will not wait for the python script to finish before continuing with the for loop.
You may want to place a cap on the number of processes running simultaneously, since this code will use all available resources. | 392 |
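For the loop structure described in the question's edit, the inner equations map naturally onto `Pool.map` while the outer iteration stays sequential. A minimal sketch, with a cheap arithmetic stand-in for the symbolic integrals:

```python
from multiprocessing import Pool

def inner_equation(args):
    i, prev = args
    return prev + i * i        # stand-in for one heavy symbolic integral

if __name__ == "__main__":
    state = 0
    with Pool(processes=4) as pool:
        for step in range(3):                     # outer loop: sequential
            results = pool.map(inner_equation,
                               [(i, state) for i in range(9)])
            state = sum(results)                  # outer equations see all inner results
    print(state)
```

Each `pool.map` call farms the nine independent computations out to worker processes, and the outer equations only run once all inner results are back, mirroring the dependency structure described above.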
31,961,754 | I've got a python script that uses the **ansible** package to ping some remote servers. When executed manually (*python devmanager.py*) it works ok, but when the script is managed with **supervisor** it raises the following error:
```
Could not make dir /$HOME/.ansible/cp: [Errno 13] Permission denied: '/$HOME
```
The ansible command is quite simple:
```
runner = ansible.runner.Runner(
module_name='ping',
module_args='',
forks=10,
inventory=inventory
)
```
Same user on the source and target systems. I've checked permissions for the $HOME folder and didn't find anything weird.
Any idea what is going on? Doesn't it know to expand the $HOME variable? | 2015/08/12 | [
"https://Stackoverflow.com/questions/31961754",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/315521/"
] | You may give it a try by altering the `remote_tmp` parameter in ansible.cfg.
Default: `$HOME/.ansible/tmp`
Updated: `/tmp/.ansible/tmp`
In this case, whichever user runs the playbook will have enough permission to create the necessary temporary files in the /tmp directory. | Yes, it seems that it doesn't expand the `$HOME` variable and tries to write under a literal `/$HOME`.
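That is consistent with how the standard library behaves: `os.path.expanduser` only expands a leading `~`, so a literal `$HOME` in a path is passed through untouched. A quick illustration (the home directory below is a toy value, not the real environment):

```python
import os

os.environ["HOME"] = "/home/deploy"      # toy value for the demo

print(os.path.expanduser("~/.ansible/cp"))       # /home/deploy/.ansible/cp
print(os.path.expanduser("$HOME/.ansible/cp"))   # unchanged: $HOME/.ansible/cp
print(os.path.expandvars("$HOME/.ansible/cp"))   # /home/deploy/.ansible/cp
```

So a configuration value containing a literal, unexpanded `$HOME` ends up being treated as a relative path under `/`, which matches the `Permission denied: '/$HOME...'` error above.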
72,815,781 | I am getting the error below when using geopandas and shapely
```
AttributeError: 'DataFrame' object has no attribute 'crs'
```
Below is the code:
```
#geometry = [Point(xy) for xy in zip(complete_major_accidents['longitude'], complete_major_accidents['latitude'])]
#crs='none'
geometry = gpd.points_from_xy(complete_nonmajor_accidents.longitude, complete_nonmajor_accidents.latitude)
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
#geometries = world['geometry'].apply(lambda x: x.wkt).values
#print(geometries)
#print(tuple(geometry))
gdf = GeoDataFrame(complete_major_accidents, geometry)
gdf
ax = world[world['name'] == 'United Kingdom'].plot(figsize=(15, 15))
#print(type(ax))
gdf.plot(ax = ax, marker='o', color='red', markersize=15, edgecolor='black')
#gdf.plot(ax=world.plot(figsize=(15, 15)), marker='o', color='red', markersize=15)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/tmp/ipykernel_330/1106976374.py in <module>
12 ax = world[world['name'] == 'United Kingdom'].plot(figsize=(15, 15))
13 #print(type(ax))
---> 14 gdf.plot(ax = ax, marker='o', color='red', markersize=15, edgecolor='black')
15 #gdf.plot(ax=world.plot(figsize=(15, 15)), marker='o', color='red', markersize=15)
~/.local/lib/python3.8/site-packages/geopandas/plotting.py in __call__(self, *args, **kwargs)
961 kind = kwargs.pop("kind", "geo")
962 if kind == "geo":
--> 963 return plot_dataframe(data, *args, **kwargs)
964 if kind in self._pandas_kinds:
965 # Access pandas plots
~/.local/lib/python3.8/site-packages/geopandas/plotting.py in plot_dataframe(df, column, cmap, color, ax, cax, categorical, legend, scheme, k, vmin, vmax, markersize, figsize, legend_kwds, categories, classification_kwds, missing_kwds, aspect, **style_kwds)
674
675 if aspect == "auto":
--> 676 if df.crs and df.crs.is_geographic:
677 bounds = df.total_bounds
678 y_coord = np.mean([bounds[1], bounds[3]])
~/.local/lib/python3.8/site-packages/pandas/core/generic.py in __getattr__(self, name)
5573 ):
5574 return self[name]
-> 5575 return object.__getattribute__(self, name)
5576
5577 def __setattr__(self, name: str, value) -> None:
AttributeError: 'DataFrame' object has no attribute 'crs'
``` | 2022/06/30 | [
"https://Stackoverflow.com/questions/72815781",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10908274/"
] | I was finally able to resolve it by changing the piece of code below:
```
gdf = GeoDataFrame(complete_major_accidents, geometry)
```
to
```
gdf = GeoDataFrame(complete_nonmajor_accidents, geometry = geometry)
``` | I got the same error after updating GeoPandas from an older version. The following fix did the trick.
`self.ax = gpd.GeoDataFrame().plot(figsize=(18, 12))`
to
`self.ax = gpd.GeoDataFrame(geometry=[]).plot(figsize=(18, 12))` | 396 |
64,886,214 | I'm a new python developer and I watched a few tutorials on YouTube explaining the functions and the uses for this module, but I cannot get it to work. I installed the module via pip so I don't think that is the issue.
```
import urllib.request
x = urllib.request.urlopen('https://www.google.com')
print(x.read())
```
**Output:**
```
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 1342, in do_open
    h.request(req.get_method(), req.selector, req.data, headers,
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1255, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1301, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1250, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1010, in _send_output
    self.send(msg)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 950, in send
    self.connect()
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1424, in connect
    self.sock = self._context.wrap_socket(self.sock,
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ssl.py", line 500, in wrap_socket
    return self.sslsocket_class._create(
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ssl.py", line 1040, in _create
    self.do_handshake()
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ssl.py", line 1309, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1122)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/mike/PycharmProjects/urllib/main.py", line 8, in <module>
    x = urllib.request.urlopen('https://www.google.com')
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 214, in urlopen
    return opener.open(url, data, timeout)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 517, in open
    response = self._open(req, data)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 534, in _open
    result = self._call_chain(self.handle_open, protocol, protocol +
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 494, in _call_chain
    result = func(*args)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 1385, in https_open
    return self.do_open(http.client.HTTPSConnection, req,
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 1345, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1122)>

Process finished with exit code 1
``` | 2020/11/18 | [
"https://Stackoverflow.com/questions/64886214",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14558709/"
] | I gave the answer to this in the previous question you asked. You did not make any effort and did not follow the answers to the questions you asked. This is probably your homework, or part of it, and you are just waiting for a solution. I hope you will share your work and then ask questions; that would be better for your improvement.
I am sharing my answer again because it may help others, and the previous question was deleted. I hope this answer works.
Please share your work before asking questions next time, and follow up on your question and its answers; otherwise you may not be able to solve your problem.
```
puzzle =["a","a","a"," ","b","b","b"]
## don't forget your input will start at 0
## because you will pick an index
def move(puzzle):
move_index = int(input("enter an index for move right"))
for i in puzzle:
if move_index < len(puzzle):
if puzzle[move_index + 1] == " " and puzzle[move_index] == "a":
puzzle[move_index],puzzle[move_index+1] = puzzle[move_index +1 ],puzzle[move_index] ## if right index is free,move
return puzzle
if puzzle[move_index - 1] == " " and puzzle[move_index] == "b" :
puzzle[move_index], puzzle[move_index - 1] = puzzle[move_index - 1], puzzle[move_index]
return puzzle
if puzzle[move_index - 2] == " " and puzzle[move_index-1] == "a" and puzzle[move_index] == "b":
puzzle[move_index], puzzle[move_index - 2] = puzzle[move_index - 2], puzzle[move_index]
return puzzle
if puzzle[move_index + 2] == " " and puzzle[move_index + 1 ] == "b" and puzzle[move_index] == "a" :
puzzle[move_index], puzzle[move_index + 2] = puzzle[move_index + 2], puzzle[move_index]
return puzzle
if move_index == len(puzzle): ## you can move last element only for your conditons
puzzle.append(puzzle.pop(move_index-1)) ## switch for last and first
puzzle.insert(0, puzzle.pop())
return puzzle ## updated puzzle
else:
return puzzle
def game():
is_game_continue = int(input("Do you want continue ? (1) Yes (2) No")) ## for enter new moves
return is_game_continue
while game() == 1: ## if user want continue
## you can add other options like return value != 1
current_puzzle = move(puzzle)
print(current_puzzle)
``` | You should not use `from` as a variable name, since it's a reserved keyword in Python.
Maybe that's what's causing your problem.
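A quick way to confirm the point about reserved words, since `from` is rejected at compile time:

```python
import keyword

print(keyword.iskeyword("from"))        # True

try:
    compile("from = 1", "<demo>", "exec")
except SyntaxError:
    print("'from' cannot be used as a variable name")
```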
31,969,540 | My python scripts often contain "executable code" (functions, classes, &c) in the first part of the file and "test code" (interactive experiments) at the end.
I want `python`, `py_compile`, `pylint` &c to completely ignore the experimental stuff at the end.
I am looking for something like `#if 0` for `cpp`.
**How can this be done?**
Here are some ideas and the reasons they are bad:
1. `sys.exit(0)`: works for `python` but not `py_compile` and `pylint`
2. put all experimental code under `def test():`: I can no longer copy/paste the code into a `python` REPL because it has non-trivial indent
3. put all experimental code between lines with `"""`: emacs no longer indents and fontifies the code properly
4. comment and uncomment the code all the time: I am too lazy (yes, this is a single key press, but I have to remember to do that!)
5. put the test code into a separate file: I want to keep the related stuff together
PS. My IDE is Emacs and my python interpreter is `pyspark`. | 2015/08/12 | [
"https://Stackoverflow.com/questions/31969540",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/850781/"
] | Use `ipython` rather than `python` for your REPL. It has better code completion and introspection, and when you paste indented code it can automatically "de-indent" it.
Thus you can put your experimental code in a test function and then paste in parts without worrying about having to de-indent your code.
If you are pasting large blocks that can be considered individual blocks, then you will need to use the `%paste` or `%cpaste` magics.
e.g.:
```
for i in range(3):
i *= 2
# with the following blank line this is a complete block
print(i)
```
With a normal paste:
```
In [1]: for i in range(3):
...: i *= 2
...:
In [2]: print(i)
4
```
Using `%paste`
```
In [3]: %paste
for i in range(10):
i *= 2
print(i)
## -- End pasted text --
0
2
4
In [4]:
```
### PySpark and IPython
>
> It is also possible to launch PySpark in IPython, the enhanced Python interpreter. PySpark works with IPython 1.0.0 and later. To use IPython, set the IPYTHON variable to 1 when running bin/pyspark:[1](https://spark.apache.org/docs/0.9.0/python-programming-guide.html)
>
>
>
> ```
> $ IPYTHON=1 ./bin/pyspark
>
> ```
>
> | I suggest you use a proper version control system to keep the "real" and the "experimental" parts separated.
For example, using Git, you could only include the real code without the experimental parts in your commits (using [`add -p`](https://git-scm.com/book/en/v2/Git-Tools-Interactive-Staging#Staging-Patches)), and then temporarily [`stash`](https://git-scm.com/book/en/v1/Git-Tools-Stashing) the experimental parts for running your various tools.
You could also keep the experimental parts in their own branch which you then [`rebase`](https://git-scm.com/book/en/v2/Git-Branching-Rebasing) on top of the non-experimental parts when you need them. | 399 |
19,167,550 | My code goes through a number of files reading them into lists with the command:
```
data = np.loadtxt(myfile, unpack=True)
```
Some of these files are empty (I can't control that) and when that happens I get this warning printed on screen:
```
/usr/local/lib/python2.7/dist-packages/numpy/lib/npyio.py:795: UserWarning: loadtxt: Empty input file: "/path_to_file/file.dat"
warnings.warn('loadtxt: Empty input file: "%s"' % fname)
```
How can I prevent this warning from showing? | 2013/10/03 | [
"https://Stackoverflow.com/questions/19167550",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1391441/"
] | You will have to wrap the line with `catch_warnings`, then call the `simplefilter` method to suppress those warnings. For example:
```
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
data = np.loadtxt(myfile, unpack=True)
```
Should do it. | One obvious possibility is to pre-check the files:
```
if os.fstat(myfile.fileno()).st_size:
data = np.loadtxt(myfile, unpack=True)
else:
# whatever you want to do for empty files
``` | 409 |
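If you only have a filename rather than an open file object, `os.path.getsize` does the same job. A small self-contained sketch using a temporary empty file (the helper name is illustrative):

```python
import os
import tempfile

def is_worth_loading(path):
    return os.path.getsize(path) > 0

fd, path = tempfile.mkstemp()          # mkstemp creates an empty file
os.close(fd)
print(is_worth_loading(path))          # False -> skip np.loadtxt for this file
os.remove(path)
```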
22,345,798 | I currently have working Python code on the command line. How can I convert it into a GUI program? I know how to design a GUI (make buttons, callback functions, create text fields, label widgets...). My question is how the GUI should be connected to the existing program. *Should I make a python file called gui.py and import it in the main program,
or should it be the other way around?*
eg:
```
n = int(raw_input('enter an integer: '))
def fx(n):
result = ''
for i in xrange(1,11):
result += "{} x {} = {}\n".format(i,n,i*n)
return result
print fx(n)
```
The above program prints the multiplication table of an integer. What should the GUI program look like (with an entry box, a button widget, and a text widget where the output will be printed)? Should this program call the GUI code, or should I include this code (the **fx()** function) in the **GUI class**? | 2014/03/12 | [
"https://Stackoverflow.com/questions/22345798",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2332665/"
] | As the GUI is the user front-end, and because your function already exists, the easiest approach is to have the GUI class import the function. On an event, the GUI calls the function and handles the display to the user.
In fact, it's exactly what you have done with a Command-Line Interface (CLI) in your example code :) | I would say the answer strongly depends on your choice of GUI framework. For a small piece of code like the one you posted, you may want to rely on the "batteries included" tkinter. In this case I agree with shaktimaan's comment to simply include the tkinter commands in your existing code. But you have many choices, like PyQt, PySide, kivy... All these frameworks have possibilities to separate program logic from GUI view code, but have different ways to achieve this.
So read about these frameworks if you're not satisfied with tkinter and make a choice; then you can ask again how to do this separation if you're not sure.
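A minimal sketch of that arrangement, keeping `fx` free of GUI imports so the same module still works from the command line; the callback names here are illustrative stand-ins for whatever widget callbacks your toolkit provides:

```python
# logic.py -- pure computation, no GUI imports
def fx(n):
    return "".join("{} x {} = {}\n".format(i, n, i * n) for i in range(1, 11))

# gui.py would do `from logic import fx` and call it from a button callback.
# A stand-in for such a callback, showing the separation:
def on_button_click(entry_text, set_output):
    try:
        n = int(entry_text)
    except ValueError:
        set_output("please enter an integer")
        return
    set_output(fx(n))

shown = []
on_button_click("3", shown.append)
print(shown[0].splitlines()[0])   # 1 x 3 = 3
```

In a real GUI, `entry_text` would come from the entry widget and `set_output` would write into the text widget; the point is that `fx` never needs to know which.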
63,580,623 | Right now I'm sitting on a blank file which consists only of the following:
```
import os
import sys
import shlex
import subprocess
import signal
from time import monotonic as timer
```
I get this error when I try to run my file: ImportError: Cannot import name monotonic
If it matters, I am on Linux and my Python version is 2.7.16; I can't really change any of this because I'm working on my school's server... What exactly is causing the error? | 2020/08/25 | [
"https://Stackoverflow.com/questions/63580623",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10847907/"
] | You'll need to use the regular Producer and execute the serialization functions yourself:
```
from confluent_kafka import avro
from confluent_kafka.avro import CachedSchemaRegistryClient
from confluent_kafka.avro.serializer.message_serializer import MessageSerializer as AvroSerializer
avro_serializer = AvroSerializer(schema_registry)
serialize_avro = avro_serializer.encode_record_with_schema # extract function definition
value_schema = avro.load('avro_schemas/value.avsc') # TODO: Create avro_schemas folder
p = Producer({'bootstrap.servers': bootstrap_servers})
value_payload = serialize_avro(topic, value_schema, value, is_key=False)
p.produce(topic, key=key, value=value_payload, callback=delivery_report)
``` | `AvroProducer` assumes that both keys and values are encoded with the schema registry, prepending a magic byte and the schema id to the payload of both the key and the value.
If you want to use a custom serialization for the key, you could use a `Producer` instead of an `AvroProducer`. But it will be your responsibility to serialize the key (using whatever format you want) and the values (which means encoding the value and prepending the magic byte and the schema id). To find out how this is done you can look at the `AvroProducer` code.
But it also means you'll have to write your own `AvroConsumer` and won't be able to use the `kafka-avro-console-consumer`. | 411 |
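For reference, the framing that the schema-registry serializer prepends is small. A sketch of just that envelope, following Confluent's wire format (one magic byte `0`, then a 4-byte big-endian schema id, then the Avro body), independent of any Kafka client:

```python
import struct

MAGIC_BYTE = 0

def frame_payload(schema_id, avro_bytes):
    # Confluent wire format: 1 magic byte, 4-byte big-endian schema id, Avro body
    return struct.pack(">bI", MAGIC_BYTE, schema_id) + avro_bytes

framed = frame_payload(42, b"\x02hi")
print(framed[:5].hex())   # 000000002a
```

This is what a hand-rolled consumer would need to strip off before decoding the Avro body, and what `kafka-avro-console-consumer` expects to find on both keys and values.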
69,833,702 | I keep running into this use and I haven't found a good solution. I am asking for a solution in python, but a solution in R would also be helpful.
I've been getting data that looks something like this:
```
import pandas as pd
data = {'Col1': ['Bob', '101', 'First Street', '', 'Sue', '102', 'Second Street', '', 'Alex' , '200', 'Third Street', '']}
df = pd.DataFrame(data)
Col1
0 Bob
1 101
2 First Street
3
4 Sue
5 102
6 Second Street
7
8 Alex
9 200
10 Third Street
11
```
The pattern in my real data does repeat like this. Sometimes there is a blank row (or more than 1), and sometimes there are not any blank rows. The important part here is that I need to convert this column into a row.
I want the data to look like this.
```
Name Address Street
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
```
I have tried playing around with this, but nothing has worked. My thought was to iterate through a few rows at a time, assign the values to the appropriate column, and just build a data frame row by row.
```
x = len(df['Col1'])
holder = pd.DataFrame()
new_df = pd.DataFrame()
while x < 4:
temp = df.iloc[:5]
holder['Name'] = temp['Col1'].iloc[0]
holder['Address'] = temp['Col1'].iloc[1]
holder['Street'] = temp['Col1'].iloc[2]
new_df = pd.concat([new_df, holder])
df = temp[5:]
df.reset_index()
holder = pd.DataFrame()
x = len(df['Col1'])
new_df.head(10)
``` | 2021/11/04 | [
"https://Stackoverflow.com/questions/69833702",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14167846/"
] | In `R`,
```
data <- data.frame(
Col1 = c('Bob', '101', 'First Street', '', 'Sue', '102', 'Second Street', '', 'Alex' , '200', 'Third Street', '')
)
k<-which(grepl("Street", data$Col1) == TRUE)
j <- k-1
i <- k-2
data.frame(
Name = data[i,],
Adress = data[j,],
Street = data[k,]
)
Name Adress Street
1 Bob 101 First Street
2 Sue 102 Second Street
3 Alex 200 Third Street
```
Or, if `Street` does not always end with `Street` but `Adress` is always a number, you can also try
```
j <- which(apply(data, 1, function(x) !is.na(as.numeric(x)) ))
i <- j-1
k <- j+1
``` | ### Python3
In Python 3, you can convert your DataFrame into an array and then reshape it.
```py
n = df.shape[0]
df2 = pd.DataFrame(
data=df.to_numpy().reshape((n//4, 4), order='C'),
columns=['Name', 'Address', 'Street', 'Empty'])
```
This produces for your sample data this:
```
Name Address Street Empty
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
```
If you like you can remove the last column:
```py
df2 = df2.drop(['Empty'], axis=1)
```
```
Name Address Street
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
```
### One-liner code
```
df2 = pd.DataFrame(data=df.to_numpy().reshape((df.shape[0]//4, 4), order='C' ), columns=['Name', 'Address', 'Street', 'Empty']).drop(['Empty'], axis=1)
```
```
Name Address Street
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
``` | 412 |
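If NumPy isn't available, or the number of blank separator rows varies as the question mentions, plain iteration over the column builds the same records with only the standard library:

```python
col1 = ['Bob', '101', 'First Street', '',
        'Sue', '102', 'Second Street', '',
        'Alex', '200', 'Third Street', '']

records, current = [], []
for value in col1:
    if value == '':                 # any run of blanks ends the current record
        if current:
            records.append(current)
            current = []
    else:
        current.append(value)
if current:                         # trailing record without a blank after it
    records.append(current)

print(records)
```

From there, `pd.DataFrame(records, columns=['Name', 'Address', 'Street'])` rebuilds the frame.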
56,746,773 | I had a college exercise containing a question that asked me to write a function that returns how many times a particular key repeats in an object in Python. After researching dictionaries, I know that Python automatically ignores duplicate keys, keeping only the last one. I tried to loop over each key the conventional way:
```
dictt = {'a' : 22, 'a' : 33, 'c' : 34, 'd' : 456}
lookFor = 'a'
times = 0
for k,v in dictt.items():
if k == lookFor:
times = times + 1
```
This would return 1. Even if I check the length of the dictionary, it shows 3, meaning only one of the 'a' keys was kept. | 2019/06/25 | [
"https://Stackoverflow.com/questions/56746773",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9096030/"
] | Just to mention other options note that you can use the `filter` function here:
```
julia> filter(row -> row.a == 2, df)
1×2 DataFrame
│ Row │ a │ b │
│ │ Int64 │ String │
├─────┼───────┼────────┤
│ 1 │ 2 │ y │
```
or
```
julia> df[filter(==(2), df.a), :]
1×2 DataFrame
│ Row │ a │ b │
│ │ Int64 │ String │
├─────┼───────┼────────┤
│ 1 │ 2 │ y │
``` | Fortunately, you only need to add one character: `.`. The `.` character enables broadcasting on any Julia function, even ones like `==`. Therefore, your code would be as follows:
```
df = DataFrame(a=[1,2,3], b=["x", "y", "z"])
df2 = df[df.a .== 2, :]
```
Without the broadcast, the clause `df.a == 2` returns `false` because it's literally comparing the Array [1,2,3], as a whole unit, to the scalar value of 2. An Array of shape (3,) will never be equal to a scalar value of 2, without broadcasting, because the sizes are different. Therefore, that clause just returns a single `false`.
The error you're getting tells you that you're trying to access the DataFrame at index `false`, which is not a valid index for a DataFrame with 3 rows. By broadcasting with `.`, you're now creating a Bool Array of shape (3,), which is a valid way to index a DataFrame with 3 rows.
For more on broadcasting, see the official Julia documentation [here](https://docs.julialang.org/en/v1/manual/functions/#man-vectorized-1). | 419 |
38,212,340 | I am trying to extract all those tags whose class name fits the regex pattern frag-0-0, frag-1-0, etc. from [this link](http://de.vroniplag.wikia.com/wiki/Aak/002)
I am trying to retrieve it using the following code
```
driver = webdriver.Chrome(chromedriver)
for frg in frgs:
driver.get(URL + frg[1:])
frags=driver.find_elements_by_id(re.compile('frag-[0-9]-0'))
for frag in frags:
for tag in frag.find_elements_by_css_selector('[class^=fragmark]'):
lst.append([tag.get_attribute('class'), tag.text])
driver.quit()
return lst
```
But I get an error. What is the right way of doing this?
The error is as follows:
```
Traceback (most recent call last):
File "vroni.py", line 119, in <module>
op('Aaf')
File "vroni.py", line 104, in op
plags=getplags(cd)
File "vroni.py", line 95, in getplags
frags=driver.find_elements_by_id(re.compile('frag-[0-9]-0'))
File "/home/eadaradhiraj/Documents/webscrape/venv/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 281, in find_elements_by_id
return self.find_elements(by=By.ID, value=id_)
File "/home/eadaradhiraj/Documents/webscrape/venv/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 778, in find_elements
'value': value})['value']
File "/home/eadaradhiraj/Documents/webscrape/venv/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 234, in execute
response = self.command_executor.execute(driver_command, params)
File "/home/eadaradhiraj/Documents/webscrape/venv/local/lib/python2.7/site-packages/selenium/webdriver/remote/remote_connection.py", line 398, in execute
data = utils.dump_json(params)
File "/home/eadaradhiraj/Documents/webscrape/venv/local/lib/python2.7/site-packages/selenium/webdriver/remote/utils.py", line 34, in dump_json
return json.dumps(json_struct)
File "/usr/lib/python2.7/json/__init__.py", line 243, in dumps
return _default_encoder.encode(obj)
File "/usr/lib/python2.7/json/encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python2.7/json/encoder.py", line 270, in iterencode
return _iterencode(o, 0)
File "/usr/lib/python2.7/json/encoder.py", line 184, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: <_sre.SRE_Pattern object at 0xb668b1b0> is not JSON serializable
``` | 2016/07/05 | [
"https://Stackoverflow.com/questions/38212340",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6213939/"
] | Try to remove DownloadCachePluginBootstrap.cs and FilePluginBootstrap.cs; just leave the manual setup inside InitializeLastChance(). It seems that there is a problem with the loading order. | As @Piotr mentioned:
>
> Try to remove DownloadCachePluginBootstrap.cs and FilePluginBootstrap.cs just
> leave manual setup inside InitializeLastChance(). It seems that there is a
> problem with loading order.
>
>
>
That fixed the issue for me as well.
I just want to share my code in the Setup.cs of the iOS project because I think that's a better implementation. I didn't use **InitializeLastChance()**. Instead, I used **AddPluginsLoaders** and **LoadPlugins**.
```
protected override void AddPluginsLoaders(MvxLoaderPluginRegistry registry)
{
registry.Register<MvvmCross.Plugins.File.PluginLoader, MvvmCross.Plugins.File.iOS.Plugin>();
registry.Register<MvvmCross.Plugins.DownloadCache.PluginLoader, MvvmCross.Plugins.DownloadCache.iOS.Plugin>();
base.AddPluginsLoaders(registry);
}
public override void LoadPlugins(IMvxPluginManager pluginManager)
{
pluginManager.EnsurePluginLoaded<MvvmCross.Plugins.File.PluginLoader>();
pluginManager.EnsurePluginLoaded<MvvmCross.Plugins.DownloadCache.PluginLoader>();
base.LoadPlugins(pluginManager);
}
``` | 420 |
44,206,346 | How can I stop the pgAdmin 4 process?
I ran pgAdmin 4 the following way:
`python3 /usr/local/pgAdmin4.py`
My idea is to use Ctrl-C. | 2017/05/26 | [
"https://Stackoverflow.com/questions/44206346",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8071434/"
] | If you are using pgAdmin 4 on macOS or Ubuntu, you can use the system toolbar icon (at the top of the screen) for this. After you start the pgAdmin server, an icon with an elephant head should appear. If you click it, you will have a `Shut down server` option. | You can shut down the server[![enter image description here](https://i.stack.imgur.com/qzpud.png)](https://i.stack.imgur.com/qzpud.png) from the top menu as shown.
Just click Shut down server and it will work.
74,495,864 | I have a huge list of sublists, each sublist consisting of a tuple and an int. Example:
```
[[(1, 1), 46], [(1, 2), 25.0], [(1, 1), 25.0], [(1, 3), 19.5], [(1, 2), 19.5], [(1, 4), 4.5], [(1, 3), 4.5], [(1, 5), 17.5], [(1, 4), 17.5], [(1, 6), 9.5], [(1, 5), 9.5]]
```
I want to create a unique list of those tuples corresponding to the sum of all those integer values using python. For the example above, my desired output looks like this:
```
[[(1, 1), 71], [(1, 2), 44.5], [(1, 3), 24], [(1, 4), 22], [(1, 5), 27], [(1, 6), 9.5]]
```
Could I get some help on how to do this?
I have tried to use dictionaries to solve this problem, but I keep running into errors, as I am not too familiar with how to use them. | 2022/11/18 | [
"https://Stackoverflow.com/questions/74495864",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20543467/"
] | From the helpfile you can read:
>
> If there is a header and the first row contains one fewer field than the number of columns, the first column in the input is used for the row names. Otherwise if **row.names is missing, the rows are numbered**.
>
>
>
That explains the same behavior when you set row.names=NULL or when you use its default value.
You can set row.names as in this example:
```
df <- read.table(text="V1 V2
ENSG00000000003.15 2
ENSG00000000005.6 0
ENSG00000000419.14 21
ENSG00000000457.14 0
ENSG00000000460.17 2
ENSG00000000938.13 0", header=TRUE, row.names=letters[1:6])
```
which displays:
```
V1 V2
a ENSG00000000003.15 2
b ENSG00000000005.6 0
c ENSG00000000419.14 21
d ENSG00000000457.14 0
e ENSG00000000460.17 2
f ENSG00000000938.13 0
``` | The first two executions are functionally the same: when you don't use the row.names parameter of read.table, its value is assumed to be NULL.
The third one fails because `1` is interpreted as a vector, with length equal to the number of rows, filled with the value 1. Hence the error stating that you can't have two rows with the same name.
What you're doing with `row.names=1` is equivalent to trying to do:
```
test <- read.table(text="X Y
1 2
3 4", header=TRUE)
row.names(test) = c(1,1)
```
It gives the same Error.
If you want to name your rows `R1:RX`, why not try something like this:
```
ak1a = read.table("/Users/abhaykanodia/Desktop/smallRNA/AK1a_counts.txt")
row.names(ak1a) = paste("R",1:dim(ak1a)[1],sep="")
``` | 422 |
10,104,805 | I have installed the Python 3.2 package to the
>
> C:\python32
>
>
>
I have also set the paths:
>
> PYTHONPATH | C:\Python32\Lib;C:\Python32\DLLs;C:\Python32\Lib\lib-tk;
>
>
> PATH ;C:\Python32;
>
>
>
I would like to use the "2to3" tool, but CMD does not recognize it.
```
CMD: c:\test\python> 2to3 test.py
```
Should i add an extra path for "2to3" or something?
Thanks | 2012/04/11 | [
"https://Stackoverflow.com/questions/10104805",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1318239/"
] | 2to3 is actually a Python script found in the Tools/scripts folder of your Python install.
So you should run it like this:
```
python.exe C:\Python32\Tools\scripts\2to3.py your-script-here.py
```
See this for more details: <http://docs.python.org/library/2to3.html> | You can set up 2to3.py to run as a command when you type 2to3 by creating a batch file in the same directory as your python.exe file (assuming that directory is already on your Windows path; it doesn't have to be this directory, it's just a convenient, relatively logical spot).
Let's assume you have Python installed in `C:\Python33`. If you aren't sure where your Python installation is, you can find out where Windows thinks it is by typing `where python` from the command line.
You should have `python.exe` in `C:\Python33` and `2to3.py` in `C:\Python33\Tools\Scripts`.
Create a batch file called `2to3.bat` in `C:\Python33\Scripts` and put this line in the batch file
```
@python "%~dp0\..\Tools\Scripts\2to3.py" %*
```
The `%~dp0` is the location of the batch file, in this case `c:\Python33\Scripts` and the `%*` passes all arguments from the command line to the `2to3.py` script. After you've saved the .bat file, you should be able to type `2to3` from the command line and see
```
At least one file or directory argument required.
Use --help to show usage.
```
I have found this technique useful when installing from setup.py, because sometimes the setup script expects 2to3 to be available as a command. | 423 |
8,576,104 | Just for fun, I've been using `python` and `gstreamer` to create simple Linux audio players. The first one was a command-line procedural script that used gst-launch-0.10 playbin to play a webstream. The second version was again procedural but had a GUI and used playbin2 to create the gstreamer pipeline. Now I'm trying to create a fully OOP version.
My first step was to put the gstreamer code in a module of its own and save it as 'player.py':
```
#!/usr/bin/env python
# coding=utf-8
"""player.py"""
import glib, pygst
pygst.require("0.10")
import gst
class Player():
def __init__(self):
self.pipeline = gst.Pipeline("myPipeline")
self.player = gst.element_factory_make("playbin2", "theplayer")
self.pipeline.add(self.player)
self.audiosink = gst.element_factory_make("autoaudiosink", 'audiosink')
self.audiosink.set_property('async-handling', True)
self.player.set_property("uri", "http://sc.grupodial.net:8086")
self.pipeline.set_state(gst.STATE_PLAYING)
if __name__ == "__main__":
Player()
glib.MainLoop().run()
```
(Please note that this is a very simple experimental script that automatically loads and plays a stream. In the final application there will be specific methods of Player to take care of URI/file selection and play/pause/stop reproduction.)
The file was marked as executable and the following command made it run fine, the webstream being loaded and played:
```
$ python player.py
```
However, trying to run it directly (using the shebang directive) returned
```
$ ./player.py
: No such file or directory
```
Anyway, having made it work as a standalone script I wrote the following "main" application code to import the player module and create an instance of Player:
```
#!/usr/bin/env python
# coding=utf-8
"""jukebox3.py"""
import glib
import player
def main():
myplayer = player.Player()
# remove these later:
print myplayer.pipeline
print myplayer.player
print myplayer.audiosink
print myplayer.player.get_property("uri")
print myplayer.pipeline.get_state()
if __name__ == "__main__":
main()
glib.MainLoop().run()
```
Running this main script either through the interpreter or directly produces **no sound at all** though I believe the instance is created because the printing statements output information consistent with playbin2 behavior:
```
/GstPipeline:myPipeline (gst.Pipeline)
/GstPipeline:myPipeline/GstPlayBin2:theplayer (__main__.GstPlayBin2)
/GstAutoAudioSink:audiosink (__main__.GstAutoAudioSink)
http://sc.grupodial.net:8086
(<enum GST_STATE_CHANGE_SUCCESS of type GstStateChangeReturn>, <enum GST_STATE_PLAYING of type GstState>, <enum GST_STATE_VOID_PENDING of type GstState>)
```
BTW, the result is the same using either `glib.MainLoop` or `gtk.main` to create the main loop.
Any suggestions what am I missing? Or, is this scheme possible at all? | 2011/12/20 | [
"https://Stackoverflow.com/questions/8576104",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1106979/"
] | If you use this, you will pass the element value as a parameter.
```
javascript:checkStatus('{$k->bus_company_name}','{$k->bus_id}','{$k->bus_time}',document.getElementById('dt').value)
```
But you can also get it inside the function checkStatus. | Since you're looping through a list of items, I would recommend using the current index at each iteration to create a unique date ID. You can then pass this to your script and get the element's value by ID there:
```
{foreach name = feach key = i item = k from = $allBuses}
{$k->bus_company_name}<br />
A/C {$k->bus_is_ac}<br />
Date : <input type="text" name="date" id="dt_{$i}" />yyyy/mm/dd
<a href="javascript:checkStatus('{$k->bus_company_name}','{$k->bus_id}','{$k->bus_time}','dt_{$i}')">Status</a>
{/foreach}
<script>
function checkStatus(name, id, time, date_id){
var date = document.getElementById(date_id);
if(date){
alert(date.value);
// Do something fancy with the date
}
}
</script>
``` | 426 |
29,476,054 | I have a list of things I want to filter out of a CSV, and I'm trying to figure out a Pythonic way to do it. E.g., this is what I'm doing:
```
with open('output.csv', 'wb') as outf:
with open('input.csv', 'rbU') as inf:
read = csv.reader(inf)
outwriter = csv.writer(outf)
notstrings = ['and', 'or', '&', 'is', 'a', 'the']
for row in read:
(if none of notstrings in row[3])
outwriter(row)
```
I don't know what to put in the parentheses (or if there's a better overall way to go about this). | 2015/04/06 | [
"https://Stackoverflow.com/questions/29476054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2898989/"
] | You can use the [`any()` function](https://docs.python.org/2/library/functions.html#any) to test each of the words in your list against a column:
```
if not any(w in row[3] for w in notstrings):
# none of the strings are found, write the row
```
This will be true if *none* of those strings appear in `row[3]`. It'll match *substrings*, however, so `false-positive` would be a match for `'a' in 'false-positive'`, for example.
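A quick, self-contained illustration of that substring pitfall (demo values only, not from the original thread):

```python
notstrings = ['and', 'or', '&', 'is', 'a', 'the']
cell = "false-positive"

# Substring matching: 'a' occurs inside 'false-positive',
# so this row would be (wrongly) filtered out.
print(any(w in cell for w in notstrings))          # True

# Splitting into whitespace-separated words first avoids the
# accidental hit (though it still ignores attached punctuation).
print(any(w in cell.split() for w in notstrings))  # False
```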
Put into context:
```
with open('output.csv', 'wb') as outf:
with open('input.csv', 'rbU') as inf:
read = csv.reader(inf)
outwriter = csv.writer(outf)
notstrings = ['and', 'or', '&', 'is', 'a', 'the']
for row in read:
if not any(w in row[3] for w in notstrings):
                outwriter.writerow(row)
```
If you need to honour word boundaries then a regular expression is going to be a better idea here:
```
notstrings = re.compile(r'(?:\b(?:and|or|is|a|the)\b)|(?:\B&\B)')
if not notstrings.search(row[3]):
# none of the words are found, write the row
```
I created a [Regex101 demo](https://regex101.com/r/oK1hD2/2) for the expression to demonstrate how it works. It has two branches:
* `\b(?:and|or|is|a|the)\b` - matches any of the words in the list provided they are at the start, end, or between non-word characters (punctuation, whitespace, etc.)
* `\B&\B` - matches the `&` character if at the start, end, or between non-word characters. You can't use `\b` here as `&` is itself not a word character. | You can use sets. In this code, I transform your list into a set. I transform your `row[3]` into a set of words and I check the intersection between the two sets. If there is not intersection, that means none of the words in notstrings are in `row[3]`.
Using sets, you make sure that you match only words and not parts of words.
```
with open('output.csv', 'wb') as outf:
with open('input.csv', 'rbU') as inf:
read = csv.reader(inf)
outwriter = csv.writer(outf)
notstrings = set(['and', 'or', '&', 'is', 'a', 'the'])
for row in read:
if not notstrings.intersection(set(row[3].split(' '))):
                outwriter.writerow(row)
``` | 427 |
62,514,068 | I am trying to develop an AWS Lambda to make a `rollout restart deployment` using the Python client. I cannot find any implementation in the GitHub repo or references. Using -v in `kubectl rollout restart` is not giving me enough hints to continue with the development.
Anyways, it is more related to the python client:
<https://github.com/kubernetes-client/python>
Any ideas? Perhaps I am missing something. | 2020/06/22 | [
"https://Stackoverflow.com/questions/62514068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13791762/"
] | The Python client interacts directly with the Kubernetes API, similar to what `kubectl` does. However, `kubectl` adds some utility commands which contain logic that is not part of the Kubernetes API. Rollout is one of those utilities.
In this case that means you have two approaches. You could reverse engineer the API calls the [kubectl rollout restart](https://github.com/kubernetes/kubectl/blob/master/pkg/cmd/rollout/rollout_restart.go) makes. Pro tip: With go, you can actually import internal Kubectl behaviour and libraries, making this quite easy. So consider writing your lambda in golang.
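For reference, a rough sketch of what that reverse-engineered call looks like from Python. Recent versions of `kubectl rollout restart` work by patching the pod template with a `kubectl.kubernetes.io/restartedAt` annotation. The patch body below uses only the standard library; the commented-out call assumes the official `kubernetes` Python client is installed and configured, and the deployment name and namespace are placeholders (an untested sketch, not part of the original answer):

```python
import json
from datetime import datetime, timezone

def build_restart_patch():
    """Build the same patch body that `kubectl rollout restart` sends."""
    now = datetime.now(timezone.utc).isoformat()
    return {
        "spec": {
            "template": {
                "metadata": {
                    "annotations": {"kubectl.kubernetes.io/restartedAt": now}
                }
            }
        }
    }

patch = build_restart_patch()
print(json.dumps(patch, indent=2))

# With the official client (pip install kubernetes) this would be applied as:
#   from kubernetes import client, config
#   config.load_incluster_config()  # or config.load_kube_config() locally
#   client.AppsV1Api().patch_namespaced_deployment(
#       name="my-deployment", namespace="default", body=patch)
```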
Alternatively, you can have your Lambda call the kubectl binary (using the process exec libraries in Python). However, this does mean you need to include the binary in your lambda in some way (either by uploading it with your lambda or by building a lambda layer containing `kubectl`). | @Andre Pires, it can be done like this:
```
data := fmt.Sprintf(`{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"%s"}}}},"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxUnavailable":"%s","maxSurge": "%s"}}}`, time.Now().String(), "25%", "25%")
newDeployment, err := clientImpl.ClientSet.AppsV1().Deployments(item.Pod.Namespace).Patch(context.Background(), deployment.Name, types.StrategicMergePatchType, []byte(data), metav1.PatchOptions{FieldManager: "kubectl-rollout"})
``` | 428 |
51,314,875 | Seems fairly straight forward but whenever I try to merely import the module I get this:
```
from pptx.util import Inches
from pptx import Presentation
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\parts\image.py in <module>()
12 try:
---> 13 from PIL import Image as PIL_Image
14 except ImportError:
~\AppData\Local\Continuum\anaconda3\lib\site-packages\PIL\Image.py in <module>()
59 # and should be considered private and subject to change.
---> 60 from . import _imaging as core
61 if PILLOW_VERSION != getattr(core, 'PILLOW_VERSION', None):
ImportError: DLL load failed: The specified module could not be found.
During handling of the above exception, another exception occurred:
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-82a968e5e132> in <module>()
----> 1 from pptx.util import Inches
2 from pptx import Presentation
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\__init__.py in <module>()
11 del sys
12
---> 13 from pptx.api import Presentation # noqa
14
15 from pptx.opc.constants import CONTENT_TYPE as CT # noqa: E402
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\api.py in <module>()
15
16 from .opc.constants import CONTENT_TYPE as CT
---> 17 from .package import Package
18
19
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\package.py in <module>()
14 from .opc.packuri import PackURI
15 from .parts.coreprops import CorePropertiesPart
---> 16 from .parts.image import Image, ImagePart
17 from .parts.media import MediaPart
18 from .util import lazyproperty
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\parts\image.py in <module>()
13 from PIL import Image as PIL_Image
14 except ImportError:
---> 15 import Image as PIL_Image
16
17 from ..compat import BytesIO, is_string
ModuleNotFoundError: No module named 'Image'
```
Can anyone help me to overcome this error, or possibly show me a better library to accomplish this? I'm more than happy to provide any info that would help someone to help me debug this.
I know very little about modules. Aside from using the Anaconda prompt, I know nothing. | 2018/07/12 | [
"https://Stackoverflow.com/questions/51314875",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9459261/"
] | I've finally figured it out by creating a small app and trying to reproduce it. As Dmitry and Paulo have pointed out, it should work, and it does work for any new project; in my case, however, the project is 10 years old and has lots of legacy configurations.
**TL;DR:** The `async`/`await` keywords do not work very well (the `HttpContext.Current` will be null after calling `await`) if this setting is **not** present in the web.config:
```
<httpRuntime targetFramework="4.6.1" />
```
That is a shortcut for a bunch of settings, including this one (which is the one I care here):
```
<configuration>
<appSettings>
<add key="aspnet:UseTaskFriendlySynchronizationContext" value="true" />
</appSettings>
</configuration>
```
Everything is explained in detail here: <https://blogs.msdn.microsoft.com/webdev/2012/11/19/all-about-httpruntime-targetframework/>
For reference, it says:
>
> **<add key="aspnet:UseTaskFriendlySynchronizationContext" value="true" />**
>
>
> Enables the new await-friendly asynchronous pipeline that was
> introduced in 4.5. Many of our synchronization primitives in earlier
> versions of ASP.NET had bad behaviors, such as taking locks on public
> objects or violating API contracts. In fact, ASP.NET 4’s
> implementation of SynchronizationContext.Post is a blocking
> synchronous call! The new asynchronous pipeline strives to be more
> efficient while also following the expected contracts for its APIs.
> The new pipeline also performs a small amount of error checking on
> behalf of the developer, such as detecting unanticipated calls to
> async void methods.
>
>
> Certain features like WebSockets require that this switch be set.
> Importantly, the behavior of async / await is undefined in ASP.NET
> unless this switch has been set. (Remember: setting `<httpRuntime
> targetFramework="4.5" />` is also sufficient.)
>
>
>
If that setting is not present at all, then version 4.0 is assumed and it works in 'quirks' mode:
>
> If there is no <httpRuntime targetFramework> attribute present in Web.config, we assume that the application wanted 4.0 quirks behavior.
>
>
> | For retrieving files in `ASP.NET Core` try using [`IFileProvider`](https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.fileproviders.ifileprovider) instead of `HttpContext` - see [File Providers in ASP.NET Core](https://learn.microsoft.com/en-us/aspnet/core/fundamentals/file-providers) documentation for more details about configuring and injecting it via `DI`.
If that is the `POST` controller action to upload multiple files and receive other data - you can do it this way. **Below for demo purposes I use `View` but data can just go from anywhere as API POST request**.
**View**
```
@model MyNamespace.Models.UploadModel
<form asp-controller="MyController" asp-action="Upload" enctype="multipart/form-data" method="post">
<input asp-for="OtherProperty">
<input name="Files" multiple type="file">
<button type="submit" class="btn btn-success">Upload</button>
</form>
```
**Model** - note that files are passed as [`IFormFile`](https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.http.iformfile) objects
```
public class UploadModel
{
public List<IFormFile> Files { get; set; }
public string OtherProperty { get; set; }
}
```
**Controller**
```
[HttpGet]
public IActionResult Upload()
{
return View(new UploadModel());
}
[HttpPost]
public async Task<IActionResult> Index(UploadModel model)
{
var otherProperty = model.OtherProperty;
var files = new Dictionary<string, string>();
foreach (IFormFile file in model.Files)
{
using (var reader = new StreamReader(file.OpenReadStream()))
{
string content = await reader.ReadToEndAsync();
files.Add(file.Name, content);
// Available file properties:
// file.FileName
// file.ContentDisposition
// file.ContentType
// file.Headers
// file.Length
// file.Name
// You can copy file to other stream if needed:
// file.CopyTo(new MemoryStream()...);
}
}
}
``` | 429 |
35,796,968 | I have a python GUI application. And now I need to know what all libraries the application links to. So that I can check the license compatibility of all the libraries.
I have tried using strace, but strace seems to report all the packages even if they are not used by the application.
And, I tried python ModuleFinder but it just returns the modules that are inside python2.7 and not system level packages that are linked.
So is there any way I can get all the libraries that are linked from my application? | 2016/03/04 | [
"https://Stackoverflow.com/questions/35796968",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2109788/"
] | You can try the library
<https://github.com/bndr/pipreqs>
which I found by following the guide
<https://www.fullstackpython.com/application-dependencies.html>
---
The library `pipreqs` is pip-installable and automatically generates the file `requirements.txt`.
It contains all the imported libraries, with their versions, that you are using in the virtualenv or in the system Python installation.
Just type:
```
pip install pipreqs
pipreqs /home/project/location
```
It will print:
```
INFO: Successfully saved requirements file in /home/project/location/requirements.txt
```
In addition it is compatible with the *pip install -r* command: if you need to create a venv of your project, or update your current python version with compatible libraries, you just need to type:
```
pip install -r requirements.txt
```
I had the same problem and this library solved it for me. I'm not sure if it works for multiple layers of dependencies, i.e. in case you have nested levels of dependent libraries.
-- Edit 1:
If looking for a more sophisticated **version manager**, please also consider pyenv <https://github.com/pyenv/pyenv>. It wraps `virtualenv`, producing some improvements over the version specification that is created by `pipreqs`.
-- Edit 2:
If, after creating the file with the dependency libraries of your module with `pipreqs`, you want to pin the whole dependency tree, take a look at `pip-compile`. It figures out a way to get the dependencies of your top-level libraries, and it pins them in a new requirements file, indicating the dependency tree.
-- Edit 3:
If you want to split your dependency tree into different files (e.g. base, test, dev, docs) and have a way of managing the dependency tree, please take a look at `pip-compile-multi`. | Install yolk for python2 with:
```
pip install yolk
```
Or install yolk for python3 with:
```
pip install yolk3k
```
Call the following to get the list of eggs in your environment:
```
yolk -l
```
Alternatively, you can use [snakefood](http://furius.ca/snakefood/) for graphing your dependencies, as answered in [this question](https://stackoverflow.com/questions/508277/is-there-a-good-dependency-analysis-tool-for-python).
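Since the original question was ultimately about checking license compatibility, note that the installed distributions' license metadata can also be read programmatically. A sketch using only the standard library (Python 3.8+); the output depends entirely on what is installed, and the "License" field is often missing or vague:

```python
from importlib import metadata

# Map each installed distribution to whatever license string its
# metadata declares (falling back to a placeholder when absent).
licenses = {}
for dist in metadata.distributions():
    name = dist.metadata["Name"] or "UNKNOWN"
    licenses[name] = dist.metadata["License"] or "UNKNOWN"

for name, lic in sorted(licenses.items()):
    print(f"{name}: {lic}")
```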
You could try going into the site-packages folder where the unpacked eggs are stored, and running this:
```
ls -l */LICENSE*
```
That will give you a list of the licence files for each project (if they're stored in the root of the egg, which they usually are). | 430 |
36,075,407 | I'm developing a Python Flask app.
I have a problem with mysqldb.
If I type 'import MySQLdb' in the Python console,
it shows "ImportError: No module named 'MySQLdb'".
On my computer MySQL-python is installed, and the app is running on <http://127.0.0.1:5000/>
How can I solve this problem? | 2016/03/18 | [
"https://Stackoverflow.com/questions/36075407",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5736099/"
] | If you are using Python **2.x**, one of the following commands will install `mysqldb` on your machine:
```
pip install mysql-python
```
or
```
easy_install mysql-python
``` | **for python 3.x install**
pip install mysqlclient | 433 |
37,691,320 | Im very new to `c` and am trying to make a `while` loop that checks if the parameter is less than or equal to a certain number but also if it is greater than or equal to a different number as well. I usually code in `python` and this is example of what I'm looking to do in `c`:
`while(8 <= x <= 600)` | 2016/06/08 | [
"https://Stackoverflow.com/questions/37691320",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5355216/"
] | ```
while (x >= 8 && x <= 600){
}
``` | The relational and equality operators (`<`, `<=`, `>`, `>=`, `==`, and `!=`) don't work like that in C. The expression `a <= b` will evaluate to 1 if the condition is true, 0 otherwise. The operator is *left-associative*, so `8 <= x <= 600` will be evaluated as `(8 <= x) <= 600`. `8 <= x` will evaluate to 0 or 1, both of which are less than 600, so the result of the expression is always 1 (true).
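The difference is easy to see from Python (the asker's usual language), which chains comparisons natively; parenthesizing the way C's left-associative evaluation works reproduces the always-true bug described above:

```python
x = 700  # clearly outside 8..600

# Python's chained form does what the asker expects:
print(8 <= x <= 600)        # False

# C's left-associative reading: (8 <= x) is True (i.e. 1),
# and 1 <= 600 holds, so the whole test is always "true".
print((8 <= x) <= 600)      # True
```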
To check if `x` falls within a range of values, you have to do two separate comparisons: `8 <= x && x <= 600` (or, to test that it falls outside the range, `x < 8 || x > 600`) | 435 |
69,090,032 | Using Python.
I have two data frames
df1:
```
email timezone country_app_web
0 nhvfstdfg@vxc.com Europe/Paris NaN
1 taifoor096@gmail.com NaN FR
2 nivo1996@gmail.com US/Eastern NaN
3 jorgehersan90@gmail.com NaN UK
4 syeager2@cox.net NaN NaN
```
df2:
```
email country
0 008023@abpat.qld.edu.au AU
1 0081634947@fanaticsgsiorder.com AU
2 008farhan05@gmail.com ID
3 00bronzy@gmail.com AU
4 00monstar@gmail.com AU
```
I want to check this using Python and add a "country" column to df1.
Problem 1: check if an email in df1 is present in df2; if yes, return the value of the "country" column present in df2 for the matched email in df1.
Problem 2: for the remaining unmatched emails, check if country\_app\_web in df1 has any value corresponding to the unmatched email; if yes, return the country\_app\_web value into the country column of df1.
Problem 3: similarly, for the remaining unmatched emails after problem 2, check if the timezone in df1 has any value corresponding to the unmatched email; if yes, return the timezone value into the country column of df1 | 2021/09/07 | [
"https://Stackoverflow.com/questions/69090032",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16677735/"
] | If you want to remove all objects in `products`, use this:
```
db.collection.update({},
{
$set: {
products: {}
}
})
```
<https://mongoplayground.net/p/aBSnpRhblxt>
If you want to delete a specific key (gCx5qSTLvdWeel8E2Yo7m) from `products`, use this:
```
db.collection.update({},
{
$unset: {
"products.gCx5qSTLvdWeel8E2Yo7m": undefined
}
})
```
<https://mongoplayground.net/p/z6xRyh3oJrs> | Thank you for your answer Mohammad, but I think that works for MongoDB; in Mongoose, we need to set the value to 1 to remove the item with unset.
Here is my working example
```js
const { ids } = req.body;
try {
const order = await Order.findById(req.params.id).populate('user', 'name').exec();
if (!order) {
return res.status(404).json({ errors: [{ msg: 'Vous ne pouvez pas fermer une commande déjà fermée' }] });
}
console.log(ids);
const un: {
[key:string]: number,
} = {};
if (ids) {
for (let i = 0; i < ids.length; i += 1) {
const e = ids[i];
un[`products.${e}`] = 1;
}
}
console.log(un);
const changedOrder = await Order.updateOne({ id: req.params.id }, {
$unset: un,
}, { new: true }).populate('user', 'name');
console.log(changedOrder);
res.json(changedOrder);
} catch (err) {
console.log(err);
res.status(500).json({ errors: [{ msg: 'Server Error' }] });
}
``` | 438 |
61,746,984 | I have a script which has been simplified to provide me with a sequence of numbers.
I have run this under Windows 10, using both Python 3.6 and Python 3.8.
If the script is run with the line pal\_gen.send(10 \*\* (digits)) commented out, I get what I expected. But I want to change the sequence when num % 10 == 0.
The script:
```
def infinite_pal():
num = 0
while True:
#print(f"num= {str(num)}")
if num % 10 ==0:
#if num==20: print(f"Have num of {str(num)}")
i = (yield num)
#if num==20: print(i)
if i is not None:
num = i
#print(f"i = {str(i)} num= {str(num)}")
num += 1
if num==112: break
pal_gen = infinite_pal()
for i in pal_gen:
print(i)
digits = len(str(i))
#print(f"result = {str(10 ** (digits))}")
pal_gen.send(10 ** (digits))
```
gives 0, 30
I would have expected: 0, 10, 20, 20, 20 etc.
When num has the value of 20, the yield expression appears to be called, but the value 20 is never sent to the calling for i in pal\_gen loop. The num value does get up to 30 and is yielded. 30 should not appear.
Have I totally misunderstood the effect of .send?
Many thanks. I can do this another way but I am puzzled why the above does not work.
From an earlier question, [python generator yield statement not yield](https://stackoverflow.com/questions/59327603/python-generator-yield-statement-not-yield), I tried - but it still does not give what I would expect:
```
def infinite_pal():
num = 0
while True:
if num % 10 ==0:
#if num==20: print(f"Have num of {str(num)}")
i = (yield num)
#if num==20: print(i)
if i is not None:
num = i
#print(f"i = {str(i)} num= {str(num)}")
num += 1
pal_gen = infinite_pal()
i = pal_gen.send(None)
while True:
print(i)
digits = len(str(i))
#print(f"result = {str(10 ** (digits))}")
i=pal_gen.send(10 ** (digits))
if i>200: break
``` | 2020/05/12 | [
"https://Stackoverflow.com/questions/61746984",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5467308/"
] | I don't know why you expect the result `0, 10, 20, 20, 20` if you send `10`, `100`, `1000`, `10000`.
In the second version you have to send
```
i = pal_gen.send(10*(digits-1))
```
but it will give an endless stream of `20`, so if you expect other values it will need totally different code.
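The underlying gotcha is that `send()` both resumes the generator and returns the next yielded value, so mixing a plain `for` loop with `send()` silently consumes every second yield. A minimal standalone demonstration (illustrative code, not the asker's):

```python
def counter():
    n = 0
    while True:
        received = yield n
        if received is not None:
            n = received
        n += 1

gen = counter()
print(next(gen))     # 0  - the first yield goes to next()
print(gen.send(10))  # 11 - send() resumes AND returns the next yield
print(next(gen))     # 12
```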
---
```
def infinite_pal():
num = 0
while True:
if num % 10 ==0:
i = yield num
if i is not None:
num = i
num += 1
pal_gen = infinite_pal()
i = pal_gen.send(None)
while True:
print(i)
digits = len(str(i))
i = pal_gen.send(10*(digits-1))
    ## `i` will never be bigger than `20`, so the next lines are useless
#if i > 200:
# break
``` | Many thanks for the above comments. In case anyone else is new to generators in Python, I make the following comments. The first example came from a web site (2 sites in fact) that supposedly explained Python generators. I appreciate there was an error in the .send parameter, but my real concern was why the first approach did not work. I made the comment:
"When num has the value of 20 in the generator, the yield expression appears to be called, but the value 20 is never sent to the calling for i in pal\_gen loop", i.e. print(i) never displayed 20.
I know that the generator yielded 20 because when I uncommented the line in the generator:
```
#if num==20: print(f"Have num of {str(num)}")
```
20 was displayed.
At the time I did not realise that .send also gets the yielded values, so the variable i in print(i) in the for loop only received every second yielded value.
The second example solved this problem although the calculation for .send parameter was incorrect. | 439 |
45,851,791 | I am running the docker image for snappydata v0.9. From inside that image, I can run queries against the database. However, I cannot do so from a second server on my machine.
I copied the python files from snappydata to the installed pyspark (editing snappysession to SnappySession in the imports) and (based on the answer to [Unable to connect to snappydata store with spark-shell command](https://stackoverflow.com/questions/38921733/unable-to-connect-to-snappydata-store-with-spark-shell-command/38926794#38926794)), I wrote the following script (it is a bit of cargo-cult programming as I was copying from the python code in the docker image -- suggestions to improve it are welcome):
```
import pyspark
from pyspark.context import SparkContext
from pyspark.sql import SparkSession, SQLContext
from pyspark.sql.snappy import SnappyContext
from pyspark.storagelevel import StorageLevel
SparkContext._ensure_initialized()
spark = SparkSession.builder.appName("test") \
.master("local[*]") \
.config("snappydata.store.locators", "localhost:10034") \
.getOrCreate()
spark.sql("SELECT col1, min(col2) from TABLE1")
```
However, I get a traceback with:
```
pyspark.sql.utils.AnalysisException: u'Table or view not found: TABLE1
```
I have verified with wireshark that my program is communicating with the docker image (TCP follow stream shows the traceback message and a scala traceback). My assumption is that the permissions in the snappydata cluster is set wrong, but grepping through the logs and configuration did not show anything obvious.
How can I proceed?
-------- Edit 1 ------------
The new code that I am running (still getting the same error), incorporating the suggestions for the change in the config and ensuring that I get a SnappySession is:
```
from pyspark.sql.snappy import SnappySession
snappy = SnappySession.builder.appName("test") \
.master("local[*]") \
.config("spark.snappydata.connection", "localhost:1527") \
.getOrCreate()
snappy.sql("SELECT col1, min(col2) from TABLE1")
``` | 2017/08/24 | [
"https://Stackoverflow.com/questions/45851791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/767565/"
] | Try this:
```
from random import randint
print( "You rolled " + ",".join(str(randint(1,6)) for j in range(6)) )
``` | If you're using Python 3, which it appears you are, you can simply print "You rolled " and then the numbers one at a time, with the print argument `end` set to an empty string:
```
import random

print("You rolled ", end='')
for i in range(6):
    print(str(random.randint(1, 6)), end='')
    if i < 5:
        print(", ", end='')
``` | 440 |
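A middle ground between the two answers above is to build the list of rolls first and let `str.join` insert the separators, so no trailing-comma bookkeeping is needed (a small sketch; the variable names are illustrative):

```python
import random

# roll six dice, then format them in one go
rolls = [random.randint(1, 6) for _ in range(6)]
line = "You rolled " + ", ".join(str(r) for r in rolls)
print(line)
```

Keeping the rolls in a list also makes them available for later use (totals, scoring, and so on), which the print-as-you-go loop does not.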
10,656,147 | I figured out how to run my Django application via `sudo python /home/david/myproject/manage.py runserver 68.164.125.221:80`. However, after I quit terminal, the server stops running.
I tried to run this process in the background, but the server just shuts down quickly after I execute `sudo python /home/david/myproject/manage.py runserver 68.164.125.221:80 &`.
How do I keep my Django application running even after I quit my ssh session in terminal?
PS - Sorry if this question strikes you as elementary. Such silliness ensues when a front-end javascript programmer must turn into a server administrator at break-neck speed. | 2012/05/18 | [
"https://Stackoverflow.com/questions/10656147",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/795319/"
] | Meet [screen](http://www.gnu.org/software/screen/).
Connect through ssh, start screen. This opens a virtual console emulator on top of the one provided by ssh. Start your server there.
Then press Ctrl-a, then d. This detaches the screen session, keeping it running in the background.
To [R]e-attach to it, use screen -r.
If screen is not installed and you can't install it, you can also start an application in the background by adding a & to the command, as you tried. But then you should not close the terminal window; just disconnect with the bash command exit, or Ctrl-d.
The advantage of screen is that you can still read the output from the server, in case there is an error or anything.
Screen is a really powerful tool, with many more commands. You can add a new virtual window with Ctrl-a, then c (for Create); switch through windows with Ctrl-a, then n (next) or p (previous); and so on.
But you need it to be installed to use it. Since you seem to have root access, this shouldn't be a problem.
EDIT: [tmux](https://tmux.github.io/) is another great solution for the same use-case. | Use `screen` to create a new virtual window, and run the server there.
```
$ screen
$ python manage.py runserver
```
You will see that Django server has started running.
Now press `Ctrl+A` and then press the `D` key to detach from that screen. It will say:
```
$ [detached from ###.pts-0.hostname]
```
You can now safely logout from your terminal, log back in to your terminal, do other bits of coding in other directories, go for a vacation, do whatever you want.
---
To return to the screen that you have detached from,
```
$ screen -r
```
To kill the django server now, simply press `Ctrl+C` like you would've done normally.
---
To `terminate` this current screen instead of `detaching` from this screen, use `Ctrl+D`. It will say:
```
$ [screen is terminating]
$
``` | 442 |
34,086,062 | Today I updated Elasticsearch from 1.6 to 2.1, because 1.6 is a vulnerable version. After this update my website is not working and gives this error:
```
Traceback (most recent call last):
File "manage.py", line 8, in <module>
from app import app, db
File "/opt/project/app/__init__.py", line 30, in <module>
es.create_index(app.config['ELASTICSEARCH_INDEX'])
File "/usr/local/lib/python2.7/dist-packages/pyelasticsearch/client.py", line 93, in decorate
return func(*args, query_params=query_params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pyelasticsearch/client.py", line 1033, in create_index
query_params=query_params)
File "/usr/local/lib/python2.7/dist-packages/pyelasticsearch/client.py", line 285, in send_request
self._raise_exception(status, error_message)
File "/usr/local/lib/python2.7/dist-packages/pyelasticsearch/client.py", line 299, in _raise_exception
raise error_class(status, error_message)
pyelasticsearch.exceptions.ElasticHttpError: (400, u'index_already_exists_exception')
make: *** [run] Error 1
```
The code is this:
```
redis = Redis()
es = ElasticSearch(app.config['ELASTICSEARCH_URI'])
try:
    es.create_index(app.config['ELASTICSEARCH_INDEX'])
except IndexAlreadyExistsError, e:
    pass
```
Where is this wrong? What is new in this version? | 2015/12/04 | [
"https://Stackoverflow.com/questions/34086062",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5544303/"
] | `jeuResultats.next();` moves your cursor to the next row. The cursor starts before the first row: the first call to `.next()` reads the first row, and when you call it again it tries to read the 2nd row, which does not exist.
*Some additional hints, not directly related to the question:*
1. The Java docs are a good place to start ([Java 8 ResultSet](http://docs.oracle.com/javase/8/docs/api/java/sql/ResultSet.html)); for example, perhaps the `ResultSet.first()` method is better suited to your use.
2. Since you are working with resources, take a look at the try-with-resources syntax. The [official tutorials](https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) are a good starting point for that.
3. Also take a look at `PreparedStatement` vs `Statement`. Again, the [official guide](https://docs.oracle.com/javase/tutorial/jdbc/basics/prepared.html) is a good place to start. | Make the change below in your code. Currently the extra `next()` call shifts the cursor to fetch the data at the 1st index, whereas the data is at the 0th index:
```
boolean result = false;
try {
    result = jeuResultats.next();
} catch (SQLException e) {
    e.printStackTrace();
}
if (!result) {
    loadJSP("/index.jsp", request, reponse);
} else {
    loadJSP("/views/menu.jsp", request, reponse);
}
``` | 445 |
48,074,568 | as part of Unity's ML Agents images fed to a reinforcement learning agent can be converted to greyscale like so:
```
def _process_pixels(image_bytes=None, bw=False):
    s = bytearray(image_bytes)
    image = Image.open(io.BytesIO(s))
    s = np.array(image) / 255.0
    if bw:
        s = np.mean(s, axis=2)
        s = np.reshape(s, [s.shape[0], s.shape[1], 1])
    return s
```
As I'm not familiar enough with Python, and especially numpy, how can I get the dimensions right for plotting the reshaped numpy array? To my understanding, the shape is based on the image's width, height and number of channels. So after reshaping there is only one channel left, holding the greyscale value. I just haven't found a way to plot it yet.
Here is a link to the mentioned code of the [Unity ML Agents repository](https://github.com/Unity-Technologies/ml-agents/blob/master/python/unityagents/environment.py#L176).
That's how I wanted to plot it:
```
plt.imshow(s)
plt.show()
``` | 2018/01/03 | [
"https://Stackoverflow.com/questions/48074568",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3515869/"
] | Won't just doing this work?
```
plt.imshow(s[..., 0])
plt.show()
```
Explanation
`plt.imshow` expects either a 2-D array with shape `(x, y)`, which it maps through a colormap (pass `cmap='gray'` for grayscale), or an array with dimensions `(x, y, 3)` (treated as RGB) or `(x, y, 4)` (treated as RGBA). The array you had was `(x, y, 1)`. To get rid of the last dimension we can use NumPy indexing: `s[..., 0]` says, "take all other dimensions as-is, but along the last dimension, get the slice at index 0". | It looks like the grayscale version has an extra single dimension at the end. To plot, you just need to collapse it, e.g. with [`np.squeeze`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.squeeze.html):
```
plt.imshow(np.squeeze(s))
``` | 448 |
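Both answers can be sanity-checked on a tiny array without plotting anything (a sketch; the shape `(4, 5, 1)` is just an example standing in for the real image):

```python
import numpy as np

s = np.zeros((4, 5, 1))      # stands in for the (height, width, 1) grayscale array

flat = s[..., 0]             # integer indexing drops the trailing channel axis
squeezed = np.squeeze(s)     # squeeze removes every length-1 axis

print(flat.shape, squeezed.shape)
```

Both produce a plain `(4, 5)` array that `plt.imshow` accepts; the difference is only that `np.squeeze` would also remove any *other* length-1 axes, which does not matter for this image shape.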
38,510,140 | What is the difference between a list & a stack in python?
I have read the explanation in the Python documentation, but the two things seem to be the same:
```
>>> stack = [3, 4, 5]
>>> stack.append(6)
>>> stack.append(7)
>>> stack
[3, 4, 5, 6, 7]
>>> stack.pop()
7
>>> stack
[3, 4, 5, 6]
>>> stack.pop()
6
>>> stack.pop()
5
>>> stack
[3, 4]
``` | 2016/07/21 | [
"https://Stackoverflow.com/questions/38510140",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6621144/"
] | A stack is a *data structure concept*. The documentation uses a Python `list` object to implement one. That's why that section of the tutorial is named *Using Lists as Stacks*.
Stacks are just things you add stuff to, and when you take stuff away from a stack again, you do so in reverse order, first in, last out style. Like a stack of books or hats or... *beer crates*:
[![beer crate stacking](https://i.stack.imgur.com/9jJBd.jpg)](https://www.youtube.com/watch?v=9SReWtHt68A)
See the [Wikipedia explanation](https://en.wikipedia.org/wiki/Stack_(abstract_data_type)).
Lists on the other hand are far more versatile, you can add and remove elements anywhere in the list. You wouldn't try that with a stack of beer crates with someone on top!
You could implement a stack with a custom class:
```
from collections import namedtuple
class _Entry(namedtuple('_Entry', 'value next')):
    def _repr_assist(self, postfix):
        r = repr(self.value) + postfix
        if self.next is not None:
            return self.next._repr_assist(', ' + r)
        return r

class Stack(object):
    def __init__(self):
        self.top = None

    def push(self, value):
        self.top = _Entry(value, self.top)

    def pop(self):
        if self.top is None:
            raise ValueError("Can't pop from an empty stack")
        res, self.top = self.top.value, self.top.next
        return res

    def __repr__(self):
        if self.top is None:
            return '[]'
        return '[' + self.top._repr_assist(']')
```
Hardly a list in sight (somewhat artificially), but it is definitely a stack:
```
>>> stack = Stack()
>>> stack.push(3)
>>> stack.push(4)
>>> stack.push(5)
>>> stack
[3, 4, 5]
>>> stack.pop()
5
>>> stack.push(6)
>>> stack
[3, 4, 6]
>>> stack.pop()
6
>>> stack.pop()
4
>>> stack.pop()
3
>>> stack
[]
```
The Python standard library doesn't come with a specific stack datatype; a `list` object does just fine. Just limit any use to `list.append()` and `list.pop()` (the latter with no arguments) to treat a list *as* a stack.
You could also use the [`collections.deque()` type](https://docs.python.org/3/library/collections.html#collections.deque); it is usually slightly faster than a list for the typical patterns seen when using either as a stack. However, like lists, a deque can be used for other purposes too. | A "stack" is a specific application of `list`, with operations limited to appending (pushing) to and popping (pulling) from the end. | 449 |
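For what it's worth, the `collections.deque` usage mentioned above mirrors the list pattern exactly (a minimal sketch):

```python
from collections import deque

stack = deque()
for item in (3, 4, 5):
    stack.append(item)    # push onto the top
top = stack.pop()         # pop: last in, first out
print(top, list(stack))
```

Only `append()` and `pop()` are used, so swapping a `list` for a `deque` (or back) requires no other code changes.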
18,971,162 | I am trying to create a simple python calculator for an assignment. The basic idea of it is simple and documented all over online, but I am trying to create one where the user actually inputs the operators. So instead of printing 1: addition, 2: subtraction, etc, the user would select + for addition, - for subtraction, etc. I am also trying to make Q or q quit the program.
Any ideas for how to allow the user to type operators to represent the operation?
Note: I know I still need to define my remainder operation.
```
import math
loop = 1
choice = 0
while loop == 1:
    print("your options are:")
    print("+ Addition")
    print("- Subtraction")
    print("* Multiplication")
    print("/ Division")
    print("% Remainder")
    print("Q Quit")
    print("***************************")
    choice = str(input("Choose your option: "))
    if choice == +:
        ad1 = float(input("Add this: "))
        ad2 = float(input("to this: "))
        print(ad1, "+", ad2, "=", ad1 + ad2)
    elif choice == -:
        su2 = float(input("Subtract this: "))
        su1 = float(input("from this: "))
        print(su1, "-", su2, "=", su1 - su2)
    elif choice == *:
        mu1 = float(input("Multiply this: "))
        mu2 = float(input("with this: "))
        print(mu1, "*", mu2, "=", mu1 * mu2)
    elif choice == /:
        di1 = float(input("Divide this: "))
        di2 = float(input("by this: "))
        print(di1, "/", di2, "=", di1 / di2)
    elif choice == Q:
        loop = 0
        print("Thank-you for using calculator")
``` | 2013/09/24 | [
"https://Stackoverflow.com/questions/18971162",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2809161/"
] | First off, you don't need to assign `choice` to zero.
Second, you have your code right, but you need to put quotes around the operators in your if statements, like this:
```
if choice == '+':
```
to show that you are checking for a string.
Make your loop like this:
```
while 1:  # or while True:
    # do stuff
    elif choice == 'Q':  # quotes around Q
        break  # use the `break` keyword to end the while loop
```
then, you don't need to assign `loop` at the top of your program | You should try replacing `if choice == +` by `if choice == "+"`.
What you're getting from the input is actually a string, which means it can contain any character, even one that represents an operator. | 454 |
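Once the operators are quoted strings, they can also serve as dictionary keys, which replaces the long if/elif chain entirely (a sketch using the standard `operator` module; the hard-coded `choice` and operands are illustrative stand-ins for user input):

```python
import operator

# map the operator characters the user types to functions
ops = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

choice = '*'   # stands in for input("Choose your option: ")
if choice == 'Q':
    print("Thank-you for using calculator")
elif choice in ops:
    result = ops[choice](2.0, 3.0)
    print(result)
```

Adding the remainder operation then becomes a one-line change (`ops['%'] = operator.mod`) instead of another elif branch.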