content (stringlengths 85-101k) | title (stringlengths 0-150) | question (stringlengths 15-48k) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (stringlengths 35-137)
---|---|---|---|---|---|---|---|---
Q:
What is the time complexity of my algorithm using a for loop?
I have trouble with that exercise. How can I write it in Python, and how do I get the time complexity? Should I use a while loop?
Write an algorithm that returns the smallest value in the array A[1 . . . n]. Use a while loop. What is the time complexity of your algorithm?
list1 = []
num = int(input("Enter number of elements in list: "))
for i in range(1, num + 1):
ele = int(input("Enter elements: "))
list1.append(ele)
print("Smallest element is:", min(list1))
A:
How to get the time complexity?
Most of the time when someone asks you for time complexity they aren't asking for the exact time complexity, they are asking for an approximate estimate in terms of Big O Notation. I highly recommend you check out this wiki, but in short, Big O Notation asks "For 'n' elements, how many steps will your algorithm take in the worst case?"
So if we look at your algorithm, we can see that there is a list of 'n' elements. In the worst case the smallest element is the last element of the list, so your algorithm would have to search through all 'n' elements to find the lowest number. In this case your Big O would be O(n), or linear time: as 'n' grows, your algorithm will take a linearly increasing amount of time to execute.
Should I use a while loop?
Technically speaking it doesn't matter too much, but it sounds like they may be asking you to use a while loop so you can get more experience using while loops.
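For example, a minimal while-loop version of the same search might look like the sketch below; the input list is just an illustrative placeholder for whatever data you read in. It still makes a single pass over all 'n' elements, so it is O(n).
list1 = [5, 3, 9, 1]               # illustrative input
smallest = list1[0]
i = 1
while i < len(list1):              # one pass over all n elements -> O(n)
    if list1[i] < smallest:
        smallest = list1[i]
    i += 1
print("Smallest element is:", smallest)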
| What is the time complexity of my algorithm using a for loop? | I have trouble with that exercise, how can I write it in Python, and get time complexity? Should use a while loop?
Write an algorithm that returns the smallest value in the array A[1 . . . n]. Use a while loop. What is the time complexity of your algorithm?
list1 = []
num = int(input("Enter number of elements in list: "))
for i in range(1, num + 1):
ele = int(input("Enter elements: "))
list1.append(ele)
print("Smallest element is:", min(list1))
| [
"How to get the time complexity?\nMost of the time when someone is asking you for time complexity they aren't asking for exact time complexity, they are asking for an approximate estimate in terms of Big O Notation. I highly recommend you check out this wiki but in short, Big O Notation asks \"For 'n' elements how many steps will your algorithm take in the worst case?\"\nSo if we look at your algorithm, we can see that there is a list of 'n' elements. Now in the worst case the smallest element is the last element of the list so in this case your algorithm would have to search through all 'n' elements to find the lowest number. In this case your Big O would be O(n) or linear time. As 'n' grows your algorithm time will take a linearly increasing amount of time to execute.\nShould I use while loop?\nTechnically speaking it doesn't matter too much, but it sounds like they may be asking you to use a while loop so you can get more experience using while loops.\n"
] | [
2
] | [] | [] | [
"algorithm",
"python"
] | stackoverflow_0074677687_algorithm_python.txt |
Q:
Further optimizing the ISING model
I've implemented the 2D ISING model in Python, using NumPy and Numba's JIT:
from timeit import default_timer as timer
import matplotlib.pyplot as plt
import numba as nb
import numpy as np
# TODO for Dict optimization.
# from numba import types
# from numba.typed import Dict
@nb.njit(nogil=True)
def initialstate(N):
'''
Generates a random spin configuration for initial condition
'''
state = np.empty((N,N),dtype=np.int8)
for i in range(N):
for j in range(N):
state[i,j] = 2*np.random.randint(2)-1
return state
@nb.njit(nogil=True)
def mcmove(lattice, beta, N):
'''
Monte Carlo move using Metropolis algorithm
'''
# # TODO* Dict optimization
# dict_param = Dict.empty(
# key_type=types.int64,
# value_type=types.float64,
# )
# dict_param = {cost : np.exp(-cost*beta) for cost in [-8, -4, 0, 4, 8] }
for _ in range(N):
for __ in range(N):
a = np.random.randint(0, N)
b = np.random.randint(0, N)
s = lattice[a, b]
dE = lattice[(a+1)%N,b] + lattice[a,(b+1)%N] + lattice[(a-1)%N,b] + lattice[a,(b-1)%N]
cost = 2*s*dE
if cost < 0:
s *= -1
#TODO* elif np.random.rand() < dict_param[cost]:
elif np.random.rand() < np.exp(-cost*beta):
s *= -1
lattice[a, b] = s
return lattice
@nb.njit(nogil=True)
def calcEnergy(lattice, N):
'''
Energy of a given configuration
'''
energy = 0
for i in range(len(lattice)):
for j in range(len(lattice)):
S = lattice[i,j]
nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N]
energy += -nb*S
return energy/2
@nb.njit(nogil=True)
def calcMag(lattice):
'''
Magnetization of a given configuration
'''
mag = np.sum(lattice, dtype=np.int32)
return mag
@nb.njit(nogil=True)
def ISING_model(nT, N, burnin, mcSteps):
"""
nT : Number of temperature points.
N : Size of the lattice, N x N.
burnin : Number of MC sweeps for equilibration (Burn-in).
mcSteps : Number of MC sweeps for calculation.
"""
T = np.linspace(1.2, 3.8, nT);
E,M,C,X = np.zeros(nT), np.zeros(nT), np.zeros(nT), np.zeros(nT)
n1, n2 = 1.0/(mcSteps*N*N), 1.0/(mcSteps*mcSteps*N*N)
for temperature in range(nT):
lattice = initialstate(N) # initialise
E1 = M1 = E2 = M2 = 0
iT = 1/T[temperature]
iT2= iT*iT
for _ in range(burnin): # equilibrate
mcmove(lattice, iT, N) # Monte Carlo moves
for _ in range(mcSteps):
mcmove(lattice, iT, N)
Ene = calcEnergy(lattice, N) # calculate the Energy
Mag = calcMag(lattice,) # calculate the Magnetisation
E1 += Ene
M1 += Mag
M2 += Mag*Mag
E2 += Ene*Ene
E[temperature] = n1*E1
M[temperature] = n1*M1
C[temperature] = (n1*E2 - n2*E1*E1)*iT2
X[temperature] = (n1*M2 - n2*M1*M1)*iT
return T,E,M,C,X
def main():
N = 32
start_time = timer()
T,E,M,C,X = ISING_model(nT = 64, N = N, burnin = 8 * 10**4, mcSteps = 8 * 10**4)
end_time = timer()
print("Elapsed time: %g seconds" % (end_time - start_time))
f = plt.figure(figsize=(18, 10)); #
# figure title
f.suptitle(f"Ising Model: 2D Lattice\nSize: {N}x{N}", fontsize=20)
_ = f.add_subplot(2, 2, 1 )
plt.plot(T, E, '-o', color='Blue')
plt.xlabel("Temperature (T)", fontsize=20)
plt.ylabel("Energy ", fontsize=20)
plt.axis('tight')
_ = f.add_subplot(2, 2, 2 )
plt.plot(T, abs(M), '-o', color='Red')
plt.xlabel("Temperature (T)", fontsize=20)
plt.ylabel("Magnetization ", fontsize=20)
plt.axis('tight')
_ = f.add_subplot(2, 2, 3 )
plt.plot(T, C, '-o', color='Green')
plt.xlabel("Temperature (T)", fontsize=20)
plt.ylabel("Specific Heat ", fontsize=20)
plt.axis('tight')
_ = f.add_subplot(2, 2, 4 )
plt.plot(T, X, '-o', color='Black')
plt.xlabel("Temperature (T)", fontsize=20)
plt.ylabel("Susceptibility", fontsize=20)
plt.axis('tight')
plt.show()
if __name__ == '__main__':
main()
Which of course, works:
I have two main questions:
Is there anything left to optimize? I know the ISING model is hard to simulate, but looking at the following table, it seems like I'm missing something...
lattice size : 32x32
burnin = 8 * 10**4
mcSteps = 8 * 10**4
Simulation time = 365.98 seconds
lattice size : 64x64
burnin = 10**5
mcSteps = 10**5
Simulation time = 1869.58 seconds
I tried implementing another optimization based on not calculating the exponential over and over again, using a dictionary, yet in my tests it seems like it's slower. What am I doing wrong?
A:
The computation of the exponential is not really an issue. The main issue is that generating random numbers is expensive and a huge number of random values are generated. Another issue is that the current computation is intrinsically sequential.
Indeed, for N=32, mcmove tends to generate about 3000 random values, and this function is called 2 * 80_000 times per iteration. This means 2 * 80_000 * 3000 = 480_000_000 random numbers are generated per iteration. Assuming generating a random number takes about 5 nanoseconds (ie. only 20 cycles on a 4 GHz CPU), each iteration will take about 2.5 seconds just to generate all the random numbers. On my 4.5 GHz i5-9600KF CPU, each iteration takes about 2.5-3.0 seconds.
The first thing to do is to try to generate random numbers using a faster method. The bad news is that this is hard to do in Numba and, more generally, in any Python-based code. Micro-optimizations using a lower-level language like C or C++ can significantly help to speed up this computation; such low-level micro-optimizations are not possible in high-level languages/tools like Python, including Numba. Still, one can implement a random-number generator (RNG) specifically designed to produce the random values you need. xoshiro256** can be used to generate random numbers quickly, though it may not be as random as what Numpy/Numba can produce (there is no free lunch). The idea is to generate 64-bit integers and extract ranges of bits so as to produce two 16-bit integers and a 32-bit floating point value. This RNG should be able to generate 3 values in only about 10 cycles on a modern CPU!
Once this optimization has been applied, the computation of the exponential becomes the new bottleneck. It can be improved using a lookup table (LUT) like you did. However, using a dictionary is slow; you can use a basic array for that instead, which is much faster. Note that the index needs to be positive and small, so the minimum cost needs to be added as an offset.
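As a rough sketch of that idea (beta here is just an illustrative value), the possible costs in your code are -8, -4, 0, 4 and 8, so adding 8 as an offset maps them onto non-negative array indices:
import numpy as np

beta = 0.5                          # illustrative inverse temperature
lut = np.zeros(17)
for cost in (-8, -4, 0, 4, 8):
    lut[cost + 8] = np.exp(-cost * beta)
# then read lut[cost + 8] instead of doing a dictionary lookup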
Once the previous optimization has been implemented, the new bottleneck is the conditionals if cost < 0 and elif c < .... The conditionals are slow because they are unpredictable (due to the result being random). Indeed, modern CPUs try to predict the outcomes of conditionals so as to avoid expensive stalls in the CPU pipeline. This is a complex topic; if you want to know more about this, then please read this great post. In practice, such a problem can be avoided using a branchless computation. This means you need to use binary operators and integer tricks so that the sign of s changes according to the value of the condition. For example: s *= 1 - ((cost < 0) | (c < lut[cost])) * 2.
Note that a modulus is generally expensive unless the compiler knows the value at compile time. It is even cheaper when the value is a power of two, because the compiler can use bit tricks to compute the modulus (more specifically, a logical AND with a pre-computed constant). For calcEnergy, a solution is to compute the border separately so as to completely avoid the modulus. Furthermore, loops can be faster when the compiler knows the number of iterations at compile time (it can unroll the loops and better vectorize them). Moreover, when N is not a power of two, the RNG can be significantly slower and more complex to implement without any bias, so I assume N is a power of two.
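For illustration only, the power-of-two trick the compiler applies is just a bitwise AND:
N = 32                              # must be a power of two
x = 1234
assert x % N == x & (N - 1)         # both evaluate to 18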
Here is the final code:
# [...] Same as in the initial code
@nb.njit(inline="always")
def rol64(x, k):
return (x << k) | (x >> (64 - k))
@nb.njit(inline="always")
def xoshiro256ss_init():
state = np.empty(4, dtype=np.uint64)
maxi = (np.uint64(1) << np.uint64(63)) - np.uint64(1)
for i in range(4):
state[i] = np.random.randint(0, maxi)
return state
@nb.njit(inline="always")
def xoshiro256ss(state):
result = rol64(state[1] * np.uint64(5), np.uint64(7)) * np.uint64(9)
t = state[1] << np.uint64(17)
state[2] ^= state[0]
state[3] ^= state[1]
state[1] ^= state[2]
state[0] ^= state[3]
state[2] ^= t
state[3] = rol64(state[3], np.uint64(45))
return result
@nb.njit(inline="always")
def xoshiro_gen_values(N, state):
'''
Produce 2 integers between 0 and N and a simple-precision floating-point number.
N must be a power of two less than 65536. Otherwise results will be biased (ie. not random).
N should be known at compile time so for this to be fast
'''
rand_bits = xoshiro256ss(state)
a = (rand_bits >> np.uint64(32)) % N
b = (rand_bits >> np.uint64(48)) % N
c = np.uint32(rand_bits) * np.float32(2.3283064370807974e-10)
return (a, b, c)
@nb.njit(nogil=True)
def mcmove_generic(lattice, beta, N):
'''
Monte Carlo move using Metropolis algorithm.
N must be a small power of two and known at compile time
'''
state = xoshiro256ss_init()
lut = np.full(16, np.nan)
for cost in (0, 4, 8, 12, 16):
lut[cost] = np.exp(-cost*beta)
for _ in range(N):
for __ in range(N):
a, b, c = xoshiro_gen_values(N, state)
s = lattice[a, b]
dE = lattice[(a+1)%N,b] + lattice[a,(b+1)%N] + lattice[(a-1)%N,b] + lattice[a,(b-1)%N]
cost = 2*s*dE
# Branchless computation of s
tmp = (cost < 0) | (c < lut[cost])
s *= 1 - tmp * 2
lattice[a, b] = s
return lattice
@nb.njit(nogil=True)
def mcmove(lattice, beta, N):
assert N in [16, 32, 64, 128]
if N == 16: return mcmove_generic(lattice, beta, 16)
elif N == 32: return mcmove_generic(lattice, beta, 32)
elif N == 64: return mcmove_generic(lattice, beta, 64)
elif N == 128: return mcmove_generic(lattice, beta, 128)
else: raise Exception('Not implemented')
@nb.njit(nogil=True)
def calcEnergy(lattice, N):
'''
Energy of a given configuration
'''
energy = 0
# Center
for i in range(1, len(lattice)-1):
for j in range(1, len(lattice)-1):
S = lattice[i,j]
nb = lattice[i+1, j] + lattice[i,j+1] + lattice[i-1, j] + lattice[i,j-1]
energy -= nb*S
# Border
for i in (0, len(lattice)-1):
for j in range(1, len(lattice)-1):
S = lattice[i,j]
nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N]
energy -= nb*S
for i in range(1, len(lattice)-1):
for j in (0, len(lattice)-1):
S = lattice[i,j]
nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N]
energy -= nb*S
return energy/2
@nb.njit(nogil=True)
def calcMag(lattice):
'''
Magnetization of a given configuration
'''
mag = np.sum(lattice, dtype=np.int32)
return mag
# [...] Same as in the initial code
I hope there is no error in the code. It is hard to check results with a different RNG.
The resulting code is significantly faster on my machine: it computes 4 iterations in 5.3 seconds with N=32, as opposed to 24.1 seconds. The computation is thus 4.5 times faster!
It is very hard to optimize the code further using Numba in Python. The computation cannot be efficiently parallelized due to the long dependency chain in mcmove.
A:
Based on Mr. Richard's excellent answer, I found another optimization. In the ISING_model function, the code can be parallelized because we are doing the same operations independently for every temperature. To achieve this, I simply used parallel=True in the ISING_model nb.njit decorator, and used nb.prange for the temperature loop inside the function, i.e., for temperature in nb.prange(nT).
The resulting code is even faster... On my machine, with the setting ISING_model(nT = 64, N = N, burnin = 8 * 10**4, mcSteps = 8 * 10**4) and N=32, it computes in 93.1621 seconds without parallelization and in 29.9872 seconds with parallelization. That's another 3x speedup, which is really cool.
I put the final code here for everyone to use.
from timeit import default_timer as timer
import matplotlib.pyplot as plt
import numba as nb
import numpy as np
@nb.njit(nogil=True)
def initialstate(N):
'''
Generates a random spin configuration for initial condition in compliance with the Numba JIT compiler.
'''
state = np.empty((N,N),dtype=np.int8)
for i in range(N):
for j in range(N):
state[i,j] = 2*np.random.randint(2)-1
return state
@nb.njit(inline="always")
def rol64(x, k):
return (x << k) | (x >> (64 - k))
@nb.njit(inline="always")
def xoshiro256ss_init():
state = np.empty(4, dtype=np.uint64)
maxi = (np.uint64(1) << np.uint64(63)) - np.uint64(1)
for i in range(4):
state[i] = np.random.randint(0, maxi)
return state
@nb.njit(inline="always")
def xoshiro256ss(state):
result = rol64(state[1] * np.uint64(5), np.uint64(7)) * np.uint64(9)
t = state[1] << np.uint64(17)
state[2] ^= state[0]
state[3] ^= state[1]
state[1] ^= state[2]
state[0] ^= state[3]
state[2] ^= t
state[3] = rol64(state[3], np.uint64(45))
return result
@nb.njit(inline="always")
def xoshiro_gen_values(N, state):
'''
Produce 2 integers between 0 and N and a simple-precision floating-point number.
N must be a power of two less than 65536. Otherwise results will be biased (ie. not random).
N should be known at compile time so for this to be fast
'''
rand_bits = xoshiro256ss(state)
a = (rand_bits >> np.uint64(32)) % N
b = (rand_bits >> np.uint64(48)) % N
c = np.uint32(rand_bits) * np.float32(2.3283064370807974e-10)
return (a, b, c)
@nb.njit(nogil=True)
def mcmove_generic(lattice, beta, N):
'''
Monte Carlo move using Metropolis algorithm.
N must be a small power of two and known at compile time
'''
state = xoshiro256ss_init()
lut = np.full(16, np.nan)
for cost in (0, 4, 8, 12, 16):
lut[cost] = np.exp(-cost*beta)
for _ in range(N):
for __ in range(N):
a, b, c = xoshiro_gen_values(N, state)
s = lattice[a, b]
dE = lattice[(a+1)%N,b] + lattice[a,(b+1)%N] + lattice[(a-1)%N,b] + lattice[a,(b-1)%N]
cost = 2*s*dE
# Branchless computation of s
tmp = (cost < 0) | (c < lut[cost])
s *= 1 - tmp * 2
lattice[a, b] = s
return lattice
@nb.njit(nogil=True)
def mcmove(lattice, beta, N):
assert N in [16, 32, 64, 128]
if N == 16: return mcmove_generic(lattice, beta, 16)
elif N == 32: return mcmove_generic(lattice, beta, 32)
elif N == 64: return mcmove_generic(lattice, beta, 64)
elif N == 128: return mcmove_generic(lattice, beta, 128)
else: raise Exception('Not implemented')
@nb.njit(nogil=True)
def calcEnergy(lattice, N):
'''
Energy of a given configuration
'''
energy = 0
# Center
for i in range(1, len(lattice)-1):
for j in range(1, len(lattice)-1):
S = lattice[i,j]
nb = lattice[i+1, j] + lattice[i,j+1] + lattice[i-1, j] + lattice[i,j-1]
energy -= nb*S
# Border
for i in (0, len(lattice)-1):
for j in range(1, len(lattice)-1):
S = lattice[i,j]
nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N]
energy -= nb*S
for i in range(1, len(lattice)-1):
for j in (0, len(lattice)-1):
S = lattice[i,j]
nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N]
energy -= nb*S
return energy/2
@nb.njit(nogil=True)
def calcMag(lattice):
'''
Magnetization of a given configuration
'''
mag = np.sum(lattice, dtype=np.int32)
return mag
@nb.njit(nogil=True, parallel=True)
def ISING_model(nT, N, burnin, mcSteps):
"""
nT : Number of temperature points.
N : Size of the lattice, N x N.
burnin : Number of MC sweeps for equilibration (Burn-in).
mcSteps : Number of MC sweeps for calculation.
"""
T = np.linspace(1.2, 3.8, nT)
E,M,C,X = np.empty(nT, dtype= np.float32), np.empty(nT, dtype= np.float32), np.empty(nT, dtype= np.float32), np.empty(nT, dtype= np.float32)
n1, n2 = 1/(mcSteps*N*N), 1/(mcSteps*mcSteps*N*N)
for temperature in nb.prange(nT):
lattice = initialstate(N) # initialise
E1 = M1 = E2 = M2 = 0
iT = 1/T[temperature]
iT2= iT*iT
for _ in range(burnin): # equilibrate
mcmove(lattice, iT, N) # Monte Carlo moves
for _ in range(mcSteps):
mcmove(lattice, iT, N)
Ene = calcEnergy(lattice, N) # calculate the Energy
Mag = calcMag(lattice) # calculate the Magnetisation
E1 += Ene
M1 += Mag
M2 += Mag*Mag
E2 += Ene*Ene
E[temperature] = n1*E1
M[temperature] = n1*M1
C[temperature] = (n1*E2 - n2*E1*E1)*iT2
X[temperature] = (n1*M2 - n2*M1*M1)*iT
return T,E,M,C,X
def main():
N = 32
start_time = timer()
T,E,M,C,X = ISING_model(nT = 64, N = N, burnin = 8 * 10**4, mcSteps = 8 * 10**4)
end_time = timer()
print("Elapsed time: %g seconds" % (end_time - start_time))
f = plt.figure(figsize=(18, 10)); #
# figure title
f.suptitle(f"Ising Model: 2D Lattice\nSize: {N}x{N}", fontsize=20)
_ = f.add_subplot(2, 2, 1 )
plt.plot(T, E, '-o', color='Blue')
plt.xlabel("Temperature (T)", fontsize=20)
plt.ylabel("Energy ", fontsize=20)
plt.axis('tight')
_ = f.add_subplot(2, 2, 2 )
plt.plot(T, abs(M), '-o', color='Red')
plt.xlabel("Temperature (T)", fontsize=20)
plt.ylabel("Magnetization ", fontsize=20)
plt.axis('tight')
_ = f.add_subplot(2, 2, 3 )
plt.plot(T, C, '-o', color='Green')
plt.xlabel("Temperature (T)", fontsize=20)
plt.ylabel("Specific Heat ", fontsize=20)
plt.axis('tight')
_ = f.add_subplot(2, 2, 4 )
plt.plot(T, X, '-o', color='Black')
plt.xlabel("Temperature (T)", fontsize=20)
plt.ylabel("Susceptibility", fontsize=20)
plt.axis('tight')
plt.show()
if __name__ == '__main__':
main()
| Further optimizing the ISING model | I've implemented the 2D ISING model in Python, using NumPy and Numba's JIT:
from timeit import default_timer as timer
import matplotlib.pyplot as plt
import numba as nb
import numpy as np
# TODO for Dict optimization.
# from numba import types
# from numba.typed import Dict
@nb.njit(nogil=True)
def initialstate(N):
'''
Generates a random spin configuration for initial condition
'''
state = np.empty((N,N),dtype=np.int8)
for i in range(N):
for j in range(N):
state[i,j] = 2*np.random.randint(2)-1
return state
@nb.njit(nogil=True)
def mcmove(lattice, beta, N):
'''
Monte Carlo move using Metropolis algorithm
'''
# # TODO* Dict optimization
# dict_param = Dict.empty(
# key_type=types.int64,
# value_type=types.float64,
# )
# dict_param = {cost : np.exp(-cost*beta) for cost in [-8, -4, 0, 4, 8] }
for _ in range(N):
for __ in range(N):
a = np.random.randint(0, N)
b = np.random.randint(0, N)
s = lattice[a, b]
dE = lattice[(a+1)%N,b] + lattice[a,(b+1)%N] + lattice[(a-1)%N,b] + lattice[a,(b-1)%N]
cost = 2*s*dE
if cost < 0:
s *= -1
#TODO* elif np.random.rand() < dict_param[cost]:
elif np.random.rand() < np.exp(-cost*beta):
s *= -1
lattice[a, b] = s
return lattice
@nb.njit(nogil=True)
def calcEnergy(lattice, N):
'''
Energy of a given configuration
'''
energy = 0
for i in range(len(lattice)):
for j in range(len(lattice)):
S = lattice[i,j]
nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N]
energy += -nb*S
return energy/2
@nb.njit(nogil=True)
def calcMag(lattice):
'''
Magnetization of a given configuration
'''
mag = np.sum(lattice, dtype=np.int32)
return mag
@nb.njit(nogil=True)
def ISING_model(nT, N, burnin, mcSteps):
"""
nT : Number of temperature points.
N : Size of the lattice, N x N.
burnin : Number of MC sweeps for equilibration (Burn-in).
mcSteps : Number of MC sweeps for calculation.
"""
T = np.linspace(1.2, 3.8, nT);
E,M,C,X = np.zeros(nT), np.zeros(nT), np.zeros(nT), np.zeros(nT)
n1, n2 = 1.0/(mcSteps*N*N), 1.0/(mcSteps*mcSteps*N*N)
for temperature in range(nT):
lattice = initialstate(N) # initialise
E1 = M1 = E2 = M2 = 0
iT = 1/T[temperature]
iT2= iT*iT
for _ in range(burnin): # equilibrate
mcmove(lattice, iT, N) # Monte Carlo moves
for _ in range(mcSteps):
mcmove(lattice, iT, N)
Ene = calcEnergy(lattice, N) # calculate the Energy
Mag = calcMag(lattice,) # calculate the Magnetisation
E1 += Ene
M1 += Mag
M2 += Mag*Mag
E2 += Ene*Ene
E[temperature] = n1*E1
M[temperature] = n1*M1
C[temperature] = (n1*E2 - n2*E1*E1)*iT2
X[temperature] = (n1*M2 - n2*M1*M1)*iT
return T,E,M,C,X
def main():
N = 32
start_time = timer()
T,E,M,C,X = ISING_model(nT = 64, N = N, burnin = 8 * 10**4, mcSteps = 8 * 10**4)
end_time = timer()
print("Elapsed time: %g seconds" % (end_time - start_time))
f = plt.figure(figsize=(18, 10)); #
# figure title
f.suptitle(f"Ising Model: 2D Lattice\nSize: {N}x{N}", fontsize=20)
_ = f.add_subplot(2, 2, 1 )
plt.plot(T, E, '-o', color='Blue')
plt.xlabel("Temperature (T)", fontsize=20)
plt.ylabel("Energy ", fontsize=20)
plt.axis('tight')
_ = f.add_subplot(2, 2, 2 )
plt.plot(T, abs(M), '-o', color='Red')
plt.xlabel("Temperature (T)", fontsize=20)
plt.ylabel("Magnetization ", fontsize=20)
plt.axis('tight')
_ = f.add_subplot(2, 2, 3 )
plt.plot(T, C, '-o', color='Green')
plt.xlabel("Temperature (T)", fontsize=20)
plt.ylabel("Specific Heat ", fontsize=20)
plt.axis('tight')
_ = f.add_subplot(2, 2, 4 )
plt.plot(T, X, '-o', color='Black')
plt.xlabel("Temperature (T)", fontsize=20)
plt.ylabel("Susceptibility", fontsize=20)
plt.axis('tight')
plt.show()
if __name__ == '__main__':
main()
Which of course, works:
I have two main questions:
Is there anything left to optimize? I knew ISING model is hard to simulate, but looking at the following table, it seems like I'm missing something...
lattice size : 32x32
burnin = 8 * 10**4
mcSteps = 8 * 10**4
Simulation time = 365.98 seconds
lattice size : 64x64
burnin = 10**5
mcSteps = 10**5
Simulation time = 1869.58 seconds
I tried implementing another optimization based on not calculating the exponential over and over again using a dictionary, yet on my tests, it seems like its slower. What am I doing wrong?
| [
"The computation of the exponential is not really an issue. The main issue is that generating random numbers is expensive and a huge number of random values are generated. Another issue is that the current computation is intrinsically sequential.\nIndeed, for N=32, mcmove tends to generate about 3000 random values, and this function is called 2 * 80_000 times per iteration. This means, 2 * 80_000 * 3000 = 480_000_000 random number generated per iteration. Assuming generating a random number takes about 5 nanoseconds (ie. only 20 cycles on a 4 GHz CPU), then each iteration will take about 2.5 seconds only to generate all the random numbers. On my 4.5 GHz i5-9600KF CPU, each iteration takes about 2.5-3.0 seconds.\nThe first thing to do is to try to generate random number using a faster method. The bad news is that this is hard to do in Numba and more generally any-Python-based code. Micro-optimizations using a lower-level language like C or C++ can significantly help to speed up this computation. Such low-level micro-optimizations are not possible in high-level languages/tools like Python, including Numba. Still, one can implement a random-number generator (RNG) specifically designed so to produce the random values you need. xoshiro256** can be used to generate random numbers quickly though it may not be as random as what Numpy/Numba can produce (there is no free lunch). The idea is to generate 64-bit integers and extract range of bits so to produce 2 16-bit integers and a 32-bit floating point value. This RNG should be able to generate 3 values in only about 10 cycles on a modern CPU!\nOnce this optimization has been applied, the computation of the exponential becomes the new bottleneck. It can be improved using a lookup table (LUT) like you did. However, using a dictionary is slow. You can use a basic array for that. This is much faster. Note the index need to be positive and small. Thus, the minimum cost needs to be added.\nOnce the previous optimization has been implemented, the new bottleneck is the conditionals if cost < 0 and elif c < .... The conditionals are slow because they are unpredictable (due to the result being random). Indeed, modern CPUs try to predict the outcomes of conditionals so to avoid expensive stalls in the CPU pipeline. This is a complex topic. If you want to know more about this, then please read this great post. In practice, such a problem can be avoided using a branchless computation. This means you need to use binary operators and integer sticks so for the sign of s to change regarding the value of the condition. For example: s *= 1 - ((cost < 0) | (c < lut[cost])) * 2.\nNote that modulus are generally expensive unless the compiler know the value at compile time. They are even faster when the value is a power of two because the compiler can use bit tricks so to compute the modulus (more specifically a logical and by a pre-compiled constant). For calcEnergy, a solution is to compute the border separately so to completely avoid the modulus. Furthermore, loops can be faster when the compiler know the number of iterations at compile time (it can unroll the loops and better vectorize them). Moreover, when N is not a power of two, the RNG can be significantly slower and more complex to implement without any bias, so I assume N is a power of two.\nHere is the final code:\n# [...] 
Same as in the initial code\n\n@nb.njit(inline=\"always\")\ndef rol64(x, k):\n return (x << k) | (x >> (64 - k))\n\n@nb.njit(inline=\"always\")\ndef xoshiro256ss_init():\n state = np.empty(4, dtype=np.uint64)\n maxi = (np.uint64(1) << np.uint64(63)) - np.uint64(1)\n for i in range(4):\n state[i] = np.random.randint(0, maxi)\n return state\n\n@nb.njit(inline=\"always\")\ndef xoshiro256ss(state):\n result = rol64(state[1] * np.uint64(5), np.uint64(7)) * np.uint64(9)\n t = state[1] << np.uint64(17)\n state[2] ^= state[0]\n state[3] ^= state[1]\n state[1] ^= state[2]\n state[0] ^= state[3]\n state[2] ^= t\n state[3] = rol64(state[3], np.uint64(45))\n return result\n\n@nb.njit(inline=\"always\")\ndef xoshiro_gen_values(N, state):\n '''\n Produce 2 integers between 0 and N and a simple-precision floating-point number.\n N must be a power of two less than 65536. Otherwise results will be biased (ie. not random).\n N should be known at compile time so for this to be fast\n '''\n rand_bits = xoshiro256ss(state)\n a = (rand_bits >> np.uint64(32)) % N\n b = (rand_bits >> np.uint64(48)) % N\n c = np.uint32(rand_bits) * np.float32(2.3283064370807974e-10)\n return (a, b, c)\n\n@nb.njit(nogil=True)\ndef mcmove_generic(lattice, beta, N):\n '''\n Monte Carlo move using Metropolis algorithm.\n N must be a small power of two and known at compile time\n '''\n\n state = xoshiro256ss_init()\n\n lut = np.full(16, np.nan)\n for cost in (0, 4, 8, 12, 16):\n lut[cost] = np.exp(-cost*beta)\n\n for _ in range(N):\n for __ in range(N):\n a, b, c = xoshiro_gen_values(N, state)\n s = lattice[a, b]\n dE = lattice[(a+1)%N,b] + lattice[a,(b+1)%N] + lattice[(a-1)%N,b] + lattice[a,(b-1)%N]\n cost = 2*s*dE\n\n # Branchless computation of s\n tmp = (cost < 0) | (c < lut[cost])\n s *= 1 - tmp * 2\n\n lattice[a, b] = s\n\n return lattice\n\n@nb.njit(nogil=True)\ndef mcmove(lattice, beta, N):\n assert N in [16, 32, 64, 128]\n if N == 16: return mcmove_generic(lattice, beta, 16)\n elif N == 32: return mcmove_generic(lattice, beta, 32)\n elif N == 64: return mcmove_generic(lattice, beta, 64)\n elif N == 128: return mcmove_generic(lattice, beta, 128)\n else: raise Exception('Not implemented')\n\n@nb.njit(nogil=True)\ndef calcEnergy(lattice, N):\n '''\n Energy of a given configuration\n '''\n energy = 0 \n # Center\n for i in range(1, len(lattice)-1):\n for j in range(1, len(lattice)-1):\n S = lattice[i,j]\n nb = lattice[i+1, j] + lattice[i,j+1] + lattice[i-1, j] + lattice[i,j-1]\n energy -= nb*S\n # Border\n for i in (0, len(lattice)-1):\n for j in range(1, len(lattice)-1):\n S = lattice[i,j]\n nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N]\n energy -= nb*S\n for i in range(1, len(lattice)-1):\n for j in (0, len(lattice)-1):\n S = lattice[i,j]\n nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N]\n energy -= nb*S\n return energy/2\n\n@nb.njit(nogil=True)\ndef calcMag(lattice):\n '''\n Magnetization of a given configuration\n '''\n mag = np.sum(lattice, dtype=np.int32)\n return mag\n\n# [...] Same as in the initial code\n\nI hope there is no error in the code. It is hard to check results with a different RNG.\nThe resulting code is significantly faster on my machine: it compute 4 iterations in 5.3 seconds with N=32 as opposed to 24.1 seconds. The computation is thus 4.5 times faster!\nIt is very hard to optimize the code further using Numba in Python. The computation cannot be efficiently parallelized due to the long dependency chain in mcmove.\n",
"Based on the Mr. Richard's excellent answer, I found another optimization. In the ISING_model function, the code can be parallelized because we are doing the same operations independently for every temperature. To achieve this, I simply used parallel = True in the ISING_model nb.jit decorator, and used nb.prange for the temperature loop inside the function, i.e, for temperature in nb.prange(nT).\nThe resulting code is even faster... On my machine, with the setting of ISING_model(nT = 64, N = N, burnin = 8 * 10**4, mcSteps = 8 * 10**4) with N=32, without parallelization, it computes in 93.1621 seconds and with parallelization, it computes in 29.9872 seconds. Another 3 times faster optimization! Which is really cool.\nI put the final code here for everyone to use.\n\nfrom timeit import default_timer as timer\nimport matplotlib.pyplot as plt\nimport numba as nb\nimport numpy as np\n\n@nb.njit(nogil=True)\ndef initialstate(N): \n ''' \n Generates a random spin configuration for initial condition in compliance with the Numba JIT compiler.\n '''\n state = np.empty((N,N),dtype=np.int8)\n for i in range(N):\n for j in range(N):\n state[i,j] = 2*np.random.randint(2)-1\n return state\n\n@nb.njit(inline=\"always\")\ndef rol64(x, k):\n return (x << k) | (x >> (64 - k))\n\n@nb.njit(inline=\"always\")\ndef xoshiro256ss_init():\n state = np.empty(4, dtype=np.uint64)\n maxi = (np.uint64(1) << np.uint64(63)) - np.uint64(1)\n for i in range(4):\n state[i] = np.random.randint(0, maxi)\n return state\n\n@nb.njit(inline=\"always\")\ndef xoshiro256ss(state):\n result = rol64(state[1] * np.uint64(5), np.uint64(7)) * np.uint64(9)\n t = state[1] << np.uint64(17)\n state[2] ^= state[0]\n state[3] ^= state[1]\n state[1] ^= state[2]\n state[0] ^= state[3]\n state[2] ^= t\n state[3] = rol64(state[3], np.uint64(45))\n return result\n\n@nb.njit(inline=\"always\")\ndef xoshiro_gen_values(N, state):\n '''\n Produce 2 integers between 0 and N and a simple-precision floating-point number.\n N must be a power of two less than 65536. Otherwise results will be biased (ie. 
not random).\n N should be known at compile time so for this to be fast\n '''\n rand_bits = xoshiro256ss(state)\n a = (rand_bits >> np.uint64(32)) % N\n b = (rand_bits >> np.uint64(48)) % N\n c = np.uint32(rand_bits) * np.float32(2.3283064370807974e-10)\n return (a, b, c)\n\n@nb.njit(nogil=True)\ndef mcmove_generic(lattice, beta, N):\n '''\n Monte Carlo move using Metropolis algorithm.\n N must be a small power of two and known at compile time\n '''\n\n state = xoshiro256ss_init()\n\n lut = np.full(16, np.nan)\n for cost in (0, 4, 8, 12, 16):\n lut[cost] = np.exp(-cost*beta)\n\n for _ in range(N):\n for __ in range(N):\n a, b, c = xoshiro_gen_values(N, state)\n s = lattice[a, b]\n dE = lattice[(a+1)%N,b] + lattice[a,(b+1)%N] + lattice[(a-1)%N,b] + lattice[a,(b-1)%N]\n cost = 2*s*dE\n\n # Branchless computation of s\n tmp = (cost < 0) | (c < lut[cost])\n s *= 1 - tmp * 2\n\n lattice[a, b] = s\n\n return lattice\n\n@nb.njit(nogil=True)\ndef mcmove(lattice, beta, N):\n assert N in [16, 32, 64, 128]\n if N == 16: return mcmove_generic(lattice, beta, 16)\n elif N == 32: return mcmove_generic(lattice, beta, 32)\n elif N == 64: return mcmove_generic(lattice, beta, 64)\n elif N == 128: return mcmove_generic(lattice, beta, 128)\n else: raise Exception('Not implemented')\n\n@nb.njit(nogil=True)\ndef calcEnergy(lattice, N):\n '''\n Energy of a given configuration\n '''\n energy = 0 \n # Center\n for i in range(1, len(lattice)-1):\n for j in range(1, len(lattice)-1):\n S = lattice[i,j]\n nb = lattice[i+1, j] + lattice[i,j+1] + lattice[i-1, j] + lattice[i,j-1]\n energy -= nb*S\n # Border\n for i in (0, len(lattice)-1):\n for j in range(1, len(lattice)-1):\n S = lattice[i,j]\n nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N]\n energy -= nb*S\n for i in range(1, len(lattice)-1):\n for j in (0, len(lattice)-1):\n S = lattice[i,j]\n nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N]\n energy -= nb*S\n return energy/2\n\n@nb.njit(nogil=True)\ndef calcMag(lattice):\n '''\n Magnetization of a given configuration\n '''\n mag = np.sum(lattice, dtype=np.int32)\n return mag\n\n@nb.njit(nogil=True, parallel=True)\ndef ISING_model(nT, N, burnin, mcSteps):\n\n \"\"\" \n nT : Number of temperature points.\n N : Size of the lattice, N x N.\n burnin : Number of MC sweeps for equilibration (Burn-in).\n mcSteps : Number of MC sweeps for calculation.\n\n \"\"\"\n\n\n T = np.linspace(1.2, 3.8, nT)\n E,M,C,X = np.empty(nT, dtype= np.float32), np.empty(nT, dtype= np.float32), np.empty(nT, dtype= np.float32), np.empty(nT, dtype= np.float32)\n n1, n2 = 1/(mcSteps*N*N), 1/(mcSteps*mcSteps*N*N) \n\n\n for temperature in nb.prange(nT):\n lattice = initialstate(N) # initialise\n\n E1 = M1 = E2 = M2 = 0\n iT = 1/T[temperature]\n iT2= iT*iT\n \n for _ in range(burnin): # equilibrate\n mcmove(lattice, iT, N) # Monte Carlo moves\n\n for _ in range(mcSteps):\n mcmove(lattice, iT, N) \n Ene = calcEnergy(lattice, N) # calculate the Energy\n Mag = calcMag(lattice) # calculate the Magnetisation\n E1 += Ene\n M1 += Mag\n M2 += Mag*Mag \n E2 += Ene*Ene\n\n E[temperature] = n1*E1\n M[temperature] = n1*M1\n C[temperature] = (n1*E2 - n2*E1*E1)*iT2\n X[temperature] = (n1*M2 - n2*M1*M1)*iT\n\n return T,E,M,C,X\n\n\ndef main():\n \n N = 32\n start_time = timer()\n T,E,M,C,X = ISING_model(nT = 64, N = N, burnin = 8 * 10**4, mcSteps = 8 * 10**4)\n end_time = timer()\n\n print(\"Elapsed time: %g seconds\" % (end_time - start_time))\n\n f = plt.figure(figsize=(18, 10)); # 
\n\n # figure title\n f.suptitle(f\"Ising Model: 2D Lattice\\nSize: {N}x{N}\", fontsize=20)\n\n _ = f.add_subplot(2, 2, 1 )\n plt.plot(T, E, '-o', color='Blue') \n plt.xlabel(\"Temperature (T)\", fontsize=20)\n plt.ylabel(\"Energy \", fontsize=20)\n plt.axis('tight')\n\n\n _ = f.add_subplot(2, 2, 2 )\n plt.plot(T, abs(M), '-o', color='Red')\n plt.xlabel(\"Temperature (T)\", fontsize=20)\n plt.ylabel(\"Magnetization \", fontsize=20)\n plt.axis('tight')\n\n\n _ = f.add_subplot(2, 2, 3 )\n plt.plot(T, C, '-o', color='Green')\n plt.xlabel(\"Temperature (T)\", fontsize=20)\n plt.ylabel(\"Specific Heat \", fontsize=20)\n plt.axis('tight')\n\n\n _ = f.add_subplot(2, 2, 4 )\n plt.plot(T, X, '-o', color='Black')\n plt.xlabel(\"Temperature (T)\", fontsize=20)\n plt.ylabel(\"Susceptibility\", fontsize=20)\n plt.axis('tight')\n\n\n plt.show()\n\nif __name__ == '__main__':\n main()\n\n\n"
] | [
1,
1
] | [] | [] | [
"montecarlo",
"numba",
"numpy",
"performance",
"python"
] | stackoverflow_0074660595_montecarlo_numba_numpy_performance_python.txt |
Q:
How to get data from django orm inside an asynchronous function?
I need to retrieve data from the database inside an asynchronous function. If I retrieve only one object by executing e.g:
users = await sync_to_async(Creators.objects.first)()
everything works as it should. But if the response contains multiple objects, I get an error.
@sync_to_async
def get_creators():
return Creators.objects.all()
async def CreateBotAll():
users = await get_creators()
for user in users:
print(user)
Traceback:
Traceback (most recent call last):
  File "/home/django/django_venv/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
    response = get_response(request)
  File "/home/django/django_venv/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/home/django/django_venv/src/reseller/views.py", line 29, in test
    asyncio.run(TgAssistant.CreateBotAll())
  File "/usr/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/home/django/django_venv/src/reseller/TgAssistant.py", line 84, in CreateBotAll
    for user in users:
  File "/home/django/django_venv/lib/python3.8/site-packages/django/db/models/query.py", line 280, in __iter__
    self._fetch_all()
  File "/home/django/django_venv/lib/python3.8/site-packages/django/db/models/query.py", line 1324, in _fetch_all
    self._result_cache = list(self._iterable_class(self))
  File "/home/django/django_venv/lib/python3.8/site-packages/django/db/models/query.py", line 51, in __iter__
    results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
  File "/home/django/django_venv/lib/python3.8/site-packages/django/db/models/sql/compiler.py", line 1173, in execute_sql
    cursor = self.connection.cursor()
  File "/home/django/django_venv/lib/python3.8/site-packages/django/utils/asyncio.py", line 31, in inner
    raise SynchronousOnlyOperation(message)
django.core.exceptions.SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async.
I made it work that way:
@sync_to_async
def get_creators():
sql = Creators.objects.all()
x = [creator for creator in sql]
return x
Isn't there a more elegant solution?
A:
You may try wrapping the get_creators response in a list:
@sync_to_async
def get_creators():
return list(Creators.objects.all())
A:
Since Django 4.1 you can do the following:
async for creator in Creators.objects.all():
print(creator)
And you can replace this with filter and the like as long as the expression doesn't cause the query to be evaluated.
There are also async versions of get, delete etc prefixed with an 'a' so your
users = await sync_to_async(Creators.objects.first)()
can be replaced with:
user = await Creators.objects.afirst()
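As a small sketch of how these fit together (assuming Django 4.1+; adapt the queryset to your own models), the async iteration and the a-prefixed methods can be combined inside an async function:
async def CreateBotAll():
    # async iteration evaluates the queryset without needing sync_to_async
    creators = [creator async for creator in Creators.objects.all()]
    first = await Creators.objects.afirst()
    count = await Creators.objects.acount()
    print(first, count, len(creators))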
| How to get data from django orm inside an asynchronous function? | I need to retrieve data from the database inside an asynchronous function. If I retrieve only one object by executing e.g:
users = await sync_to_async(Creators.objects.first)()
everything works as it should. But if the response contains multiple objects, I get an error.
@sync_to_async
def get_creators():
return Creators.objects.all()
async def CreateBotAll():
users = await get_creators()
for user in users:
print(user)
Tracing:
Traceback (most recent call last):
File "/home/django/django_venv/lib/python3.8/site-
packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/home/django/django_venv/lib/python3.8/site-packages/django/core/handlers/base.py", line
181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/django/django_venv/src/reseller/views.py", line 29, in test
asyncio.run(TgAssistant.CreateBotAll())
File "/usr/lib/python3.8/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/home/django/django_venv/src/reseller/TgAssistant.py", line 84, in CreateBotAll
for user in users:
File "/home/django/django_venv/lib/python3.8/site-packages/django/db/models/query.py", line
280, in __iter__
self._fetch_all()
File "/home/django/django_venv/lib/python3.8/site-packages/django/db/models/query.py", line
1324, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "/home/django/django_venv/lib/python3.8/site-packages/django/db/models/query.py", line
51, in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
File "/home/django/django_venv/lib/python3.8/site-packages/django/db/models/sql/compiler.py",
line 1173, in execute_sql
cursor = self.connection.cursor()
File "/home/django/django_venv/lib/python3.8/site-packages/django/utils/asyncio.py", line 31,
in inner
raise SynchronousOnlyOperation(message)
django.core.exceptions.SynchronousOnlyOperation: You cannot call this from an async context -
use a thread or sync_to_async.
I made it work that way:
@sync_to_async
def get_creators():
sql = Creators.objects.all()
x = [creator for creator in sql]
return x
Isn't there a more elegant solution?
| [
"You may try wrap get_creators response into list:\n@sync_to_async\ndef get_creators():\n return list(Creators.objects.all())\n\n",
"Since Django 4.1 you can do the following:\nasync for creator in Creators.objects.all():\n print(creator)\n\nAnd you can replace this with filter and the like as long as the expression doesn't cause the query to be evaluated.\nThere are also async versions of get, delete etc prefixed with an 'a' so your\nusers = await sync_to_async(Creators.objects.first)()\n\ncan be replaced with:\nuser = await Creators.object.afirst()\n\n"
] | [
0,
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0071489479_django_python.txt |
Q:
Kernel died with rasterio.open().read() on geo tiff images
When I try to open and read a certain GeoTIFF image with GDAL and rasterio, I can open it and do things like img.meta and img.descriptions. But when I try img.read() with rasterio or img.GetRasterBand(1).ReadAsArray() with GDAL, the kernel always dies after a certain runtime. It doesn't happen with all GeoTIFF images, just some. Could anyone help me? Thanks!
Python version: 3.9
System: Mac Big Sur Version 11.3.1
Raster information:
File size: 400 MB
Band number: 3
Coordinate reference system: EPSG:26917
Metadata: {'driver': 'GTiff', 'dtype': 'uint8', 'nodata': 255.0, 'width': 580655, 'height': 444631, 'count': 3, 'crs': CRS.from_epsg(26917), 'transform': Affine(0.08000000000000004, 0.0, 607455.9245999996,
0.0, -0.080000000000001, 4859850.802499999)}
Raster description: (None, None, None)
Geotransform : | 0.08, 0.00, 607455.92|
| 0.00,-0.08, 4859850.80|
| 0.00, 0.00, 1.00|
# with rasterio
import rasterio

img = rasterio.open('certain_tiff_file.tif')
metadata = img.meta
print('Metadata: {metadata}\n'.format(metadata=metadata))
# kernel died if run the line below
full_img = img.read()
# with gdal
from osgeo import gdal  # or "import gdal" on older GDAL installs

img = gdal.Open('certain_tiff_file.tif')
img_band1 = img.GetRasterBand(1)
img_band2 = img.GetRasterBand(2)
img_band3 = img.GetRasterBand(3)
# kernel died if run the line below
array = img_band1.ReadAsArray()
A:
I had the same problem when reading large .tiff files.
Following what @Val said in the comments I checked for how much free RAM memory I had as described here:
import psutil
psutil.virtual_memory()
And indeed my issue was that I was running out of RAM. You may try del arr once you're done with an array you no longer need, to free a bit of memory. It might be worth looking into gc.collect as well.
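If the raster simply doesn't fit in RAM, another option is to avoid read() on the whole file and read it window by window instead. A minimal rasterio sketch, reusing the file path from the question (the window size is arbitrary):
import rasterio
from rasterio.windows import Window

with rasterio.open('certain_tiff_file.tif') as img:
    # read only a 1024x1024 block of band 1, starting at the top-left corner
    block = img.read(1, window=Window(0, 0, 1024, 1024))
    print(block.shape, block.dtype)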
| Kernel died with rasterio.open().read() on geo tiff images | When I tried to open and read a certain geo tiff image with gdal and rasterio, I can read and do things like img.meta and img.descriptions. But when I tried to do img.read() with rasterio or img.GetRasterBand(1).ReadAsArray() with gdal the kernel always died after a certain runtime. It's not happening to all geo tiff images but some. Could anyone help me? Thanks!
Python version: 3.9
System: Mac Big Sur Version 11.3.1
Raster information:
File size: 400 MB
Band number: 3
Coordinate reference system: EPSG:26917
Metadata: {'driver': 'GTiff', 'dtype': 'uint8', 'nodata': 255.0, 'width': 580655, 'height': 444631, 'count': 3, 'crs': CRS.from_epsg(26917), 'transform': Affine(0.08000000000000004, 0.0, 607455.9245999996,
0.0, -0.080000000000001, 4859850.802499999)}
Raster description: (None, None, None)
Geotransform : | 0.08, 0.00, 607455.92|
| 0.00,-0.08, 4859850.80|
| 0.00, 0.00, 1.00|
# with rasterio
img = rasterio.open('certain_tiff_file.tif')
metadata = img.meta
print('Metadata: {metadata}\n'.format(metadata=metadata))
# kernel died if run the line below
full_img = img.read()
# with gdal
img = gdal.Open('certain_tiff_file.tif')
img_band1 = img.GetRasterBand(1)
img_band2 = img.GetRasterBand(2)
img_band3 = img.GetRasterBand(3)
# kernel died if run the line below
array = img_band1.ReadAsArray()
| [
"I had the same problem when reading large .tiff files.\nFollowing what @Val said in the comments I checked for how much free RAM memory I had as described here:\nimport psutil\npsutil.virtual_memory()\n\nAnd indeed my issue was that I was running out of RAM. You may try to use del arr once you're done with some array and you don't need to use that anymore to clean a bit of memory. Might be worth looking into gc.collect as well.\n"
] | [
0
] | [] | [] | [
"gdal",
"gis",
"kernel",
"python",
"rasterio"
] | stackoverflow_0068667679_gdal_gis_kernel_python_rasterio.txt |
Q:
Tools for automating windows applications (preferably in Python)?
I have a legacy Windows application which performs a critical business function. It has no API or official support for automation. This program requires a human to perform a sequence of actions in order to convert files in a particular input format into a PDF, from which we can scrape content and then process the data normally.
The business literally cannot function without some calculations/reports that this software performs, but unfortunately, those calculations are poorly understood and we don't have the kind of R&D budget that would allow us to re-implement the software.
The software reads in a proprietary file format and generates a number of PDF reports in an industry-approved format, from which we can scrape the images and deal with them in a more conventional way.
It has been proposed that we wrap up the application inside some kind of API, where I might submit some input data into a queue, and somewhere deep within this, we automate the software as if a human user was driving it to perform the operations.
Unfortunately, the operations are complex and depend on a number of inputs, and also the content of the file being processed. It's not the kind of thing that we could do with a simple macro - some logic will be required to model the behavior of a trained human operator.
So are there any solutions to this? We'd like to be able to drive the software as quickly as possible, and since we have many Python developers it makes sense to implement as much as possible in Python. The outer layers of this system will also be in Python, so that could cut out the complexity. Are there any tools which already provide the bulk of this kind of behavior?
A:
You have multiple options:
1. winshell: A light wrapper around the Windows shell functionality
2. Automa: Utilty to automate repetitive and/or complex task
3. PyAutoGUI is a Python module for programmatically controlling the mouse and keyboard.
4. Sikuli automates anything you see on the screen http://www.sikuli.org/
5. Pure Python scripting. Example below:
import os
os.system('notepad.exe')

import win32api
win32api.WinExec('notepad.exe')

import subprocess
subprocess.Popen(['notepad.exe'])
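As a rough sketch of option 3 (PyAutoGUI), where the keystrokes and coordinates below are placeholders you would replace for your own application:
import pyautogui

pyautogui.hotkey('win', 'r')                     # open the Run dialog
pyautogui.typewrite('notepad\n', interval=0.05)  # type the command and press Enter
pyautogui.click(x=200, y=300)                    # click at a screen position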
A:
The easiest approach to automating an application is to send keystrokes to it. If you can drive the target application by keystrokes alone, operating it becomes manageable without needing to fight screen resolutions, large fonts and mouse positions. [1]
The harder part is recognizing the displayed state of the application. Ideally, you can read the content of the controls using Python [2], to at least detect error conditions and reset the program to a known good state. If resetting the program by conventional navigation fails, consider killing the target process and relaunch the process.
[1] How to send simulated keyboard strokes to the active window using SendKeys
[2] Problem when getting the content of a listbox with python and ctypes on win32
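A minimal sketch of the keystroke-driving idea from [1], here using pywinauto's keyboard module rather than the SendKeys package (the keys sent are purely illustrative):
from pywinauto.keyboard import send_keys

send_keys('Hello{SPACE}World{ENTER}')   # type text, then press Enter
send_keys('^s')                         # Ctrl+S, e.g. to trigger a Save dialog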
A:
Try out robotic process automation (RPA) tools, which can mimic or record human interactions with a computer and repeat them over time. They can handle more complex tasks using scripts, depending on the software: for example selecting different inputs, driving browser components, and also Windows applications.
A:
Below is some example code using pywinauto. From my experience this solves a lot of the issues we hit with other tools, especially in a CI/CD setup.
from pywinauto.application import Application

def open_app(file_path="notepad.exe"):
    # Launch the target application and return a handle to it
    return Application().start(file_path)

def select_menu(app_object, menu_item="Help->About Notepad"):
    app_object.menu_select(menu_item)

def click_item(app_object):
    app_object.click()

def type_in(app_object, data="pywinauto Works!"):
    app_object.type_keys(data, with_spaces=True)

# Example usage with Notepad:
app = open_app("notepad.exe")
select_menu(app.UntitledNotepad, "Help->About Notepad")
click_item(app.AboutNotepad.OK)
type_in(app.UntitledNotepad.Edit, "pywinauto Works!")
| Tools for automating windows applications (preferably in Python)? | I have a legacy Windows application which performs a critical business function. It has no API or official support for automation. This program requires a human to perform a sequence of actions in order to convert files in a particular input format into a PDF, from which we can scrape content and then process the data normally.
The business literally cannot function without some calculations/reports that this software performs, but unfortunately, those calculations are poorly understood and we don't have the kind of R&D budget that would allow us to re-implement the software.
The software reads in a proprietary file format and generates a number of PDF reports in an industry-approved format, from which we can scrape the images and deal with them in a more conventional way.
It has been proposed that we wrap up the application inside some kind of API, where I might submit some input data into a queue, and somewhere deep within this, we automate the software as if a human user was driving it to perform the operations.
Unfortunately, the operations are complex and depend on a number of inputs, and also the content of the file being processed. It's not the kind of thing that we could do with a simple macro - some logic will be required to model the behavior of a trained human operator.
So are there any solutions to this? We'd like to be able to drive the software as quickly as possible, and since we have many Python developers it makes sense to implement as much as possible in Python. The outer layers of this system will also be in Python, so that could cut out the complexity. Are there any tools which already provide the bulk of this kind of behavior?
| [
"You have multiple options:\n1. winshell: A light wrapper around the Windows shell functionality\n2. Automa: Utilty to automate repetitive and/or complex task \n3: PyAutoGUI is a Python module for programmatically controlling the\nmouse and keyboard.\n4. Sikuli automates anything you see on the screen http://www.sikuli.org/\n5. pure Python scripting. example below: \n\n\nimport os os.system('notepad.exe')\nimport win32api\nwin32api.WinExec('notepad.exe')\n\nimport subprocess\nsubprocess.Popen(['notepad.exe'])\n\n",
"The easiest approach to automating an application is to send keystrokes to it. If you can drive the target application by keystrokes alone, operating it becomes manageable without needing to fight screen resolutions, large fonts and mouse positions. [1]\nThe harder part is recognizing the displayed state of the application. Ideally, you can read the content of the controls using Python [2], to at least detect error conditions and reset the program to a known good state. If resetting the program by conventional navigation fails, consider killing the target process and relaunch the process.\n[1] How to send simulated keyboard strokes to the active window using SendKeys\n[2] Problem when getting the content of a listbox with python and ctypes on win32\n",
"Try out Robotic automation tools, which can mimic or record human interactions with computer and repeat over time. It can be made for handling more complex tasks using scripts depends on that software. Example selecting different inputs, browser components and also windows application. \n",
"Below is an example code using pywinauto. From my experience this solves a lot of issues, when we use any other tool, especially in the case of CI/CD.\nfrom pywinauto.application import Application\n\ndef open_app(file_path = \"notepad.exe\"):\n app = Application().start(file_path)\n return app\n\ndef select_menu(app_object = app.UntitledNotepad, menu_item = \"Help->About Notepad\"):\n app_object.menu_select(menu_item)\n\ndef click_item(app_object = app.AboutNotepad.OK):\n app_object.click()\n \ndef type_in(app_object = app.UntitledNotepad.Edit., data = \"pywinauto Works!\"):\n app_object.type_keys(data, with_spaces = True)\n\n"
] | [
3,
3,
1,
0
] | [] | [] | [
"automation",
"python",
"windows"
] | stackoverflow_0052832504_automation_python_windows.txt |
Q:
TypeError: is_legacy_optimizer is not a valid argument, kwargs should be empty for `optimizer_experimental.Optimizer`
Please help me
Code:
from tensorflow.keras.models import load_model
model_path = "model/classifier.h5"
model = load_model(model_path)
Error:
TypeError: is_legacy_optimizer is not a valid argument, kwargs should be empty for optimizer_experimental.Optimizer.
A:
It's possible that the model was trained using a different version of TensorFlow.
Try confirming that the environment where you trained the model and the one where you are loading it have the same TensorFlow version.
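For example, you can print the version in both environments and compare:
import tensorflow as tf

print(tf.__version__)   # run this in both the training and the loading environment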
| TypeError: is_legacy_optimizer is not a valid argument, kwargs should be empty for `optimizer_experimental.Optimizer` | Please help me
Code:
from tensorflow.keras.models import load_model
model_path = "model/classifier.h5"
model = load_model(model_path)
Error:
TypeError: is_legacy_optimizer is not a valid argument, kwargs should be empty for optimizer_experimental.Optimizer.
| [
"It's possible that the model was trained using a different version of tf.\nTry confirming if the environment where you trained and are loading have the same tf version.\n"
] | [
0
] | [] | [] | [
"keras",
"python",
"tensorflow"
] | stackoverflow_0073538911_keras_python_tensorflow.txt |
Q:
Checking the type of variable in Jinja2
I want to check the type of a variable in Jinja2. If the type of the variable is a dictionary, then I have to print some text in the paragraph, and if it's not a dict then I have to print some other values.
What I tried here is
{% if {{result}} is dict %}
<tr>
<td>
<p> The details are not here </p>
</td>
</tr>
{% else %}
{% for each_value in result %}
<tr>
<td>each_value.student_name</td>
</tr>
{% endfor %}
{% endif %}
The result I get is two different ways one is of dict type
I.result={'student_name':'a','student_id':1,'student_email':'my_name@gmail.com'}
the another format of result is
II.result=[{'student_name':'b','student_id':2,'student_email':'my_nameb@gmail.com','time':[{'st':1,'et':2},{'st':3,'et':4}]}]
Expected result
If I get format 'I' then the if branch should be executed.
If I get format 'II' then the else branch should be executed.
Actual result
jinja2.exceptions.TemplateSyntaxError: expected token ':', got '}'
A:
You should replace {% if {{result}} is dict %} with {% if result is mapping %}.
Reference
A:
Alternative, and possibly better solutions:
{% if result.__class__.__name__ == "dict" %}
or add isinstance to Jinja context, and then
{% if isinstance(result, dict) %}
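A minimal, self-contained sketch of the mapping test from the first answer (the data values here are illustrative):
from jinja2 import Template

template = Template("""
{% if result is mapping %}
<p> The details are not here </p>
{% else %}
{% for each_value in result %}
<td>{{ each_value.student_name }}</td>
{% endfor %}
{% endif %}
""")

# dict -> the if branch runs
print(template.render(result={'student_name': 'a', 'student_id': 1}))
# list of dicts -> the else branch runs
print(template.render(result=[{'student_name': 'b', 'student_id': 2}]))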
| Checking the type of variable in Jinja2 | I want to check the type of variable in Jinja2. If it is type of variable is dictionary then I have to print some text in the paragraph and if it's not dict then I have to print some other values.
What I tried here is
{% if {{result}} is dict %}
<tr>
<td>
<p> The details are not here </p>
</td>
</tr>
{% else %}
{% for each_value in result %}
<tr>
<td>each_value.student_name</td>
</tr>
{% endfor %}
{% endif %}
The result I get is two different ways one is of dict type
I.result={'student_name':'a','student_id':1,'student_email':'my_name@gmail.com'}
the another format of result is
II.result=[{'student_name':'b','student_id':2,'student_email':'my_nameb@gmail.com','time':[{'st':1,'et':2},{'st':3,'et':4}]}]
Expected result
If I get the format 'I' then the if loop should get execute.
If I get the format 'II' then the else loop should get execute.
Actual result
jinja2.exceptions.TemplateSyntaxError: expected token ':', got '}'
| [
"You should replace {% if {{result}} is dict %} with {% if result is mapping %}.\nReference\n",
"Alternative, and possibly better solutions:\n{% if result.__class__.__name__ == \"dict\" %}\nor add isinstance to Jinja context, and then\n{% if isinstance(result, dict) %}\n"
] | [
1,
0
] | [] | [] | [
"jinja2",
"python"
] | stackoverflow_0058264079_jinja2_python.txt |
Q:
I'm doing some basic conditional exercises, and I don't know what the % numbers mean in this code
currentYear = int(input('Enter the year: '))
month = int(input('Enter the month: '))
if ((currentYear % 4) == 0 and (currentYear % 100) != 0 or (currentYear % 400) ==0):
print('Leap Year')
I have no idea what the % numbers in the brackets with the currentYear mean. I gather it has something to do with leap years, but how does it become %4, %100 or %400?
I don't know what this is all about to be honest...
A:
The % symbol in Python is called the Modulo Operator. It returns the remainder of dividing the left hand operand by right hand operand. It's used to get the remainder of a division problem.
So 100 % 5 == 0
or
100 % 3 == 1 ---> Remainder equals 1
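Putting that together with the code from the question, a small worked example (the year value is just an illustration):
year = 2024
print(year % 4, year % 100, year % 400)   # 0 24 24
is_leap = (year % 4 == 0 and year % 100 != 0) or (year % 400 == 0)
print(is_leap)                            # True, so 2024 is a leap year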
| I'm doing some basis conditional exercises, and I don't know what the % numbers mean in this code | currentYear = int(input('Enter the year: '))
month = int(input('Enter the month: '))
if ((currentYear % 4) == 0 and (currentYear % 100) != 0 or (currentYear % 400) ==0):
print('Leap Year')
I have no idea what the % numbers in the brackets with the currentYear means. I gather it has something to do with leap years, but how does it become %4, %100 or %400?
I don't know what this is all about to be honest...
| [
"The % symbol in Python is called the Modulo Operator. It returns the remainder of dividing the left hand operand by right hand operand. It's used to get the remainder of a division problem.\nSo 100 % 5 == 0\nor\n100 % 3 == 1 ---> Remainder equals 1\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074677867_python.txt |
Q:
Render markdown files contain mermaid diagrams to a combined PDF file using mkdocs
Currently, I'm using mkdocs-material to use mermaid diagrams, configured as follows (in mkdocs.yml):
...
markdown_extensions:
- pymdownx.superfences:
custom_fences:
- name: mermaid
class: mermaid
...
However, I'm running into trouble with PDF exporting.
I have tried several plugins. Most of them depend on WeasyPrint and have problems with the JavaScript parts or the mermaid diagrams (they didn't render and are still shown in code-block style). There is one plugin (mkdocs-pdf-with-js-plugin) that prints pages in an easy and simple way, using the browser to do the job. However, it doesn't have the combine feature (combining all pages into a single PDF file) that I need, as the mkdocs-pdf-export-plugin package does.
Are there any other plugins that support exporting PDFs with mermaid diagrams and a combine feature?
A:
My current workaround
Run: ENABLE_PDF_EXPORT=1 mkdocs build. Each markdown file will be exported to a PDF file.
Then, I will define the order of all PDFs when merging into one unique file by putting the PDF name from top to bottom:
In chapters.txt:
A.pdf
B.pdf
C.pdf
...
Then run the following script. Remember that this script is just a hint of what I have done; it is not complete yet and will not run "as is".
# ================================================================================================
# Move all pdfs from "site" (the output dir of pdf exporting) to the scripts/pdf_export/pdfs
# ================================================================================================
find site -name "*.pdf" -exec mv {} scripts/pdf_export/pdfs \;
cd scripts/pdf_export/pdfs
# ================================================================================================
# Merge all pdfs into one single pdf file wrt the file name's order in chapters.txt
# ================================================================================================
# REMEMBER to put the chapters.txt into scripts/pdf_export/pdfs.
# Install: https://www.pdflabs.com/tools/pdftk-server/
# Install for M1 only: https://stackoverflow.com/a/60889993/6563277 to avoid the "pdftk: Bad CPU type in executable" on Mac
pdftk $(cat chapters.txt) cat output book.pdf
# ================================================================================================
# Add page numbers
# ================================================================================================
# Count pages https://stackoverflow.com/a/27132157/6563277
pageCount=$(pdftk book.pdf dump_data | grep "NumberOfPages" | cut -d":" -f2)
# Turn back to scripts/pdf_export
cd ..
# https://stackoverflow.com/a/30416992/6563277
# Create an overlay pdf file containing only page numbers
gs -o pagenumbers.pdf \
-sDEVICE=pdfwrite \
-g5950x8420 \
-c "/Helvetica findfont \
12 scalefont setfont \
1 1 ${pageCount} { \
/PageNo exch def \
450 20 moveto \
(Page ) show \
PageNo 3 string cvs \
show \
( of ${pageCount}) show \
showpage \
} for"
# Blend pagenumbers.pdf with the original pdf file
pdftk pdfs/book.pdf \
multistamp pagenumbers.pdf \
output final_book.pdf
However, we need other customization like table of contents, book cover, and author section, ... All the above steps are just merging and adding page nums! Lots of things to do.
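As a side note, the pdftk merge step above could also be done in pure Python. A minimal sketch (assuming the pypdf package is installed and chapters.txt lists one PDF filename per line):
from pypdf import PdfWriter

writer = PdfWriter()
with open("chapters.txt") as f:
    for name in f.read().split():
        writer.append(name)   # append each chapter PDF in the listed order
writer.write("book.pdf")
writer.close()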
| Render markdown files contain mermaid diagrams to a combined PDF file using mkdocs | Currently, I'm using mkdocs-materialto use mermaid diagrams, configured as follows (in mkdocs.yml):
...
markdown_extensions:
- pymdownx.superfences:
custom_fences:
- name: mermaid
class: mermaid
...
However, I encounter troubles with PDF exporting.
I have tried several plugins. Most of them depend on Weasy Print and have problems with javascript parts or mermaid diagrams (didn't render and still in code block's style). There is one plugin (mkdocs-pdf-with-js-plugin) that prints pages in an easy and simple way which uses browser to do the job. However, it doesn't contain the combined feature (combine all pages into a single PDF file) that I need as mkdocs-pdf-export-plugin package.
Is there any other plugins that support exporting PDF with mermaid diagrams and combine feature?
| [
"My current workaround\nRun: ENABLE_PDF_EXPORT=1 mkdocs build. Each markdown file will be exported to a PDF file.\nThen, I will define the order of all PDFs when merging into one unique file by putting the PDF name from top to bottom:\nIn chapters.txt:\nA.pdf\nB.pdf\nC.pdf\n...\n\nThen run the following script. Remember that this script is just a hint of what I have done, it has not been completed yet and has not run \"as is\".\n# ================================================================================================\n# Move all pdfs from \"site\" (the output dir of pdf exporting) to the scripts/pdf_export/pdfs\n# ================================================================================================\nfind site -name \"*.pdf\" -exec mv {} scripts/pdf_export/pdfs \\;\n\ncd scripts/pdf_export/pdfs\n\n# ================================================================================================\n# Merge all pdfs into one single pdf file wrt the file name's order in chapters.txt\n# ================================================================================================\n# REMEMBER to put the chapters.txt into scripts/pdf_export/pdfs.\n# Install: https://www.pdflabs.com/tools/pdftk-server/\n# Install for M1 only: https://stackoverflow.com/a/60889993/6563277 to avoid the \"pdftk: Bad CPU type in executable\" on Mac\npdftk $(cat chapters.txt) cat output book.pdf\n\n# ================================================================================================\n# Add page numbers\n# ================================================================================================\n# Count pages https://stackoverflow.com/a/27132157/6563277\npageCount=$(pdftk book.pdf dump_data | grep \"NumberOfPages\" | cut -d\":\" -f2)\n\n# Turn back to scripts/pdf_export\ncd ..\n\n# https://stackoverflow.com/a/30416992/6563277\n# Create an overlay pdf file containing only page numbers\ngs -o pagenumbers.pdf \\\n -sDEVICE=pdfwrite \\\n -g5950x8420 \\\n -c \"/Helvetica findfont \\\n 12 scalefont setfont \\\n 1 1 ${pageCount} { \\\n /PageNo exch def \\\n 450 20 moveto \\\n (Page ) show \\\n PageNo 3 string cvs \\\n show \\\n ( of ${pageCount}) show \\\n showpage \\\n } for\"\n\n# Blend pagenumbers.pdf with the original pdf file\npdftk pdfs/book.pdf \\\n multistamp pagenumbers.pdf \\\n output final_book.pdf\n\nHowever, we need other customization like table of contents, book cover, and author section, ... All the above steps are just merging and adding page nums! Lots of things to do.\n"
] | [
0
] | [] | [] | [
"mermaid",
"mkdocs",
"pdf",
"python"
] | stackoverflow_0074602739_mermaid_mkdocs_pdf_python.txt |
Q:
How to drop rows of a column having float datatype and are values less than 1
I am new to pandas and I have just started to learn how to analyze data.
In order to explain my problem,
Consider this table as df.csv
Name   Age   Height
A      2     5.7
B      4     5.4
C      8     5.9
D      4     0.6
From this file, I want to drop the row that has Height less than 1, so that when I pass this command, it would delete the specified row and show me this:
Name   Age   Height
A      2     5.7
B      4     5.4
C      8     5.9
I wrote this command:
dec = df[df['Height']<0.0].index
df.drop(dec,inplace=true)
df
but it is showing me this:
Name   Age   Height
A      2     5.7
B      4     5.4
C      8     5.9
D      4     0.6
instead of :
Name   Age   Height
A      2     5.7
B      4     5.4
C      8     5.9
Is there a way to achieve this?
A:
dec = df[df['Height'] < 1.0].index
df.drop(dec, inplace=True)
True and False are written in capital letters and the check is needed for 1 and not for 0.
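An equivalent one-liner with boolean indexing, in case you prefer not to use drop (a small illustrative alternative):
df = df[df['Height'] >= 1.0].reset_index(drop=True)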
| How to drop rows of a column having float datatype and are values less than 1 | I am new to pandas and I have just started to learn how to analyze a data.
In order to explain y problem,
Consider this table as df.csv
Name
Age
Height
A
2
5.7
B
4
5.4
C
8
5.9
D
4
0.6
From this file, I want to drop the row that has Height less than 1 so that when i pass this command, it would delete the specified row and show me this:
Name
Age
Height
A
2
5.7
B
4
5.4
C
8
5.9
I wrote this command:
dec = df[df['Height']<0.0].index
df.drop(dec,inplace=true)
df
but it is showing me this:
Name
Age
Height
A
2
5.7
B
4
5.4
C
8
5.9
D
4
0.6
instead of :
Name
Age
Height
A
2
5.7
B
4
5.4
C
8
5.9
is there a way to achieve this?
| [
"dec = df[df['Height'] < 1.0].index\ndf.drop(dec, inplace=True)\n\nTrue and False are written in capital letters and the check is needed for 1 and not for 0.\n"
] | [
0
] | [] | [] | [
"csv",
"dataframe",
"pandas",
"python"
] | stackoverflow_0074664750_csv_dataframe_pandas_python.txt |
Q:
Avro, Hive or HBASE - What to use for 10 mio. records daily?
I have the following requirements: I need to process around 20,000 elements per day (let's call them baskets), each of which generates between 100 and 1,000 records (let's call them products in a basket). A single record has about 10 columns; each row is about 500 B - 1 KB in size (in total).
That means I produce around 5 to at most 20 million records per day.
From an analytical perspective I need to do some summing up and filtering, and especially show trends over multiple days, etc.
The solution is Python based and I am able to use anything: Hadoop, Microsoft SQL Server, Google BigQuery, etc. I am reading through lots of articles about Avro, Parquet, Hive, HBASE, etc.
I first tested something small with SQL Server and two tables (one for the main elements and the other one for the produced items over all days). But with this, the database gets quite large very fast, and it is not that fast when trying to access, filter, etc.
So I thought about using Avro and creating a single Avro file per day with the corresponding items. When I want to analyse them, I would read one file with Python, or multiple of them when I need to analyse multiple days.
When I think about this, this could be way too large (30 daily files with 10 million records each) ...
There must be something else. Then I came across HIVE and HBASE. But now I am totally confused.
Is there anyone out there who can sort things out in the right manner? What is the easiest or most general way to handle this kind of data?
A:
If you want to analyze data based on columns and aggregates, ORC or Parquet are better. If you don't plan on managing Hadoop infrastructure, then Hive or HBase wouldn't be acceptable. I agree a SQL Server might struggle with large queries... Out of the options listed, that narrows it down to BigQuery.
If you want to explore alternative solutions in the same space, Apache Pinot or Druid support analytical use cases.
Otherwise, throw files (as parquet or ORC) into GCS and use pyspark
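A rough sketch of that last option (the bucket path and column names are made up for illustration, and reading gs:// paths assumes the GCS connector is configured for Spark):
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("basket-analytics").getOrCreate()

# one Parquet folder per day, partitioned by date
df = spark.read.parquet("gs://my-bucket/baskets/")
trend = (df.groupBy("date", "product_id")
           .agg(F.sum("quantity").alias("total_qty")))
trend.orderBy("date").show()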
| Avro, Hive or HBASE - What to use for 10 mio. records daily? | I have the following requirements: i need to process per day around 20.000 elements (lets call them baskets) which generate each between 100 and 1.000 records (lets call them products in basket). A single record has about 10 columns, each row has about 500B - 1KB size (in total).
That means, that i produce around 5 to max. 20 Mio. records per day.
From analytical perspective i need to do some sum up, filtering, especially show trends over multiple days etc.
The solution is Python based and i am able to use anything Hadoop, Microsoft SQL Server, Google Big Query etc. I am reading through lots of articles about Avro, Parquet, Hive, HBASE, etc.
I tested in the first something small with SQL Server and two tables (one for the main elements and the other one the produced items over all days). But with this, the database get very fast quite large + it is not that fast when trying to acess, filter, etc.
So i thought about using Avro and creating per day a single Avro file with the corresponding items. And when i want to analyse them, read them with Python or multiple of them, when i need to analyse multiple of them.
When i think about this, this could be way to large (30 days files with each 10 mio. records) ...
There must be something else. Then i came aroung HIVE and HBASE. But now i am totally confused.
Anyone out there who can sort things in the right manner? What is the easiest or most general way to handle this kind of data?
| [
"If you want to analyze data based on columns and aggregates, ORC or Parquet are better. If you don't plan on managing Hadoop infrastructure, then Hive or HBase wouldn't be acceptable. I agree a SQL Server might struggle with large queries... Out of the options listed, that narrows it down to BigQuery.\nIf you want to explore alternative solutions in the same space, Apache Pinot or Druid support analytical use cases.\nOtherwise, throw files (as parquet or ORC) into GCS and use pyspark\n"
] | [
0
] | [] | [] | [
"avro",
"hbase",
"hive",
"parquet",
"python"
] | stackoverflow_0074655522_avro_hbase_hive_parquet_python.txt |
Q:
Circularity Calculation with Perimeter & Area of a Simple Circle
Circularity signifies the comparability of the shape to a circle.
A measure of circularity is the shape area to the circle area ratio having an identical perimeter (we denote it as Circle Area) as represented in equation below.
Sample Circularity = Sample Area / Circle Area
Let the perimeter of shape be P, so
P = 2 * pi * r
then
P^2 = 4 * pi^2 r^2 = 4 * pi * (pi * r^2) = 4 * pi * Circle Area. Thus
Circle Area = Sample Perimeter^2 / (4 * pi)
which implies
Sample Circularity = (4 * pi * Sample Area) / (Sample Perimeter^2)
So with help of math, there is no need to find an algorithm to calculate fit circle or draw it on a right way over shape or etc.
This statistic equals 1 for a circular object and less than 1 for an object that departs from circularity, except that it is relatively insensitive to irregular boundaries.
ok, that's fine, but ... .
In Python I try to calculate circularity for a simple circle, but I always get 1.11. My Python approach is:
import cv2
import math
Gray_image = cv2.imread(Input_Path, cv2.IMREAD_GRAYSCALE)
cnt , her = cv2.findContours(Gray_image, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
Perimeter = cv2.arcLength(cnt[0], True)
Area = cv2.contourArea(cnt[0])
Circularity = math.pow(Perimeter, 2) / (4 * math.pi * Area)
print(round(Circularity , 2))
If i use
Perimeter = len(cnt[0])
then answer is 0.81 which is incorrect again. Thank you for taking the time to answer.
To draw a circle, use following command:
import cv2
import numpy as np
Fill_Circle = np.zeros((1000, 1000, 3))
cv2.circle(Fill_Circle, (500, 500), 450, (255, 255, 255), -1)
cv2.imwrite(Path_to_Save, Fill_Circle)
A:
As I mentioned in this recent answer to a related question, OpenCV's perimeter estimate is not good enough to compute the circularity feature. OpenCV computes the perimeter by adding up all the distances between vertices of the polygon built from the edge pixels of the image. This length is typically larger than the actual perimeter of the actual object imaged. This blog post of mine describes the problem well, and provides a better way to estimate the perimeter of an object from a binary image.
This better method is implemented (among other places) in DIPlib, in the function dip.MeasurementTool.Measure(), as the feature "Perimeter". [Disclosure: I'm an author of DIPlib].
The feature "Roundness" implements what you refer to a circularity here (these feature names are used interchangeably in the literature). There is a different feature referred to as "Circularity" in DIPlib, which does not depend on the perimeter and typically is more precise if the shape is close to a circle.
This is how you would use that function:
import diplib as dip
import cv2
import numpy as np
Fill_Circle = np.zeros((1000, 1000, 3))
cv2.circle(Fill_Circle, (500, 500), 450, (255, 255, 255), -1)
labels = dip.Label(Fill_Circle[:, :, 0] > 0)
msr = dip.MeasurementTool.Measure(labels, features=["Perimeter", "Size", "Roundness", "Circularity"])
print(msr)
Circularity = msr[1]["Roundness"][0]
For your circle, I see:
area = 636121.0
perimeter = 2829.27
roundness = 0.9986187 (this is what you refer to as circularity)
circularity = 0.0005368701 (closer to 0 means more like a circle)
| Circularity Calculation with Perimeter & Area of a Simple Circle | Circularity signifies the comparability of the shape to a circle.
A measure of circularity is the shape area to the circle area ratio having an identical perimeter (we denote it as Circle Area) as represented in equation below.
Sample Circularity = Sample Area / Circle Area
Let the perimeter of shape be P, so
P = 2 * pi * r
then
P^2 = 4 * pi^2 r^2 = 4 * pi * (pi * r^2) = 4 * pi * Circle Area. Thus
Circle Area = Sample Perimeter^2 / (4 * pi)
which implies
Sample Circularity = (4 * pi * Sample Area) / (Sample Perimeter^2)
So with help of math, there is no need to find an algorithm to calculate fit circle or draw it on a right way over shape or etc.
This statistic equals 1 for a circular object and less than 1 for an object that departs from circularity, except that it is relatively insensitive to irregular boundaries.
ok, that's fine, but ... .
In python i try calculate circularity for a simple circle but always i got 1.11. My python approach is:
import cv2
import math
Gray_image = cv2.imread(Input_Path, cv2.IMREAD_GRAYSCALE)
cnt , her = cv2.findContours(Gray_image, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
Perimeter = cv2.arcLength(cnt[0], True)
Area = cv2.contourArea(cnt[0])
Circularity = math.pow(Perimeter, 2) / (4 * math.pi * Area)
print(round(Circularity , 2))
If i use
Perimeter = len(cnt[0])
then answer is 0.81 which is incorrect again. Thank you for taking the time to answer.
To draw a circle, use following command:
import cv2
import numpy as np
Fill_Circle = np.zeros((1000, 1000, 3))
cv2.circle(Fill_Circle, (500, 500), 450, (255, 255, 255), -1)
cv2.imwrite(Path_to_Save, Fill_Circle)
| [
"As I mentioned in this recent answer to a related question, OpenCV's perimeter estimate is not good enough to compute the circularity feature. OpenCV computes the perimeter by adding up all the distances between vertices of the polygon built from the edge pixels of the image. This length is typically larger than the actual perimeter of the actual object imaged. This blog post of mine describes the problem well, and provides a better way to estimate the perimeter of an object from a binary image.\nThis better method is implemented (among other places) in DIPlib, in the function dip.MeasurementTool.Measure(), as the feature \"Perimeter\". [Disclosure: I'm an author of DIPlib].\nThe feature \"Roundness\" implements what you refer to a circularity here (these feature names are used interchangeably in the literature). There is a different feature referred to as \"Circularity\" in DIPlib, which does not depend on the perimeter and typically is more precise if the shape is close to a circle.\nThis is how you would use that function:\nimport diplib as dip\nimport cv2\nimport numpy as np\n\nFill_Circle = np.zeros((1000, 1000, 3))\ncv2.circle(Fill_Circle, (500, 500), 450, (255, 255, 255), -1)\n\nlabels = dip.Label(Fill_Circle[:, :, 0] > 0)\nmsr = dip.MeasurementTool.Measure(labels, features=[\"Perimeter\", \"Size\", \"Roundness\", \"Circularity\"])\nprint(msr)\n\nCircularity = msr[1][\"Roundness\"][0]\n\nFor your circle, I see:\n\narea = 636121.0\nperimeter = 2829.27\nroundness = 0.9986187 (this is what you refer to as circularity)\ncircularity = 0.0005368701 (closer to 0 means more like a circle)\n\n"
] | [
1
] | [] | [] | [
"image_processing",
"opencv",
"python"
] | stackoverflow_0074580811_image_processing_opencv_python.txt |
Q:
updating options in optionmenu Tkinter
I am currently working on a small hobby project and I have a problem concerning my list "dice": when using the dropdown menu, it only ever shows the first iteration of the list (the single 0), but it is supposed to be updated in the dropdown menu after each press of the "roll the dice" button. How do I do that?
from random import randint
from tkinter import *
root = Tk()
root.title('Hobbyprojekt')
count = -1
global dice
dice = [0]
prpp= IntVar()
diceshow=Label()
#defining funtions for buttons
def roll():
global count
global diceshow
global dice
count +=1
print(count)
if count >= 1:
dice=[]
for x in range (0,7) :
dice.append(randint(1,10))
#calculating the viable dice options
for x in range (0,2) :
dice.remove(min(dice))
if count >= 1:
diceshow.destroy()
print("destroyed")
diceshow=Label(root, text=dice)
diceshow.grid(row=0,column=1)
print(dice)
print(dice[1])
print(dice[2])
print(dice[3])
#setting up the test gui
button1 = Button(root, text='Roll the dice', command=roll)
label1= Label(text='choice1')
label2= Label(text='choice2')
label3= Label(text='choice3')
label4= Label(text='choice4')
label5= Label(text='choice5')
label6= Label(text='choice6')
dd1= OptionMenu(root,prpp,*dice)
dd2= OptionMenu(root,prpp,*dice)
dd3= OptionMenu(root,prpp,*dice)
dd4= OptionMenu(root,prpp,*dice)
dd5= OptionMenu(root,prpp,*dice)
dd6= OptionMenu(root,prpp,*dice)
#setting layout
button1.grid(row=0,column=0)
label1.grid(row=1,column=0)
label2.grid(row=2,column=0)
label3.grid(row=3,column=0)
label4.grid(row=4,column=0)
label5.grid(row=5,column=0)
label6.grid(row=6,column=0)
dd1.grid(row=1, column=1)
dd2.grid(row=2,column=1)
dd3.grid(row=3,column=1)
dd4.grid(row=4,column=1)
dd5.grid(row=5,column=1)
dd6.grid(row=6,column=1)
root.mainloop()
So I'm actually lost for ideas on what to do, since I am fairly new to Python. The only thing I could think of is creating the dropdown menus within the "diceroll" button definition, but that is not really what I would want to do. Thanks in advance.
edit: fixed spelling.
A:
Is this what you want? I moved the OptionMenu creation into the roll() function.
from random import randint
from tkinter import *
root = Tk()
root.title('Hobbyprojekt')
count = -1
#global dice
dice = [0]
prpp= IntVar()
diceshow=Label()
#defining funtions for buttons
def roll():
global count
global diceshow
global dice
count +=1
print(count)
if count >= 1:
dice=[]
for x in range (0,7) :
dice.append(randint(1,10))
#calculating the viable dice options
for x in range (0,2) :
dice.remove(min(dice))
if count >= 1:
diceshow.destroy()
print("destroyed")
diceshow=Label(root, text=dice)
diceshow.grid(row=0,column=1)
print(dice)
print(dice[1])
print(dice[2])
print(dice[3])
dd1= OptionMenu(root,prpp,dice[0])
dd2= OptionMenu(root,prpp,dice[1])
dd3= OptionMenu(root,prpp,dice[2])
dd4= OptionMenu(root,prpp,dice[3])
dd5= OptionMenu(root,prpp,dice[4])
dd6= OptionMenu(root,prpp,dice[5])
dd1.grid(row=1, column=1)
dd2.grid(row=2,column=1)
dd3.grid(row=3,column=1)
dd4.grid(row=4,column=1)
dd5.grid(row=5,column=1)
dd6.grid(row=6,column=1)
#setting up the test gui
button1 = Button(root, text='Roll the dice', command=roll)
label1= Label(text='choice1')
label2= Label(text='choice2')
label3= Label(text='choice3')
label4= Label(text='choice4')
label5= Label(text='choice5')
label6= Label(text='choice6')
#setting layout
button1.grid(row=0,column=0)
label1.grid(row=1,column=0)
label2.grid(row=2,column=0)
label3.grid(row=3,column=0)
label4.grid(row=4,column=0)
label5.grid(row=5,column=0)
label6.grid(row=6,column=0)
root.mainloop()
Result before:
Result after:
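As an alternative to recreating the widgets on every roll, the existing OptionMenus could be updated in place. An untested sketch, using the widget names from the question's code:
def refresh_options(option_menu, values, variable):
    menu = option_menu["menu"]
    menu.delete(0, "end")
    for v in values:
        menu.add_command(label=v, command=lambda val=v: variable.set(val))

# inside roll(), after the new dice list has been built, something like:
# for dd in (dd1, dd2, dd3, dd4, dd5, dd6):
#     refresh_options(dd, dice, prpp)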
| updating options in optionmenu Tkinter | I am currently writing on a small hobby projekt and i have a problem concerning my list "dice" while using the dropdown menu it only ever shows the first itteration of the list (the single 0) but it is sopposed to be updated in the dropdown menue after each press of the "roll the dice" button. How do i do that?
from random import randint
from tkinter import *
root = Tk()
root.title('Hobbyprojekt')
count = -1
global dice
dice = [0]
prpp= IntVar()
diceshow=Label()
#defining funtions for buttons
def roll():
global count
global diceshow
global dice
count +=1
print(count)
if count >= 1:
dice=[]
for x in range (0,7) :
dice.append(randint(1,10))
#calculating the viable dice options
for x in range (0,2) :
dice.remove(min(dice))
if count >= 1:
diceshow.destroy()
print("destroyed")
diceshow=Label(root, text=dice)
diceshow.grid(row=0,column=1)
print(dice)
print(dice[1])
print(dice[2])
print(dice[3])
#setting up the test gui
button1 = Button(root, text='Roll the dice', command=roll)
label1= Label(text='choice1')
label2= Label(text='choice2')
label3= Label(text='choice3')
label4= Label(text='choice4')
label5= Label(text='choice5')
label6= Label(text='choice6')
dd1= OptionMenu(root,prpp,*dice)
dd2= OptionMenu(root,prpp,*dice)
dd3= OptionMenu(root,prpp,*dice)
dd4= OptionMenu(root,prpp,*dice)
dd5= OptionMenu(root,prpp,*dice)
dd6= OptionMenu(root,prpp,*dice)
#setting layout
button1.grid(row=0,column=0)
label1.grid(row=1,column=0)
label2.grid(row=2,column=0)
label3.grid(row=3,column=0)
label4.grid(row=4,column=0)
label5.grid(row=5,column=0)
label6.grid(row=6,column=0)
dd1.grid(row=1, column=1)
dd2.grid(row=2,column=1)
dd3.grid(row=3,column=1)
dd4.grid(row=4,column=1)
dd5.grid(row=5,column=1)
dd6.grid(row=6,column=1)
root.mainloop()
So i'm acctually lost for ideas on what to do since i am fairly new to python. Only thing i could think of is creating the dropdown menus within the "diceroll" button definition but that is not reall what would want to do. Thanks in advance.
edit: fixed spelling.
| [
"Is this is what you want? I moved optionmenu into roll() function\nfrom random import randint\nfrom tkinter import *\n\nroot = Tk()\nroot.title('Hobbyprojekt')\n\ncount = -1\n#global dice\ndice = [0]\nprpp= IntVar() \ndiceshow=Label()\n#defining funtions for buttons \ndef roll():\n global count\n global diceshow\n global dice\n count +=1\n print(count)\n if count >= 1:\n dice=[]\n for x in range (0,7) :\n dice.append(randint(1,10))\n \n #calculating the viable dice options\n for x in range (0,2) :\n dice.remove(min(dice))\n\n if count >= 1:\n diceshow.destroy()\n print(\"destroyed\")\n \n diceshow=Label(root, text=dice)\n diceshow.grid(row=0,column=1)\n print(dice)\n print(dice[1])\n print(dice[2])\n print(dice[3])\n dd1= OptionMenu(root,prpp,dice[0])\n dd2= OptionMenu(root,prpp,dice[1])\n dd3= OptionMenu(root,prpp,dice[2])\n dd4= OptionMenu(root,prpp,dice[3])\n dd5= OptionMenu(root,prpp,dice[4])\n dd6= OptionMenu(root,prpp,dice[5])\n\n dd1.grid(row=1, column=1)\n dd2.grid(row=2,column=1)\n dd3.grid(row=3,column=1)\n dd4.grid(row=4,column=1)\n dd5.grid(row=5,column=1)\n dd6.grid(row=6,column=1)\n\n \n\n#setting up the test gui\nbutton1 = Button(root, text='Roll the dice', command=roll)\nlabel1= Label(text='choice1')\nlabel2= Label(text='choice2')\nlabel3= Label(text='choice3')\nlabel4= Label(text='choice4')\nlabel5= Label(text='choice5')\nlabel6= Label(text='choice6')\n \n#setting layout\nbutton1.grid(row=0,column=0)\n\nlabel1.grid(row=1,column=0)\nlabel2.grid(row=2,column=0)\nlabel3.grid(row=3,column=0)\nlabel4.grid(row=4,column=0)\nlabel5.grid(row=5,column=0)\nlabel6.grid(row=6,column=0)\n \n\nroot.mainloop()\n\nResult before:\n\nResult after:\n\n"
] | [
0
] | [] | [] | [
"optionmenu",
"python",
"tkinter",
"updating"
] | stackoverflow_0074625320_optionmenu_python_tkinter_updating.txt |
Q:
Python 3.11 base64 error " a bytes-like object is required, not 'list' "
I'm trying to make a very basic password manager kind of program, about as basic as it gets, and am using base64 to encode the passwords that are getting saved, but using this:
encode = base64.b64encode(read_output).encode("utf-8")
print("Encrypted key: ",encode)
decode = base64.b64decode(encode).decode("utf-8")
print(decode)
gives me this error:
File "c:\Users\Someone\OneDrive\Documents\VS Codium\pswrdmgr.py", line 152, in <module>
encode = base64.b64encode(read_output).encode("utf-8")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Someone\AppData\Local\Programs\Python\Python311\Lib\base64.py", line 58, in b64encode
encoded = binascii.b2a_base64(s, newline=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: a bytes-like object is required, not 'list'
Any suggestions ? Any help is much appreciated !
I tried using other containers like a dictionary and tuples thinking they might be the issue that's troubling base64 but the problem remains ..
A:
You should first encode the object to bytes, then pass it to the function
(assuming read_output is of type list; this also works with the other basic types):
encode = base64.b64encode(str(read_output).encode("utf-8"))
print("Encrypted key: ", encode)
decode = base64.b64decode(encode).decode("utf-8")
print(decode)
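Note that str(read_output) is not easy to turn back into a list; if a reversible round trip is needed, json is a common choice (the sample data below is made up), and keep in mind that base64 is an encoding, not encryption:
import base64, json

read_output = ["first_password", "second_password"]   # illustrative data
encoded = base64.b64encode(json.dumps(read_output).encode("utf-8"))
decoded = json.loads(base64.b64decode(encoded).decode("utf-8"))
print(decoded == read_output)   # True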
| Python 3.11 base64 error " a bytes-like object is required, not 'list' " | Im tryna make a very basic password manager kinda program that's about as basic as it gets and am using base64 to encode the passwords that are getting saved , but using `
encode = base64.b64encode(read_output).encode("utf-8")
print("Encrypted key: ",encode)
decode = base64.b64decode(encode).decode("utf-8")
print(decode)
gives me an error ;
File "c:\Users\Someone\OneDrive\Documents\VS Codium\pswrdmgr.py", line 152, in <module>
encode = base64.b64encode(read_output).encode("utf-8")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Someone\AppData\Local\Programs\Python\Python311\Lib\base64.py", line 58, in b64encode
encoded = binascii.b2a_base64(s, newline=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: a bytes-like object is required, not 'list'
`
Any suggestions ? Any help is much appreciated !
I tried using other containers like a dictionary and tuples thinking they might be the issue that's troubling base64 but the problem remains ..
| [
"You should first encode, then pass it to the function:\n(assuming read_output is of type list. It will also work with all of the basic types objects)\nencode = base64.b64encode(str(read_output).encode(\"utf-8\"))\nprint(\"Encrypted key: \",encode)\ndecode = base64.b64decode(encode).decode(\"utf-8\")\nprint(decode) ```\n\n"
] | [
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074677941_python_python_3.x.txt |
Q:
How to view Gekko variables/parameters for debug purposes?
I have a fitting task where I am using GEKKO.
There are a lot of variables, arrays of variables, some variables that must contain arrays, and so on.
I didn't have success with the fitting,
so I need to do step-by-step verification of all parameters that I am providing for GEKKO and all the calculated intermediate values.
Is there a way to print out the values of each variable for debugging purposes?
Or to view the values of the variables in line-by-line execution?
for example, I have an array that is saved like a variable ro:
phi = model.Intermediate( c * ro) # phase shift
where c is some constant defined somewhere above in the model definition.
How can I view the values inside phi that will be used in the next steps?
I need to view/save all the values of all variables/constants/intermediates used during the model creation - before trying to solve. Is it possible?
A:
Turn up the DIAGLEVEL to 2 or higher to produce diagnostic files in the run directory m.path.
from gekko import GEKKO
m = GEKKO(remote=False)
c = 2
x = m.Param(3,name='x')
ro = m.Var(value=4,lb=0,ub=10,name='ro')
y = m.Var()
phi = m.Intermediate(c*ro,name='phi')
m.Equation(y==phi**2+x)
m.Maximize(y)
m.options.SOLVER = 1
m.options.DIAGLEVEL=2
m.open_folder()
m.solve()
Here is a summary of the diagnostic files that are produced:
Variables, Equations, Jacobian, Lagrange Multipliers, Objective
apm_eqn.txt
apm_jac.txt
apm_jac_fv.txt
apm_lam.txt
apm_lbt.txt
apm_obj.txt
apm_obj_grad.txt
apm_var.txt
Solver Output and Options
APOPT.out
apopt_current_options.opt
Model File
gk_model0.apm
Data File
gk_model0.csv
Options Files
gk_model0.dbs
options.json
Specification File for FV, MV, SV, CV
gk_model0.info
Inputs to the Model
dbs_read.rpt
input_defaults.dbs
input_gk_model0.dbs
input_measurements.dbs
input_overrides.dbs
measurements.dbs
Results
rto.t0
results.csv
results.json
gk_model0_r_2022y12m04d08h12m28.509s.t0
Initialization Steps Before Solve
rto_1.t0
rto_2.t0
rto_3.t0
rto_3_eqn.txt
rto_3_eqn_var.txt
rto_3_var.t0
Reports After Solve
rto_4.t0
rto_4_eqn.txt
rto_4_eqn_var.txt
rto_4_var.t0
The files of interest for you are likely the rto* initialization files. The name changes based on the IMODE that you run. It is mpu* for your application for a Model Parameter Update with IMODE=2.
| How to view Gekko variables/parameters for debug purposes? | I have a fitting task where I am using GEKKO.
There are a lot of variables, arrays of variables, some variables that must contain arrays, and so on.
I didn't have success with the fitting,
so I need to do step-by-step verification of all parameters that I am providing for GEKKO and all the calculated intermediate values.
Is there a way to print out the values of each variable for debugging purposes?
Or to view the values of the variables in line-by-line execution?
for example, I have an array that is saved like a variable ro:
phi = model.Intermediate( c * ro) # phase shift
where c is some constant defined somewhere above in the model definition.
How can I view the values inside phi that will be used in the next steps?
I need to view/save all the values of all variables/constants/intermediates used during the model creation - before a try to solve. Is it possible?
| [
"Turn up the DIAGLEVEL to 2 or higher to produce diagnostic files in the run directory m.path.\nfrom gekko import GEKKO\nm = GEKKO(remote=False)\nc = 2\nx = m.Param(3,name='x')\nro = m.Var(value=4,lb=0,ub=10,name='ro')\ny = m.Var()\nphi = m.Intermediate(c*ro,name='phi')\nm.Equation(y==phi**2+x)\nm.Maximize(y)\nm.options.SOLVER = 1\nm.options.DIAGLEVEL=2\nm.open_folder()\nm.solve()\n\nHere is a summary of the diagnostic files that are produced:\nVariables, Equations, Jacobian, Lagrange Multipliers, Objective\n\napm_eqn.txt\napm_jac.txt\napm_jac_fv.txt\napm_lam.txt\napm_lbt.txt\napm_obj.txt\napm_obj_grad.txt\napm_var.txt\n\nSolver Output and Options\n\nAPOPT.out\napopt_current_options.opt\n\nModel File\n\ngk_model0.apm\n\nData File\n\ngk_model0.csv\n\nOptions Files\n\ngk_model0.dbs\noptions.json\n\nSpecification File for FV, MV, SV, CV\n\ngk_model0.info\n\nInputs to the Model\n\ndbs_read.rpt\ninput_defaults.dbs\ninput_gk_model0.dbs\ninput_measurements.dbs\ninput_overrides.dbs\nmeasurements.dbs\n\nResults\n\nrto.t0\nresults.csv\nresults.json\ngk_model0_r_2022y12m04d08h12m28.509s.t0\n\nInitialization Steps Before Solve\n\nrto_1.t0\nrto_2.t0\nrto_3.t0\nrto_3_eqn.txt\nrto_3_eqn_var.txt\nrto_3_var.t0\n\nReports After Solve\n\nrto_4.t0\nrto_4_eqn.txt\nrto_4_eqn_var.txt\nrto_4_var.t0\n\nThe files of interest for you are likely the rto* initialization files. The name changes based on the IMODE that you run. It is mpu* for your application for a Model Parameter Update with IMODE=2.\n"
] | [
1
] | [] | [] | [
"gekko",
"optimization",
"python"
] | stackoverflow_0074677526_gekko_optimization_python.txt |
Q:
Assign group number for each row, based on columns value ranges
I have some data, that needs to be clusterised into groups. That should be done by a few predifined conditions.
Suppose we have the following table:
d = {'ID': [100, 101, 102, 103, 104, 105],
'col_1': [12, 3, 7, 13, 19, 25],
'col_2': [3, 1, 3, 3, 2, 4]
}
df = pd.DataFrame(data=d)
df.head()
Here, I want to group ID based on the following ranges and conditions on col_1 and col_2.
For col_1 I divide values into the following groups: [0, 10], [11, 15], [16, 20], [20, +inf]
For col_2 just use the df['col_2'].unique() values: [1], [2], [3], [4].
The desired grouping is in the group_num column:
Notice that rows 0 and 3 have the same group number, and note the order in which the group numbers are assigned.
For now, I only came up with an if-elif function to pre-define all the groups. That is not a real solution, because in my real task there are far more ranges and conditions.
My code snippet, if it's relevant:
# This logic is not working cause here I have to predefine all the groups configurations, aka numbers,
# but I want to make groups "dymanicly":
# first group created and if the next row is not in that group -> create new one
def groupping(val_1, val_2):
# not using match case here, cause my Python < 3.10
if ((val_1 >= 0) and (val_1 <10)) and (val_2 == 1):
return 1
elif ((val_1 >= 0) and (val_1 <10)) and (val_2 == 2):
return 2
elif ...
...
df['group_num'] = df.apply(lambda x: groupping(x.col_1, x.col_2), axis=1)
A:
Not sure I understand the full logic, can't you use pandas.cut:
bins = [0, 10, 15, 20, np.inf]
df['group_num'] = pd.cut(df['col_1'], bins=bins,
labels=range(1, len(bins)))
Output:
ID col_1 col_2 group_num
0 100 12 3 2
1 101 3 1 1
2 102 7 3 1
3 103 13 2 2
4 104 19 3 3
5 105 25 4 4
A:
make dataframe for chking group
bins = [0, 10, 15, 20, float('inf')]
df1 = df[['col_1', 'col_2']].assign(col_1=pd.cut(df['col_1'], bins=bins, right=False)).sort_values(['col_1', 'col_2'])
df1
col_1 col_2
1 [0.0, 10.0) 1
2 [0.0, 10.0) 3
0 [10.0, 15.0) 3
3 [10.0, 15.0) 3
4 [15.0, 20.0) 2
5 [20.0, inf) 4
chk group by df1
df1.ne(df1.shift(1)).any(axis=1).cumsum()
output:
1 1
2 2
0 3
3 3
4 4
5 5
dtype: int32
make output to group_num column
df.assign(group_num=df1.ne(df1.shift(1)).any(axis=1).cumsum())
result:
ID col_1 col_2 group_num
0 100 12 3 3
1 101 3 1 1
2 102 7 3 2
3 103 13 3 3
4 104 19 2 4
5 105 25 4 5
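For completeness, a small sketch that combines both columns directly (group numbers are assigned in order of first appearance, which may or may not match the exact numbering you had in mind; df is the frame from the question):
import numpy as np
import pandas as pd

bins = [0, 10, 15, 20, np.inf]
binned = pd.cut(df['col_1'], bins=bins).astype(str)
# one key per (col_1 bin, col_2 value) pair; factorize numbers them by first appearance
keys = binned + '_' + df['col_2'].astype(str)
df['group_num'] = pd.factorize(keys)[0] + 1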
| Assign group number for each row, based on columns value ranges | I have some data, that needs to be clusterised into groups. That should be done by a few predifined conditions.
Suppose we have the following table:
d = {'ID': [100, 101, 102, 103, 104, 105],
'col_1': [12, 3, 7, 13, 19, 25],
'col_2': [3, 1, 3, 3, 2, 4]
}
df = pd.DataFrame(data=d)
df.head()
Here, I want to group ID based on the following ranges, conditions, on col_1 and col_2.
For col_1 I divide values into following groups: [0, 10], [11, 15], [16, 20], [20, +inf]
For col_2 just use the df['col_2'].unique() values: [1], [2], [3], [4].
The desired groupping is in group_num column:
notice, that 0 and 3 rows have the same group number and the order, in which group number is assigned.
For now, I only came up with if-elif function to pre-define all the groups. It's not the solution for now cause in my real task there are far more ranges and confitions.
My code snippet, if it's relevant:
# This logic is not working cause here I have to predefine all the groups configurations, aka numbers,
# but I want to make groups "dymanicly":
# first group created and if the next row is not in that group -> create new one
def groupping(val_1, val_2):
# not using match case here, cause my Python < 3.10
if ((val_1 >= 0) and (val_1 <10)) and (val_2 == 1):
return 1
elif ((val_1 >= 0) and (val_1 <10)) and (val_2 == 2):
return 2
elif ...
...
df['group_num'] = df.apply(lambda x: groupping(x.col_1, x.col_2), axis=1)
| [
"Not sure I understand the full logic, can't you use pandas.cut:\nbins = [0, 10, 15, 20, np.inf]\ndf['group_num'] = pd.cut(df['col_1'], bins=bins,\n labels=range(1, len(bins)))\n\nOutput:\n ID col_1 col_2 group_num\n0 100 12 3 2\n1 101 3 1 1\n2 102 7 3 1\n3 103 13 2 2\n4 104 19 3 3\n5 105 25 4 4\n\n",
"make dataframe for chking group\nbins = [0, 10, 15, 20, float('inf')]\ndf1 = df[['col_1', 'col_2']].assign(col_1=pd.cut(df['col_1'], bins=bins, right=False)).sort_values(['col_1', 'col_2'])\n\ndf1\n col_1 col_2\n1 [0.0, 10.0) 1\n2 [0.0, 10.0) 3\n0 [10.0, 15.0) 3\n3 [10.0, 15.0) 3\n4 [15.0, 20.0) 2\n5 [20.0, inf) 4\n\n\nchk group by df1\ndf1.ne(df1.shift(1)).any(axis=1).cumsum()\n\noutput:\n1 1\n2 2\n0 3\n3 3\n4 4\n5 5\ndtype: int32\n\n\nmake output to group_num column\ndf.assign(group_num=df1.ne(df1.shift(1)).any(axis=1).cumsum())\n\nresult:\n ID col_1 col_2 group_num\n0 100 12 3 3\n1 101 3 1 1\n2 102 7 3 2\n3 103 13 3 3\n4 104 19 2 4\n5 105 25 4 5\n\n"
] | [
2,
2
] | [] | [] | [
"group_by",
"grouping",
"lambda",
"pandas",
"python"
] | stackoverflow_0074677294_group_by_grouping_lambda_pandas_python.txt |
Q:
print contents of remaining tags after a tag beautifulsoup
I just printed all the contents of the li tags using .find_all('li'), and I want to continue printing 'p' tags after the li tags end: not the 'p' tags at the beginning of the HTML or in between, but the 'p' tags (or remaining tags) at the end. Please help. Basically I need everything after the final list-end tag.
from bs4 import BeautifulSoup
html_doc = """\
<html>
<p>
don't need this
</p>
<li>
text i need
</li>
<li>
<p>
don't need this
</p>
<p>
don't need this
</p>
<li>
text i need
<ol>
<li>
text i need but appended to parent li tag
</li>
<li>
text i need but appended to parent li tag
</li>
</ol>
</li>
<li>
text i need
</li>
<p>
also need this
</p>
<p>
and this
</p>
<p>
and this too
</p>"""
soup = BeautifulSoup(html_doc, "html.parser")
for li in soup.select("li"):
if li.find_parent("li"):
continue
print(" ".join(li.text.split()))
print("--sep--")
this prints
text i need
--sep--
text i need text i need but appended to parent li tag text i need but appended to parent li tag
--sep--
text i need
--sep--
thanks to @Andrej Kesely
i need this
text i need
--sep--
text i need text i need but appended to parent li tag text i need but appended to parent li tag
--sep--
text i need
--sep--
also need this
--sep--
and this
--sep--
and this too
--sep--
A:
You can try this:
for li in soup.select("li:not(li li)"):
print(" ".join([
d.get_text().strip() for d in li.descendants
if 'NavigableString' in str(type(d)) and
d.parent.name == 'li' and d.get_text().strip()
]))
print("--sep--")
# for the p tags after ANY of the [outermost] li tags
for p in soup.select("li:not(li li) ~ p"): print(p.text.strip(), "\n--sep--")
(Using :not(li li) lets you not need the if li.find_parent("li"): continue part.)
This should get you
the text from [outermost] li tags, but only made up of strings that are directly inside that li tag or an li tag inside it
and then
text from p tags that are sibling to a preceding outermost li tag. (If you want only p tags after the last li, use for p in soup.select("li:not(li li) ~ p:not(:has(~ li))")...)
| print contents of remaining tags after a tag beautifulsoup | i just printed all the contents of li using .find_all('li') and i want to continue printing 'p' tags after li tag ends, like not 'p' tags in the beginning of html or inbetween. 'p' tags or remaining tags at the end. please help. Basically need everything after final list-end tag.
from bs4 import BeautifulSoup
html_doc = """\
<html>
<p>
don't need this
</p>
<li>
text i need
</li>
<li>
<p>
don't need this
</p>
<p>
don't need this
</p>
<li>
text i need
<ol>
<li>
text i need but appended to parent li tag
</li>
<li>
text i need but appended to parent li tag
</li>
</ol>
</li>
<li>
text i need
</li>
<p>
also need this
</p>
<p>
and this
</p>
<p>
and this too
</p>"""
soup = BeautifulSoup(html_doc, "html.parser")
for li in soup.select("li"):
if li.find_parent("li"):
continue
print(" ".join(li.text.split()))
print("--sep--")
this prints
text i need
--sep--
text i need text i need but appended to parent li tag text i need but appended to parent li tag
--sep--
text i need
--sep--
thanks to @Andrej Kesely
i need this
text i need
--sep--
text i need text i need but appended to parent li tag text i need but appended to parent li tag
--sep--
text i need
--sep--
also need this
--sep--
and this
--sep--
and this too
--sep--
| [
"You can try this:\nfor li in soup.select(\"li:not(li li)\"): \n print(\" \".join([\n d.get_text().strip() for d in li.descendants \n if 'NavigableString' in str(type(d)) and \n d.parent.name == 'li' and d.get_text().strip()\n ])) \n print(\"--sep--\")\n\n# for the p tags after ANY of the [outermost] li tags\nfor p in soup.select(\"li:not(li li) ~ p\"): print(p.text.strip(), \"\\n--sep--\") \n\n(Using :not(li li) lets you not need the if li.find_parent(\"li\"): continue part.)\nThis should get you\n\nthe text from [outermost] li tags, but only made up of strings that are directly inside that li tag or an li tag inside it\n\nand then\n\ntext from p tags that are sibling to a preceding outermost li tag. (If you want only p tags after the last li, use for p in soup.select(\"li:not(li li) ~ p:not(:has(~ li))\")...)\n\n"
] | [
0
] | [] | [] | [
"beautifulsoup",
"html",
"python",
"python_3.x"
] | stackoverflow_0074677516_beautifulsoup_html_python_python_3.x.txt |
Q:
Python: Scheduling cron jobs with time limit?
I have been using apscheduler. A recurring problem regarding the package is that if for any reason, a running job hangs indefinitely (for example if you create an infinite while loop inside of it) it will stop the whole process forever as there is no time limit option for the added jobs.
Apscheduler has stated multiple times that they will not add a timelimit due to various reasons (short explanation here), however the problem still remains. You could create a job that will run for days, only to stop because a webrequest gets no response and apscheduler will wait for it indefinitely.
I've been trying to find a way to add this time limit to a job. For example using the wrapt-timeout-decorator package. I would create a function which runs my job inside it, that has a time limit, and I add this function to aposcheduler. Unfortunately, the two packages collide with a circular import.
from wrapt_timeout_decorator.wrapt_timeout_decorator import timeout
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
class MyJob: # implementation is unnecessary to show here
...
@timeout(dec_timeout=600, use_signals=False)
def run_job(job: MyJob) -> None:
job.run()
job = MyJob()
scheduler = BackgroundScheduler(daemon=True)
scheduler.add_job(func=run_job, kwargs={"job": job}, trigger=CronTrigger.from_crontab(sheet_job.cron))
scheduler.start()
File
"C:\Users...\AppData\Local\Programs\Python\Python39\lib\site-packages\multiprocess\context.py",
line 62, in Pipe
from .connection import Pipe ImportError: cannot import name 'Pipe' from partially initialized module 'multiprocess.connection'
(most likely due to a circular import)
(C:\Users...\AppData\Local\Programs\Python\Python39\lib\site-packages\multiprocess\connection.py)
I've also tried adding a self made timeout decorator, shown here, but I did not get the desired outcome.
My question is: Is there a way to add a time limit to an apscheduler job, or are there any other similar packages where creating a cron job with a time limit is possible, or do you know of any self-made solution? (the program will run on Windows).
A:
Based on the number of answers and my own research, this is not currently possible with apscheduler. I have written my own quick implementation. The syntax is very similar to apscheduler: you just need to create a similar Scheduler object, add jobs to it with add_job, then call start. For my needs this has solved the issue. I'm adding the implementation here as it may help somebody in the future.
from typing import Callable, Optional, Any
from datetime import datetime, timedelta
from croniter import croniter
from enum import Enum
import traceback
import threading
import ctypes
import time
class JobStatus(Enum):
NOT_RUNNING = "Not running"
RUNNING = "Running"
class StoppableThread(threading.Thread):
def get_id(self):
if hasattr(self, '_thread_id'):
return self._thread_id
for id, thread in threading._active.items():
if thread is self:
return id
return None
def stop(self):
thread_id = self.get_id()
if thread_id is None:
print("Failed find thread id. Unable to stop thread.")
return
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, ctypes.py_object(SystemExit))
if res > 1:
ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, 0)
print("Failed to stop thread.")
class JobRunner:
def __init__(self, function: Callable[..., None], cron_tab: str, function_kwargs: Optional[dict[str, Any]]=None, timeout_minutes: Optional[int]=None) -> None:
self.function = function
self.cron_tab = cron_tab
self.function_kwargs = function_kwargs if function_kwargs is not None else {}
self.timeout_minutes = timeout_minutes
self.next_run_time = datetime.now()
self.next_timeout_time = None if timeout_minutes is None else datetime.now() + timedelta(minutes=timeout_minutes)
self._job_thread: Optional[StoppableThread] = None
self._update_next_run_time()
def update(self) -> None:
if self.get_job_status() == JobStatus.RUNNING:
if self.timeout_minutes is not None:
if datetime.now() < self.next_timeout_time:
print(f"Job stopped due to timeout after not finishing in {self.timeout_minutes} minutes.")
self._job_thread.stop()
self._job_thread.join()
self._job_thread = None
return
if datetime.now() < self.next_run_time:
return
self._job_thread = StoppableThread(target=self.function, kwargs=self.function_kwargs)
self._job_thread.start()
self._update_next_run_time()
self._update_next_timeout()
def get_job_status(self) -> JobStatus:
if self._job_thread is None:
return JobStatus.NOT_RUNNING
if self._job_thread.is_alive():
return JobStatus.RUNNING
return JobStatus.NOT_RUNNING
def _update_next_run_time(self) -> None:
cron = croniter(self.cron_tab, datetime.now())
self.next_run_time = cron.get_next(datetime)
def _update_next_timeout(self) -> None:
if self.timeout_minutes is not None:
self.next_timeout_time = datetime.now() + timedelta(minutes=self.timeout_minutes)
class Scheduler:
def __init__(self) -> None:
self._jobs: list[JobRunner] = []
def add_job(self, function: Callable[..., None], cron_tab: str, function_kwargs: Optional[dict[str, Any]]=None, timeout_minutes: Optional[int]=None) -> None:
self._jobs.append(JobRunner(function, cron_tab, function_kwargs, timeout_minutes))
def start(self) -> None:
while True:
time.sleep(1)
try:
for job_runner in self._jobs:
job_runner.update()
except Exception:
print(f"An error occured while running one of the jobs: {traceback.format_exc()}")
| Python: Scheduling cron jobs with time limit? | I have been using apscheduler. A recurring problem regarding the package is that if for any reason, a running job hangs indefinitely (for example if you create an infinite while loop inside of it) it will stop the whole process forever as there is no time limit option for the added jobs.
Apscheduler has stated multiple times that they will not add a timelimit due to various reasons (short explanation here), however the problem still remains. You could create a job that will run for days, only to stop because a webrequest gets no response and apscheduler will wait for it indefinitely.
I've been trying to find a way to add this time limit to a job. For example using the wrapt-timeout-decorator package. I would create a function which runs my job inside it, that has a time limit, and I add this function to aposcheduler. Unfortunately, the two packages collide with a circular import.
from wrapt_timeout_decorator.wrapt_timeout_decorator import timeout
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
class MyJob: # implementation is unnecessary to show here
...
@timeout(dec_timeout=600, use_signals=False)
def run_job(job: MyJob) -> None:
job.run()
job = MyJob()
scheduler = BackgroundScheduler(daemon=True)
scheduler.add_job(func=run_job, kwargs={"job": job}, trigger=CronTrigger.from_crontab(sheet_job.cron))
scheduler.start()
File
"C:\Users...\AppData\Local\Programs\Python\Python39\lib\site-packages\multiprocess\context.py",
line 62, in Pipe
from .connection import Pipe ImportError: cannot import name 'Pipe' from partially initialized module 'multiprocess.connection'
(most likely due to a circular import)
(C:\Users...\AppData\Local\Programs\Python\Python39\lib\site-packages\multiprocess\connection.py)
I've also tried adding a self made timeout decorator, shown here, but I did not get the desired outcome.
My question is: Is there a way to add a time limit to an apscheduler job, or are there any other similar packages where creating a cron job with a time limit is possible, or do you know of any self-made solution? (the program will run on Windows).
| [
"Based on the number of answers and my own research this is not currently possible with apscheduler. I have written my own quick implementation. The syntax is very similar to apscheduler, you just need to create a similar Scheduler object and add jobs to it with add_job, then use start. For my needs this has solved the issue. I'm adding the implementation here as it may help somebody the future.\nfrom typing import Callable, Optional, Any\nfrom datetime import datetime, timedelta\nfrom croniter import croniter\nfrom enum import Enum\nimport traceback\nimport threading \nimport ctypes\nimport time\n\n\nclass JobStatus(Enum):\n NOT_RUNNING = \"Not running\"\n RUNNING = \"Running\"\n\n\nclass StoppableThread(threading.Thread):\n\n def get_id(self):\n if hasattr(self, '_thread_id'):\n return self._thread_id\n for id, thread in threading._active.items():\n if thread is self:\n return id\n return None\n\n def stop(self):\n thread_id = self.get_id()\n if thread_id is None:\n print(\"Failed find thread id. Unable to stop thread.\")\n return\n res = ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, ctypes.py_object(SystemExit))\n if res > 1:\n ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, 0)\n print(\"Failed to stop thread.\")\n\n\nclass JobRunner:\n\n def __init__(self, function: Callable[..., None], cron_tab: str, function_kwargs: Optional[dict[str, Any]]=None, timeout_minutes: Optional[int]=None) -> None:\n self.function = function\n self.cron_tab = cron_tab\n self.function_kwargs = function_kwargs if function_kwargs is not None else {}\n self.timeout_minutes = timeout_minutes\n self.next_run_time = datetime.now()\n self.next_timeout_time = None if timeout_minutes is None else datetime.now() + timedelta(minutes=timeout_minutes)\n self._job_thread: Optional[StoppableThread] = None\n self._update_next_run_time()\n\n def update(self) -> None:\n if self.get_job_status() == JobStatus.RUNNING:\n if self.timeout_minutes is not None:\n if datetime.now() < self.next_timeout_time:\n print(f\"Job stopped due to timeout after not finishing in {self.timeout_minutes} minutes.\")\n self._job_thread.stop()\n self._job_thread.join()\n self._job_thread = None\n return\n if datetime.now() < self.next_run_time:\n return\n self._job_thread = StoppableThread(target=self.function, kwargs=self.function_kwargs)\n self._job_thread.start()\n self._update_next_run_time()\n self._update_next_timeout()\n\n def get_job_status(self) -> JobStatus:\n if self._job_thread is None:\n return JobStatus.NOT_RUNNING\n if self._job_thread.is_alive():\n return JobStatus.RUNNING\n return JobStatus.NOT_RUNNING\n\n def _update_next_run_time(self) -> None:\n cron = croniter(self.cron_tab, datetime.now())\n self.next_run_time = cron.get_next(datetime)\n\n def _update_next_timeout(self) -> None:\n if self.timeout_minutes is not None:\n self.next_timeout_time = datetime.now() + timedelta(minutes=self.timeout_minutes)\n\n\nclass Scheduler:\n\n def __init__(self) -> None:\n self._jobs: list[JobRunner] = []\n\n def add_job(self, function: Callable[..., None], cron_tab: str, function_kwargs: Optional[dict[str, Any]]=None, timeout_minutes: Optional[int]=None) -> None:\n self._jobs.append(JobRunner(function, cron_tab, function_kwargs, timeout_minutes))\n\n def start(self) -> None:\n while True:\n time.sleep(1)\n try:\n for job_runner in self._jobs:\n job_runner.update()\n except Exception:\n print(f\"An error occured while running one of the jobs: {traceback.format_exc()}\")\n\n"
] | [
0
] | [] | [] | [
"apscheduler",
"cron",
"jobs",
"multithreading",
"python"
] | stackoverflow_0074524160_apscheduler_cron_jobs_multithreading_python.txt |
Q:
Tkinter canvas growing out of screen because of labels on canvas
I have a tkinter canvas that I put labels on. When too many labels are added to the canvas, it grows out of the screen. How do I set a max size on the canvas?
middleCanvas = Canvas(window, bg="red", width=300, height=400)
middleCanvas.grid(column=1, row=3, sticky="N")
scroll_y.grid(column=2, row=3, sticky="NS")
middleCanvas.configure(yscrollcommand=scroll_y.set)
middleCanvas.configure(scrollregion=middleCanvas.bbox("all"))
messageLabel = Label(middleCanvas, text=line)
messageLabel.grid(column=1, row=messageRow)
I tried using a scrollbar, but the bar also goes off the screen and the slider fills it.
A:
This also happened to me with buttons.
You can fix it by defining a WIDTH variable and setting it to the size you want. Then set the Label width to the WIDTH variable.
For example:
WIDTH = 5
messagelabel = Label(middleCanvas, text="A very, very, very very, very long string. ", width=WIDTH)
messagelabel.grid(column=1, row=messageRow)
| Tkinter canvas growing out of screen because of labels on canvas | I have a tkinter canvas that I put labels on. When too many labels are added to the canvas, it grows out of the screen. How do I set a max size on the canvas?
middleCanvas = Canvas(window, bg="red", width=300, height=400)
middleCanvas.grid(column=1, row=3, sticky="N")
scroll_y.grid(column=2, row=3, sticky="NS")
middleCanvas.configure(yscrollcommand=scroll_y.set)
middleCanvas.configure(scrollregion=middleCanvas.bbox("all"))
messageLabel = Label(middleCanvas, text=line)
messageLabel.grid(column=1, row=messageRow)
I tried using a scrollbar, but the bar also goes off the screen and the slider fills it.
| [
"This also happened to me with buttons.\nYou can fix it by defining a WIDTH variable and set it to the size you want. Then set the Label width to the WIDTH variable.\nFor example:\nWIDTH = 5\nmessagelabel = Label(middleCanvas, text=\"A very, very, very very, very long string. \", width=WIDTH)\nmessagelabel.grid(column=1, row=messageRow) \n\n"
] | [
1
] | [] | [] | [
"python",
"tkinter",
"tkinter_canvas"
] | stackoverflow_0074678075_python_tkinter_tkinter_canvas.txt |
Q:
Multiple URLs in multiple browsers in selenium (local) python
I have a test script that I want to run for multiple URLs on multiple browsers (Chrome and Firefox) locally on my machine. Every browser has to open all the URLs for the test script. I have run the test script for multiple URLs on multiple browsers. I have the following code, which does the task. Is there a better way to write this code? Thank you
import time
from selenium import webdriver
driver_array = [webdriver.Firefox(), webdriver.Chrome()]
sites = [
"http://www.github.com",
"https://tribune.com.pk"
]
for index, browser in enumerate(driver_array):
print(index, browser)
for index, site in enumerate(sites):
print(index,site)
browser.get(site)
time.sleep(5)
# localitems()
# sessionitems()
# def localitems() :
local_storage = browser.execute_script( \
"var ls = window.localStorage, items = {}; " \
"for (var i = 0, k; i < ls.length; ++i) " \
"items[k = ls.key(i)] = ls.getItem(k);"\
"return items; ")
print(local_storage)
# def sessionitems() :
session_storage = browser.execute_script( \
"var ls = window.sessionStorage, items = {}; " \
"for (var i = 0, k; i < ls.length; ++i) " \
"items[k = ls.key(i)] = ls.getItem(k);"\
"return items; ")
print(session_storage)
A:
Here is one possible way to improve the code.
import time
from selenium import webdriver
driver_array = [webdriver.Firefox(), webdriver.Chrome()]
sites = [ "http://www.github.com", "https://tribune.com.pk"]
def get_storage_items(driver, storage_type):
items = driver.execute_script(
f"var ls = window.{storage_type}, items = {{}}; "
f"for (var i = 0, k; i < ls.length; ++i) "
f"items[k = ls.key(i)] = ls.getItem(k);"
"return items; "
)
return items
for index, browser in enumerate(driver_array):
print(index, browser)
for index, site in enumerate(sites):
print(index, site)
browser.get(site)
time.sleep(5)
local_storage = get_storage_items(browser, "localStorage")
print(local_storage)
session_storage = get_storage_items(browser, "sessionStorage")
print(session_storage)
In this version of the code, the localitems() and sessionitems() functions have been removed and their logic has been combined into a single get_storage_items() function that takes the driver and the storage_type (either localStorage or sessionStorage) as arguments and returns the items in the specified storage. This function is called twice for each site, once for each type of storage, and the items are printed. This avoids duplication of code and makes the code easier to read and understand.
| Multiple URLs in multiple browsers in selenium (local) python | I have a test script that I want to run for multiple URLs on multiple browsers (Chrome and Firefox) locally on my machine. Every browser has to open all the URLs for the test script. I have run the test script for multiple URLs on multiple browsers. I have the following code, which does the task. Is there a better way to write this code? Thank you
import time
from selenium import webdriver
driver_array = [webdriver.Firefox(), webdriver.Chrome()]
sites = [
"http://www.github.com",
"https://tribune.com.pk"
]
for index, browser in enumerate(driver_array):
print(index, browser)
for index, site in enumerate(sites):
print(index,site)
browser.get(site)
time.sleep(5)
# localitems()
# sessionitems()
# def localitems() :
local_storage = browser.execute_script( \
"var ls = window.localStorage, items = {}; " \
"for (var i = 0, k; i < ls.length; ++i) " \
"items[k = ls.key(i)] = ls.getItem(k);"\
"return items; ")
print(local_storage)
# def sessionitems() :
session_storage = browser.execute_script( \
"var ls = window.sessionStorage, items = {}; " \
"for (var i = 0, k; i < ls.length; ++i) " \
"items[k = ls.key(i)] = ls.getItem(k);"\
"return items; ")
print(session_storage)
| [
"Here is one possible way to improve the code.\nimport time\nfrom selenium import webdriver\n\n\ndriver_array = [webdriver.Firefox(), webdriver.Chrome()]\nsites = [ \"http://www.github.com\", \"https://tribune.com.pk\"]\n\n\ndef get_storage_items(driver, storage_type):\n items = driver.execute_script(\n f\"var ls = window.{storage_type}, items = {{}}; \"\n f\"for (var i = 0, k; i < ls.length; ++i) \"\n f\"items[k = ls.key(i)] = ls.getItem(k);\"\n \"return items; \"\n )\n return items\n\n\nfor index, browser in enumerate(driver_array):\n print(index, browser)\n for index, site in enumerate(sites):\n print(index, site)\n browser.get(site)\n time.sleep(5)\n local_storage = get_storage_items(browser, \"localStorage\")\n print(local_storage)\n session_storage = get_storage_items(browser, \"sessionStorage\")\n print(session_storage)\n\nIn this version of the code, the localitems() and sessionitems() functions have been removed and their logic has been combined into a single get_storage_items() function that takes the driver and the storage_type (either localStorage or sessionStorage) as arguments and returns the items in the specified storage. This function is called twice for each site, once for each type of storage, and the items are printed. This avoids duplication of code and makes the code easier to read and understand.\n"
] | [
0
] | [] | [] | [
"browser_automation",
"cross_browser",
"python",
"selenium",
"selenium_webdriver"
] | stackoverflow_0074678060_browser_automation_cross_browser_python_selenium_selenium_webdriver.txt |
Q:
Is it possible to create multiple threads in a flask server?
I am using Flask and flask-restx to try to create a protocol that gets a specific string from another service. I am trying to figure out a way to run the function in the server in different threads. Here's my code sample:
from flask_restx import Api,fields,Resource
from flask import Flask
app = Flask(__name__)
api = Api(app)
parent = api.model('Parent', {
'name': fields.String(get_answer(a,b)),
'class': fields.String(discriminator=True)
})
@api.route('/language')
class Language(Resource):
# @api.marshal_with(data_stream_request)
@api.marshal_with(parent)
@api.response(403, "Unauthorized")
def get(self):
return {"happy": "good"}
What I expect:
On the client side, first the server should run, i.e., we should be able to make curl -i localhost:8080 work. Then when a specific condition is true, the client side should receive a GET request with the parent JSON data I have in the server. However, if that condition is true, the GET request should not be able to return the correct result.
What I did:
One of the methods I used was to wrap the decorator and the Language(Resource) class in a different function, run that function in a different thread, and put that thread under a condition check. I am not sure if that's the right way to do it. I have seen people say Celery might be a good choice, but I am not sure if that can work with flask-restx.
A:
I have the answer for you. To run a process in the background with Flask, schedule it to run as another process using APScheduler, a very simple package that helps you schedule tasks to run functions at an interval; in your case, one time at utcnow().
here is the link to Flask-APScheduler.
job = scheduler.add_job(myfunc, 'interval', minutes=2)
In your case use 'date' instead of 'interval' and specify run_date
job = scheduler.add_job(myfunc, 'date', run_date=datetime.utcnow())
here is the documentation:
User Guide
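A minimal sketch of how this could be wired into a Flask app, using APScheduler's BackgroundScheduler directly (run_manager_logic is just a placeholder name for the long-running work):
from datetime import datetime
from apscheduler.schedulers.background import BackgroundScheduler
from flask import Flask

app = Flask(__name__)
scheduler = BackgroundScheduler()
scheduler.start()

def run_manager_logic():
    # placeholder for the long-running work
    print("background job running")

@app.route("/trigger")
def trigger():
    # schedule the job to run once, immediately, without blocking this request
    scheduler.add_job(run_manager_logic, 'date', run_date=datetime.utcnow())
    return {"status": "scheduled"}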
| Is it possible to create multiple threads in a flask server? | I am using Flask and flask-restx to try to create a protocol that gets a specific string from another service. I am trying to figure out a way to run the function in the server in different threads. Here's my code sample:
from flask_restx import Api,fields,Resource
from flask import Flask
app = Flask(__name__)
api = Api(app)
parent = api.model('Parent', {
'name': fields.String(get_answer(a,b)),
'class': fields.String(discriminator=True)
})
@api.route('/language')
class Language(Resource):
# @api.marshal_with(data_stream_request)
@api.marshal_with(parent)
@api.response(403, "Unauthorized")
def get(self):
return {"happy": "good"}
What I expect:
On the client side, first the server should run, i.e., we should be able to make curl -i localhost:8080 work. Then when a specific condition is true, the client side should receive a GET request with the parent JSON data I have in the server. However, if that condition is true, the GET request should not be able to return the correct result.
What I did:
One of the methods I used was to wrap the decorator and the Language(Resource) class in a different function, run that function in a different thread, and put that thread under a condition check. I am not sure if that's the right way to do it. I have seen people say Celery might be a good choice, but I am not sure if that can work with flask-restx.
| [
"I have the answer for you. to run a process in the background with flask, schedule it to run using another process using APScheduler. A very simple package that helps you schedule tasks to run functions at an interval, in your case one time at utcnow().\nhere is the link to Flask-APScheduler.\njob = scheduler.add_job(myfunc, 'interval', minutes=2)\n\nIn your case use 'date' instead of 'interval' and specify run_date\njob = scheduler.add_job(myfunc, 'date', run_date=datetime.utcnow())\n\nhere is the documentation:\nUser Guide\n"
] | [
0
] | [] | [] | [
"flask",
"flask_restx",
"multithreading",
"python",
"threadpoolexecutor"
] | stackoverflow_0074672488_flask_flask_restx_multithreading_python_threadpoolexecutor.txt |
Q:
How to launch a project correctly?
There is a project, https://github.com/WentianZhang-ML/FRT-PAD , that I want to run locally. At the very end it says that you can run it as
python train_main.py \
--train_data [om/ci]
--test_data [ci/om]
--downstream [FE/FR/FA]
--graph_type [direct/dense]
I tried to run this file in Jupyter, but I get SystemExit: 2
A:
Some simple options for running a python script in jupyter:
Option 1: Open a terminal in Jupyter, and run your Python scripts in the terminal like you would in your local terminal.
Option 2: Make a notebook, and use %run <name of script.py> as an entry in a cell. This is more fully featured than using !python <name of script.py> in a cell.
When the SystemExit: 2 error is raised, the Python interpreter will exit with a non-zero exit code (in this case, 2), indicating that the script was not executed successfully.
To troubleshoot this error, you can try the following steps:
Check the script for syntax errors. Make sure that the script is written in valid Python and that it follows the correct syntax for the version of Python that you are using.
Make sure that any external modules or libraries that the script depends on are installed and are in the Python interpreter's search path.
If you are using Jupyter, make sure that you are using the !python command to run the script within the notebook, rather than the python command.
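For example, a notebook cell along these lines could run the repository's training script with the options listed in the question (the specific argument values below are just illustrative picks from those options):
!python train_main.py --train_data om --test_data ci --downstream FE --graph_type direct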
| How to launch a project correctly? | There is a project, https://github.com/WentianZhang-ML/FRT-PAD , that I want to run locally. At the very end it says that you can run it as
python train_main.py \
--train_data [om/ci]
--test_data [ci/om]
--downstream [FE/FR/FA]
--graph_type [direct/dense]
I tried to run this file in Jupyter, but I get SystemExit: 2
| [
"Some simple options for running a python script in jupyter:\nOption 1: Open a terminal in Jupyter, and run your Python scripts in the terminal like you would in your local terminal.\nOption 2: Make a notebook, and use %run <name of script.py> as an entry in a cell. This is more fully featured than using !python <name of script.py> in a cell.\nWhen the SystemExit: 2 error is raised, the Python interpreter will exit with a non-zero exit code (in this case, 2), indicating that the script was not executed successfully.\nTo troubleshoot this error, you can try the following steps:\n\nCheck the script for syntax errors. Make sure that the script is written in valid Python and that it follows the correct syntax for the version of Python that you are using.\n\nMake sure that any external modules or libraries that the script depends on are installed and are in the Python interpreter's search path.\n\nIf you are using Jupyter, make sure that you are using the !python command to run the script within the notebook, rather than the python command.\n\n\n"
] | [
0
] | [] | [] | [
"machine_learning",
"python"
] | stackoverflow_0074678074_machine_learning_python.txt |
Q:
Cannot run python program in Vs code
You will understand the problem just by looking at the picture.
I don't know what to do.
I tried many things from Google, such as putting the same file path in launch.json, but nothing worked; I even tried reinstalling Visual Studio Code.
As asked adding launch.json file code:
launch.json code
A:
Have you tried using the generic configuration for running a currently open file?
"configurations": [
{
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal"
},
| Cannot run python program in Vs code | You will understand the problem just by looking at the picture.
I don't know what to do.
I tried many things from Google, such as putting the same file path in launch.json, but nothing worked; I even tried reinstalling Visual Studio Code.
As asked adding launch.json file code:
launch.json code
| [
"Have you tried using the generic configuration for running a currently open file?\n \"configurations\": [\n {\n \"name\": \"Python: Current File\",\n \"type\": \"python\",\n \"request\": \"launch\",\n \"program\": \"${file}\",\n \"console\": \"integratedTerminal\"\n },\n\n"
] | [
0
] | [] | [] | [
"python",
"visual_studio_code"
] | stackoverflow_0074677714_python_visual_studio_code.txt |
Q:
Plotly timeline with objects
In the example below, I would like to group the elements of the y axis by continent and display the name of the continent at the top of each group. I can't figure out where in the layout this can be set. The example comes from this Plotly page.
import pandas as pd
import plotly.graph_objects as go
from plotly import data
df = data.gapminder()
df = df.loc[ (df.year.isin([1987, 2007]))]
countries = (
df.loc[ (df.year.isin([2007]))]
.sort_values(by=["pop"], ascending=True)["country"]
.unique()
)[5:-10]
data = {"x": [], "y": [], "colors": [], "years": []}
for country in countries:
data["x"].extend(
[
df.loc[(df.year == 1987) & (df.country == country)]["pop"].values[0],
df.loc[(df.year == 2007) & (df.country == country)]["pop"].values[0],
None,
]
)
data["y"].extend([country, country, None]),
data["colors"].extend(["cyan", "darkblue", "white"]),
data["years"].extend(["1987", "2007", None])
fig = go.Figure(
data=[
go.Scatter(
x=data["x"],
y=data["y"],
mode="lines",
marker=dict(
color="grey",
)),
go.Scatter(
x=data["x"],
y=data["y"],
text=data["years"],
mode="markers",
marker=dict(
color=data["colors"],
symbol=["square","circle","circle"]*10,
size=16
),
hovertemplate="""Country: %{y} <br> Population: %{x} <br> Year: %{text} <br><extra></extra>"""
)
]
)
A:
Unlike the code you showed, grouping by continent requires converting the data from the dictionary structure into a data frame and specifying a multi-index for the y-axis, so that the y-axis is grouped by continent.
I have limited myself to the top 5 countries per continent, because a large number of categorical variables on the y-axis makes the visualization hard to read. You can adjust or drop this according to your needs. Furthermore, in terms of visualization, I have set the x-axis type to log format, because the large discrepancies in the numbers make the visualization weaker. This is also something I added on my own, and you can edit it yourself.
import pandas as pd
import plotly.graph_objects as go
from plotly import data
df = data.gapminder()
df = df.loc[(df.year.isin([1987, 2007]))]
# top5 by continent
countries = (df.loc[df.year.isin([2007])]
.groupby(['continent',], as_index=False, sort=[True])[['country','pop']].head()['country']
)
df = df[df['country'].isin(countries.tolist())]
fig = go.Figure()
for c in df['continent'].unique():
dff = df.query('continent == @c')
#print(dff)
for cc in dff['country'].unique():
dfc = dff.query('country == @cc')
fig.add_trace(go.Scatter(x=dfc['pop'].tolist(),
y=[dfc['continent'],dfc['country']],
mode='lines+markers',
marker=dict(
color='grey',
))
)
fig.add_trace(go.Scatter(x=dfc['pop'].tolist(),
y=[dfc['continent'],dfc['country']],
text=dfc["year"],
mode="markers",
marker=dict(
color=["cyan", "darkblue", "white"],
size=16,
))
)
fig.update_layout(autosize=False, height=800, width=800, showlegend=False)
fig.update_xaxes(type='log')
fig.show()
| Plotly timeline with objects | In the example below, I would like to group the elements of the y axis by continent and display the name of the continent at the top of each group. I can't figure out where in the layout this can be set. The example comes from this Plotly page.
import pandas as pd
import plotly.graph_objects as go
from plotly import data
df = data.gapminder()
df = df.loc[ (df.year.isin([1987, 2007]))]
countries = (
df.loc[ (df.year.isin([2007]))]
.sort_values(by=["pop"], ascending=True)["country"]
.unique()
)[5:-10]
data = {"x": [], "y": [], "colors": [], "years": []}
for country in countries:
data["x"].extend(
[
df.loc[(df.year == 1987) & (df.country == country)]["pop"].values[0],
df.loc[(df.year == 2007) & (df.country == country)]["pop"].values[0],
None,
]
)
data["y"].extend([country, country, None]),
data["colors"].extend(["cyan", "darkblue", "white"]),
data["years"].extend(["1987", "2007", None])
fig = go.Figure(
data=[
go.Scatter(
x=data["x"],
y=data["y"],
mode="lines",
marker=dict(
color="grey",
)),
go.Scatter(
x=data["x"],
y=data["y"],
text=data["years"],
mode="markers",
marker=dict(
color=data["colors"],
symbol=["square","circle","circle"]*10,
size=16
),
hovertemplate="""Country: %{y} <br> Population: %{x} <br> Year: %{text} <br><extra></extra>"""
)
]
)
| [
"To show grouping by continent instead of the code you showed would require looping through the data structure from dictionary format to data frame. y-axis by continent by specifying a multi-index for the y-axis.\nI have limited myself to the top 5 countries by continent because the large number of categorical variables on the y-axis creates a situation that is difficult to see for visualization. You can rewrite/not set here according to your needs. Furthermore, in terms of visualization, I have set the x-axis type to log format because the large discrepancies in the numbers make the visualization weaker. This is also something I added on my own and you can edit it yourself.\nimport pandas as pd\nimport plotly.graph_objects as go\nfrom plotly import data\n\ndf = data.gapminder()\ndf = df.loc[(df.year.isin([1987, 2007]))]\n\n# top5 by continent\ncountries = (df.loc[df.year.isin([2007])]\n .groupby(['continent',], as_index=False, sort=[True])[['country','pop']].head()['country']\n)\n\ndf = df[df['country'].isin(countries.tolist())]\n\nfig = go.Figure()\n\nfor c in df['continent'].unique():\n dff = df.query('continent == @c')\n #print(dff)\n for cc in dff['country'].unique():\n dfc = dff.query('country == @cc')\n fig.add_trace(go.Scatter(x=dfc['pop'].tolist(),\n y=[dfc['continent'],dfc['country']],\n mode='lines+markers',\n marker=dict(\n color='grey',\n ))\n )\n fig.add_trace(go.Scatter(x=dfc['pop'].tolist(),\n y=[dfc['continent'],dfc['country']],\n text=dfc[\"year\"],\n mode=\"markers\",\n marker=dict(\n color=[\"cyan\", \"darkblue\", \"white\"],\n size=16,\n ))\n )\n \nfig.update_layout(autosize=False, height=800, width=800, showlegend=False)\nfig.update_xaxes(type='log')\n\nfig.show()\n\n\n"
] | [
0
] | [] | [] | [
"plotly",
"python"
] | stackoverflow_0074677111_plotly_python.txt |
Q:
What does print()'s `flush` do?
There is a boolean optional argument to the print() function flush which defaults to False.
The documentation says it is to forcibly flush the stream.
I don't understand the concept of flushing. What is flushing here? What is flushing of stream?
A:
Normally output to a file or the console is buffered, with text output at least until you print a newline. The flush makes sure that any output that is buffered goes to the destination.
I do use it e.g. when I make a user prompt like Do you want to continue (Y/n):, before getting the input.
This can be simulated (on Ubuntu 12.4 using Python 2.7):
from __future__ import print_function
import sys
from time import sleep
fp = sys.stdout
print('Do you want to continue (Y/n): ', end='')
# fp.flush()
sleep(5)
If you run this, you will see that the prompt string does not show up until the sleep ends and the program exits. If you uncomment the line with flush, you will see the prompt and then have to wait 5 seconds for the program to finish
A:
There are a couple of things to understand here. One is the difference between buffered I/O and unbuffered I/O. The concept is fairly simple - for buffered I/O, there is an internal buffer which is kept. Only when that buffer is full (or some other event happens, such as it reaches a newline) is the output "flushed". With unbuffered I/O, whenever a call is made to output something, it will do this, 1 character at a time.
Most I/O functions fall into the buffered category, mainly for performance reasons: it's a lot faster to write chunks at a time (all I/O functions eventually get down to syscalls of some description, which are expensive.)
flush lets you manually choose when you want this internal buffer to be written - a call to flush will write any characters in the buffer. Generally, this isn't needed, because the stream will handle this itself. However, there may be situations when you want to make sure something is output before you continue - this is where you'd use a call to flush().
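As a small illustration, a progress indicator that prints one character at a time only shows up promptly if the buffer is flushed after each write (a minimal sketch):
import sys
import time

for _ in range(5):
    print(".", end="")   # no newline, so the character sits in the buffer
    sys.stdout.flush()   # force the buffered character out immediately
    time.sleep(1)
print(" done")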
A:
We have two perfect answers here.
Anthon made it very clear to understand: basically, the print line's output technically does not appear until the next line has finished.
Technically the line does run; its output just stays buffered until the
next line has finished running.
This might cause a bug for people who use the sleep function after a print call, expecting to see the output before the sleep starts.
So why am I adding another answer?
The Future Has Arrived And I Would Like To Take The Time And Update You With It:
from __future__ import print_function
First of all, I believe this was an inside joke meant to show an error: Future is not defined ^_^
I'm looking at PyCharm's documentation right now, and it looks like they added a flush parameter built into the print function itself. Take a look at this:
def print(self, *args, sep=' ', end='\n', file=None): # known special case of print
"""
print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False)
Prints the values to a stream, or to sys.stdout by default.
Optional keyword arguments:
file: a file-like object (stream); defaults to the current sys.stdout.
sep: string inserted between values, default a space.
end: string appended after the last value, default a newline.
flush: whether to forcibly flush the stream.
"""
pass
So we might be able to use: (Not sure if the usage order of the parameters should be the same)
from __present__ import print_function
from time import sleep
print('Hello World', flush=True)
sleep(5)
Or This:
print('Hello World', file=sys.stdout , flush=True)
As Anthon said:
If you run this, you will see that the prompt string does not show up
until the sleep ends and the program exits. If you uncomment the line
with flush, you will see the prompt and then have to wait 5 seconds
for the program to finish
So let's just convert that to our current situation:
If you run this, you will see the prompt and then have to wait 5 seconds
for the program to finish, If you change the line
with flush to flush=False, you will see that the prompt string does not show up
until the sleep ends and the program exits.
A:
A practical example to understand print() with and without the flush parameter:
import time
for i in range(5):
print(i, end=" ", flush=True) # Print numbers as soon as they are generated
# print(i, end=" ", flush=False) # Print everything together at the end
time.sleep(0.5)
print("end")
You can comment/uncomment the print lines to check how this affects the way the output is produced.
This is similar to the accepted answer, just a bit simpler and for Python 3.
| What does print()'s `flush` do? | There is a boolean optional argument to the print() function flush which defaults to False.
The documentation says it is to forcibly flush the stream.
I don't understand the concept of flushing. What is flushing here? What is flushing of stream?
| [
"Normally output to a file or the console is buffered, with text output at least until you print a newline. The flush makes sure that any output that is buffered goes to the destination.\nI do use it e.g. when I make a user prompt like Do you want to continue (Y/n):, before getting the input.\nThis can be simulated (on Ubuntu 12.4 using Python 2.7):\nfrom __future__ import print_function\n\nimport sys\nfrom time import sleep\n\nfp = sys.stdout\nprint('Do you want to continue (Y/n): ', end='')\n# fp.flush()\nsleep(5)\n\nIf you run this, you will see that the prompt string does not show up until the sleep ends and the program exits. If you uncomment the line with flush, you will see the prompt and then have to wait 5 seconds for the program to finish\n",
"There are a couple of things to understand here. One is the difference between buffered I/O and unbuffered I/O. The concept is fairly simple - for buffered I/O, there is an internal buffer which is kept. Only when that buffer is full (or some other event happens, such as it reaches a newline) is the output \"flushed\". With unbuffered I/O, whenever a call is made to output something, it will do this, 1 character at a time.\nMost I/O functions fall into the buffered category, mainly for performance reasons: it's a lot faster to write chunks at a time (all I/O functions eventually get down to syscalls of some description, which are expensive.) \nflush lets you manually choose when you want this internal buffer to be written - a call to flush will write any characters in the buffer. Generally, this isn't needed, because the stream will handle this itself. However, there may be situations when you want to make sure something is output before you continue - this is where you'd use a call to flush().\n",
"Two perfect answers we have here,\nAnthon made it very clear to understand, Basically, the print line technically does not run (print) until the next line has finished.\n\nTechnically the line does run it just stays unbuffered until the\nnext line has finished running.\n\nThis might cause a bug for some people who uses the sleep function after running a print function expecting to see it prints before the sleep function started.\nSo why am I adding another answer?\nThe Future Has Arrived And I Would Like To Take The Time And Update You With It:\nfrom __future__ import print_function\n\nFirst of all, I believe this was an inside joke meant to show an error: Future is not defined ^_^\n\nI'm looking at PyCharm's documentation right now and it looks like they added a flush method built inside the print function itself, Take a look at this:\ndef print(self, *args, sep=' ', end='\\n', file=None): # known special case of print\n\"\"\"\nprint(value, ..., sep=' ', end='\\n', file=sys.stdout, flush=False)\n\nPrints the values to a stream, or to sys.stdout by default.\nOptional keyword arguments:\nfile: a file-like object (stream); defaults to the current sys.stdout.\nsep: string inserted between values, default a space.\nend: string appended after the last value, default a newline.\nflush: whether to forcibly flush the stream.\n\"\"\"\npass\n\n\n\nSo we might be able to use: (Not sure if the usage order of the parameters should be the same)\nfrom __present__ import print_function\n\nfrom time import sleep\n\nprint('Hello World', flush=True)\n\nsleep(5)\n\nOr This:\nprint('Hello World', file=sys.stdout , flush=True)\n\nAs Anthon said:\n\nIf you run this, you will see that the prompt string does not show up\nuntil the sleep ends and the program exits. If you uncomment the line\nwith flush, you will see the prompt and then have to wait 5 seconds\nfor the program to finish\n\nSo let's just convert that to our current situation:\nIf you run this, you will see the prompt and then have to wait 5 seconds\nfor the program to finish, If you change the line\nwith flush to flush=False, you will see that the prompt string does not show up\nuntil the sleep ends and the program exits.\n",
"A practical example to understand print() with and without the flush parameter:\nimport time\n\nfor i in range(5):\n print(i, end=\" \", flush=True) # Print numbers as soon as they are generated\n # print(i, end=\" \", flush=False) # Print everything together at the end\n time.sleep(0.5)\n\nprint(\"end\")\n\nYou can comment/uncomment the print lines to check how this affects the way the output is produced.\nThis is similar to the accepted answer, just a bit simpler and for Python 3.\n"
] | [
43,
40,
6,
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0015608229_python_python_3.x.txt |
Q:
Changing multiple while statements using defined functions?
Python code where the user inputs a surname and forename that must be a certain length and will not accept numeric values. I'm creating code for a website to get a user to answer various questions like name, address, phone number, etc. My code currently works for each question, but every question is a while statement, and I wanted to define functions for each instead (minimizing the repetition of while statements). Please see below: 1. working while statement 2. def code I'm failing at creating, because it doesn't take the length or the no-numeric-values check into account. The formatting of my question below for parts 1. and 2. doesn't include the starting "while" and "def" statements for some reason.
1.
first_name = "First Name:\t"
while first_name:
first_name = input("First Name:\t")
if len(first_name) < 15 and first_name.isalpha():
break
else:
print("Invalid entry. Please try again")
continue
second_name = "Second Name:\t"
while second_name:
second_name = input("Second Name:\t")
if len(second_name) < 15 and second_name.isalpha():
break
else:
print("Invalid entry. Please try again")
continue
def name(first):
while True:
if len(first) < 15 and first.isalpha():
break
else:
print('invalid')
continue
first = input("First Name:\t")
A:
You can modify the function like this:
def name_check(name):
while True:
if len(name) < 15 and name.isalpha():
break
else:
print('invalid')
name = input("First Name:\t")
continue
return name
result = name_check(input("First Name:\t"))
print(result)
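The same helper can then be reused for every prompt, which removes the repeated while loops from the question:
first_name = name_check(input("First Name:\t"))
second_name = name_check(input("Second Name:\t"))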
| Changing multiple while statements using defined functions? | Python code where user inputs surname, forename that must be certain length and wont accept numeric values. I'm creating code for a website to get a user to input various questions like name, address, phone number etc. my code is working currently for each question, but every question is a while statement and I wanted to define functions for each instead (minimizing the repetition of while statements). Please see below 1. working while statement 2. def code I'm failing at creating, because it doesn't take the length or no numeric values into account. The format of my question below for parts 1. and 2. don't include the start "while" & "def" statement for some reason.
1.
first_name = "First Name:\t"
while first_name:
first_name = input("First Name:\t")
if len(first_name) < 15 and first_name.isalpha():
break
else:
print("Invalid entry. Please try again")
continue
second_name = "Second Name:\t"
while second_name:
second_name = input("Second Name:\t")
if len(second_name) < 15 and second_name.isalpha():
break
else:
print("Invalid entry. Please try again")
continue
def name(first):
while True:
if len(first) < 15 and first.isalpha():
break
else:
print('invalid')
continue
first = input("First Name:\t")
| [
"You can modify the function like this:\ndef name_check(name):\n while True:\n if len(name) < 15 and name.isalpha():\n break\n else:\n print('invalid')\n name = input(\"First Name:\\t\")\n continue\n return name\n\nresult = name_check(input(\"First Name:\\t\"))\nprint(result)\n\n"
] | [
0
] | [] | [] | [
"python",
"python_3.x",
"user_defined_functions",
"while_loop"
] | stackoverflow_0074678022_python_python_3.x_user_defined_functions_while_loop.txt |
Q:
Have you ever gotten RuntimeError: await wasn't used with future?
I am trying to extract data from a website using asyncio and aiohttp, and an await problem occurs in the for-loop function.
here my script :
async def get_page(session,x):
async with session.get(f'https://disclosure.bursamalaysia.com/FileAccess/viewHtml?e={x}') as r:
return await r.text()
async def get_all(session, urls):
tasks =[]
sem = asyncio.Semaphore(1)
count = 0
for x in urls:
count +=1
task = asyncio.create_task(get_page(session,x))
tasks.append(task)
print(count,'-ID-',x,'|', end=' ')
results = await asyncio.gather(*task)
return results
async def main(urls):
async with aiohttp.ClientSession() as session:
data = await get_all(session, urls)
return
if __name__ == '__main__':
urls = titlelink
results = asyncio.run(main(urls))
print(results)
As for the error, this is what it returns when the scraper breaks:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-3-5ac99108678c> in <module>
22 if __name__ == '__main__':
23 urls = titlelink
---> 24 results = asyncio.run(main(urls))
25 print(results)
~\AppData\Local\Programs\Python\Python38\lib\site-packages\nest_asyncio.py in run(future, debug)
30 loop = asyncio.get_event_loop()
31 loop.set_debug(debug)
---> 32 return loop.run_until_complete(future)
33
34 if sys.version_info >= (3, 6, 0):
~\AppData\Local\Programs\Python\Python38\lib\site-packages\nest_asyncio.py in run_until_complete(self, future)
68 raise RuntimeError(
69 'Event loop stopped before Future completed.')
---> 70 return f.result()
71
72 def _run_once(self):
~\AppData\Local\Programs\Python\Python38\lib\asyncio\futures.py in result(self)
176 self.__log_traceback = False
177 if self._exception is not None:
--> 178 raise self._exception
179 return self._result
180
~\AppData\Local\Programs\Python\Python38\lib\asyncio\tasks.py in __step(***failed resolving arguments***)
278 # We use the `send` method directly, because coroutines
279 # don't have `__iter__` and `__next__` methods.
--> 280 result = coro.send(None)
281 else:
282 result = coro.throw(exc)
<ipython-input-3-5ac99108678c> in main(urls)
17 async def main(urls):
18 async with aiohttp.ClientSession() as session:
---> 19 data = await get_all(session, urls)
20 return
21
<ipython-input-3-5ac99108678c> in get_all(session, urls)
12 tasks.append(task)
13 print(count,'-ID-',x,'|', end=' ')
---> 14 results = await asyncio.gather(*task)
15 return results
16
~\AppData\Local\Programs\Python\Python38\lib\asyncio\futures.py in __await__(self)
260 yield self # This tells Task to wait for completion.
261 if not self.done():
--> 262 raise RuntimeError("await wasn't used with future")
263 return self.result() # May raise too.
264
RuntimeError: await wasn't used with future
Is this error caused by putting await inside the for-loop function, or is it because of a server problem? Or maybe the way I wrote the script is wrong. I would appreciate it if any of you could point me in the right direction.
A:
You can use multiprocessing to scrape multiple links simultaneously (in parallel):
from multiprocessing import Pool
def scrape(url):
#Scraper script
p = Pool(10)
# This “10” means that 10 URLs will be processed at the same time.
p.map(scrape, list_of_all_urls)
p.terminate()
p.join()
Here we map the function scrape over list_of_all_urls, and the Pool p takes care of executing each call concurrently. This is similar to looping over list_of_all_urls in simple.py, but here it is done concurrently. If the number of URLs is 100 and we specify Pool(20), then it will take 5 iterations (100/20), and 20 URLs will be processed in one go.
Two things to note
The links are not executed in order. You can see the order is 2, 1, 3… This is because of multiprocessing; time is saved by one process not waiting for the previous one to finish. This is called parallel execution.
This scrapes very fast compared to normal execution. The difference grows quickly as the number of URLs increases, which means the performance of the multiprocessing script improves with a large number of URLs.
You may visit here for more/detail information.
I believe this is the same as your previous question; I think you can use multiprocessing. I know this is not exactly an asyncio answer, but multiprocessing is easy and straightforward.
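A rough sketch of how that pattern could be applied to the URLs from the question, assuming a synchronous HTTP client such as requests is used inside each worker process instead of aiohttp:
from multiprocessing import Pool
import requests  # assumption: a synchronous client replaces aiohttp here

def scrape(x):
    r = requests.get(f'https://disclosure.bursamalaysia.com/FileAccess/viewHtml?e={x}')
    return r.text

if __name__ == '__main__':
    with Pool(10) as p:
        results = p.map(scrape, titlelink)  # titlelink is the id list from the question
    print(len(results))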
A:
await asyncio.gather(*task)
Should be:
await asyncio.gather(*tasks)
The exception actually comes from the *task. Not sure what this syntax is meant for, but it's certainly not what you intended:
>>> t = asyncio.Task(asyncio.sleep(10))
>>> (*t,)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: await wasn't used with future
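Applied to the snippet from the question, a corrected get_all would look roughly like this (a sketch; the unused semaphore and the counters are left out):
async def get_all(session, urls):
    tasks = []
    for x in urls:
        tasks.append(asyncio.create_task(get_page(session, x)))
    # unpack the list of tasks, not a single Task object
    results = await asyncio.gather(*tasks)
    return results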
| Have you ever gotten RuntimeError: await wasn't used with future? | I am trying to extract data from a website using asyncio and aiohttp, and an await problem occurs in the for-loop function.
here my script :
async def get_page(session,x):
async with session.get(f'https://disclosure.bursamalaysia.com/FileAccess/viewHtml?e={x}') as r:
return await r.text()
async def get_all(session, urls):
tasks =[]
sem = asyncio.Semaphore(1)
count = 0
for x in urls:
count +=1
task = asyncio.create_task(get_page(session,x))
tasks.append(task)
print(count,'-ID-',x,'|', end=' ')
results = await asyncio.gather(*task)
return results
async def main(urls):
async with aiohttp.ClientSession() as session:
data = await get_all(session, urls)
return
if __name__ == '__main__':
urls = titlelink
results = asyncio.run(main(urls))
print(results)
As for the error, this is what it returns when the scraper breaks:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-3-5ac99108678c> in <module>
22 if __name__ == '__main__':
23 urls = titlelink
---> 24 results = asyncio.run(main(urls))
25 print(results)
~\AppData\Local\Programs\Python\Python38\lib\site-packages\nest_asyncio.py in run(future, debug)
30 loop = asyncio.get_event_loop()
31 loop.set_debug(debug)
---> 32 return loop.run_until_complete(future)
33
34 if sys.version_info >= (3, 6, 0):
~\AppData\Local\Programs\Python\Python38\lib\site-packages\nest_asyncio.py in run_until_complete(self, future)
68 raise RuntimeError(
69 'Event loop stopped before Future completed.')
---> 70 return f.result()
71
72 def _run_once(self):
~\AppData\Local\Programs\Python\Python38\lib\asyncio\futures.py in result(self)
176 self.__log_traceback = False
177 if self._exception is not None:
--> 178 raise self._exception
179 return self._result
180
~\AppData\Local\Programs\Python\Python38\lib\asyncio\tasks.py in __step(***failed resolving arguments***)
278 # We use the `send` method directly, because coroutines
279 # don't have `__iter__` and `__next__` methods.
--> 280 result = coro.send(None)
281 else:
282 result = coro.throw(exc)
<ipython-input-3-5ac99108678c> in main(urls)
17 async def main(urls):
18 async with aiohttp.ClientSession() as session:
---> 19 data = await get_all(session, urls)
20 return
21
<ipython-input-3-5ac99108678c> in get_all(session, urls)
12 tasks.append(task)
13 print(count,'-ID-',x,'|', end=' ')
---> 14 results = await asyncio.gather(*task)
15 return results
16
~\AppData\Local\Programs\Python\Python38\lib\asyncio\futures.py in __await__(self)
260 yield self # This tells Task to wait for completion.
261 if not self.done():
--> 262 raise RuntimeError("await wasn't used with future")
263 return self.result() # May raise too.
264
RuntimeError: await wasn't used with future
Is this error caused by putting await inside the for-loop function, or is it because of a server problem? Or maybe the way I wrote the script is wrong. I would appreciate it if any of you could point me in the right direction.
| [
"You can use multiprocessing to scrape multiple link simultaneously(parallelly):\nfrom multiprocessing import Pool\n \ndef scrape(url):\n #Scraper script\n\np = Pool(10)\n# This “10” means that 10 URLs will be processed at the same time.\np.map(scrape, list_of_all_urls)\np.terminate()\np.join()\n\n\nHere we map function scrape with list_of_all_urls and Pool p will take care of executing each of them concurrently.This is similar to looping over list_of_all_urls in simple.py but here it is done concurrently. If number of URLs is 100 and we specify Pool(20), then it will take 5 iterations (100/20) and 20 URLs will be processed in one go.\n\nTwo things to note\n\nThe links are not executed in order. You can see order is 2,1,3… This is because of multiprocessing and time is saved by one process by not waiting for previous one to finish. This is called parallel execution.\nThis scrape very fast then normal. This difference grows very quickly when number of URLs increase which means that performance of multiprocessing script improves with large number of URLs.\n\nYou may visit here for more/detail information.\nI believe this is same from previous question, I think you can use multiprocessing. I know this is not the right answer but you can use multiproces which is easy, and straightforward.\n",
"await asyncio.gather(*task)\nShould be:\nawait asyncio.gather(*tasks)\nThe exception actually comes from the *task. Not sure what this syntax is meant for, but it's certainly not what you intended:\n>>> t = asyncio.Task(asyncio.sleep(10))\n>>> (*t,)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nRuntimeError: await wasn't used with future\n\n"
] | [
0,
0
] | [] | [] | [
"async_await",
"asynchronous",
"python"
] | stackoverflow_0068563801_async_await_asynchronous_python.txt |
Q:
How to run two processes with dockerfile?
I need to run a uvicorn server process and my Python script (which is another process).
Since uvicorn starts a process that doesn't end, the second command will not start, so I am asking if you know a workaround for this problem.
I tried to do this command:
CMD cd Manager ; uvicorn ManagerBot:app --host 0.0.0.0 --port 8000 && python ManagerBot.py
also this:
CMD cd Manager ; uvicorn ManagerBot:app --host 0.0.0.0 --port 8000 ; python ManagerBot.py
But the script doesn't start (only the uvicorn server start)
Keep in mind that the script is also a process that doesn't end, so the reverse order will not work either.
A:
Create a wrapper script, e.g. run.sh:
#!/bin/bash
# Start the first process
uvicorn ManagerBot:app --host 0.0.0.0 --port 8000 &
# Start the second process
python ManagerBot.py &
# Wait for any process to exit
wait -n
# Exit with status of process that exited first
exit $?
Then, in Dockerfile:
...
COPY run.sh /run.sh
RUN chmod +x /run.sh
ENTRYPOINT ["/run.sh"]
| How to run two processes with dockerfile? | I need to run a uvicorn server process and my Python script (which is another process).
Since uvicorn starts a process that doesn't end, the second command will not start, so I am asking if you know a workaround for this problem.
I tried to do this command:
CMD cd Manager ; uvicorn ManagerBot:app --host 0.0.0.0 --port 8000 && python ManagerBot.py
also this:
CMD cd Manager ; uvicorn ManagerBot:app --host 0.0.0.0 --port 8000 ; python ManagerBot.py
But the script doesn't start (only the uvicorn server start)
Keep in mind that the script is also a process that doesn't end, so the reverse order will not work either.
| [
"Create a wrapper script, e.g. run.sh:\n#!/bin/bash\n\n# Start the first process\nuvicorn ManagerBot:app --host 0.0.0.0 --port 8000 &\n \n# Start the second process\npython ManagerBot.py &\n \n# Wait for any process to exit\nwait -n\n \n# Exit with status of process that exited first\nexit $?\n\nThen, in Dockerfile:\n... \nCOPY run.sh /run.sh\nRUN chmod +x /run.sh\nENTRYPOINT [\"/run.sh\"]\n\n"
] | [
1
] | [] | [] | [
"command",
"docker",
"dockerfile",
"python",
"server"
] | stackoverflow_0074678353_command_docker_dockerfile_python_server.txt |
Q:
Converting list of dictionary and dictionary data to a dataframe in python
I am trying to send a SOAP request iteratively, and the response captured for each iteration is as follows.
df = {'@diffgr:id': 'Table1', '@msdata:rowOrder': '0', 'NVIC_CUR': '0BQU22', 'NVIC_MODEL': '0BQU', 'ModelName': 'DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC'}
{'@diffgr:id': 'Table1', '@msdata:rowOrder': '0', 'NVIC_CUR': '0BQT22', 'NVIC_MODEL': '0BQT', 'ModelName': 'FDIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC'}
[{'@diffgr:id': 'Table1', '@msdata:rowOrder': '0', 'NVIC_CUR': '09GE22', 'NVIC_MODEL': '09GE', 'ModelName': 'DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC'}, {'@diffgr:id': 'Table2', '@msdata:rowOrder': '1', 'NVIC_CUR': '0BR222', 'NVIC_MODEL': '0BR2', 'ModelName': 'DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC'}]
[{'@diffgr:id': 'Table1', '@msdata:rowOrder': '0', 'NVIC_CUR': '09HR22', 'NVIC_MODEL': '09HR', 'ModelName': 'DIESEL TURBO 5 3198 cc DTFI 6 SP AUTOMATIC'}, {'@diffgr:id': 'Table2', '@msdata:rowOrder': '1', 'NVIC_CUR': '09HS22', 'NVIC_MODEL': '09HS', 'ModelName': 'DIESEL TURBO 5 3198 cc DTFI 6 SP MANUAL'}]
The SOAP API sometimes returns dictionary data and sometimes a list of dictionaries.
My idea was to create a Dataframe of selected columns (NVIC_CUR, NVIC_MODEL, ModelName)
output dataframe
A:
You can do it in three steps:
Append each SOAP API response, whether it is a dictionary or a list, to a new Python list; let's say it is called list_of_dict.
Then iterate over it and append each element to a new list, let's call it final_list_of_dict, if it's a dictionary; otherwise iterate over it again if it's a list, like below.
Finally, make a dataframe from the list of dictionaries.
Fullcode:
import pandas as pd
list_of_dict = [{'@diffgr:id': 'Table1', '@msdata:rowOrder': '0', 'NVIC_CUR': '0BQU22', 'NVIC_MODEL': '0BQU', 'ModelName': 'DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC'},
{'@diffgr:id': 'Table1', '@msdata:rowOrder': '0', 'NVIC_CUR': '0BQT22', 'NVIC_MODEL': '0BQT', 'ModelName': 'FDIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC'},
[{'@diffgr:id': 'Table1', '@msdata:rowOrder': '0', 'NVIC_CUR': '09GE22', 'NVIC_MODEL': '09GE', 'ModelName': 'DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC'}, {'@diffgr:id': 'Table2', '@msdata:rowOrder': '1', 'NVIC_CUR': '0BR222', 'NVIC_MODEL': '0BR2', 'ModelName': 'DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC'}],
[{'@diffgr:id': 'Table1', '@msdata:rowOrder': '0', 'NVIC_CUR': '09HR22', 'NVIC_MODEL': '09HR', 'ModelName': 'DIESEL TURBO 5 3198 cc DTFI 6 SP AUTOMATIC'}, {'@diffgr:id': 'Table2', '@msdata:rowOrder': '1', 'NVIC_CUR': '09HS22', 'NVIC_MODEL': '09HS', 'ModelName': 'DIESEL TURBO 5 3198 cc DTFI 6 SP MANUAL'}]
]
final_list_of_dict = []
for item in list_of_dict:
if isinstance(item, list):
for item in item:
final_list_of_dict.append(item)
final_list_of_dict.append(item)
df = pd.DataFrame(final_list_of_dict, columns=['NVIC_CUR', 'NVIC_MODEL','ModelName'])
print(df)
Output:
NVIC_CUR NVIC_MODEL ModelName
0 0BQU22 0BQU DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC
1 0BQT22 0BQT FDIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOM...
2 09GE22 09GE DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC
3 0BR222 0BR2 DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC
4 0BR222 0BR2 DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC
5 09HR22 09HR DIESEL TURBO 5 3198 cc DTFI 6 SP AUTOMATIC
6 09HS22 09HS DIESEL TURBO 5 3198 cc DTFI 6 SP MANUAL
7 09HS22 09HS DIESEL TURBO 5 3198 cc DTFI 6 SP MANUAL
A:
Try using pandas library, it makes it quite easy:
import pandas as pd
df = {
'@diffgr:id': 'Table1',
'@msdata:rowOrder': '0',
'NVIC_CUR': '0BQU22',
'NVIC_MODEL': '0BQU',
'ModelName': 'DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC'
}
if isinstance(df, list):
result = pd.DataFrame(df)
else:
result = pd.DataFrame.from_dict(df,
orient='index',
columns=['NVIC_CUR', 'NVIC_MODEL', 'ModelName']
)
| Converting list of dictionary and dictionary data to a dataframe in python | I am trying to send a SOAP request iteratively, and the response captured for each iteration is as follows.
df = {'@diffgr:id': 'Table1', '@msdata:rowOrder': '0', 'NVIC_CUR': '0BQU22', 'NVIC_MODEL': '0BQU', 'ModelName': 'DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC'}
{'@diffgr:id': 'Table1', '@msdata:rowOrder': '0', 'NVIC_CUR': '0BQT22', 'NVIC_MODEL': '0BQT', 'ModelName': 'FDIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC'}
[{'@diffgr:id': 'Table1', '@msdata:rowOrder': '0', 'NVIC_CUR': '09GE22', 'NVIC_MODEL': '09GE', 'ModelName': 'DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC'}, {'@diffgr:id': 'Table2', '@msdata:rowOrder': '1', 'NVIC_CUR': '0BR222', 'NVIC_MODEL': '0BR2', 'ModelName': 'DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC'}]
[{'@diffgr:id': 'Table1', '@msdata:rowOrder': '0', 'NVIC_CUR': '09HR22', 'NVIC_MODEL': '09HR', 'ModelName': 'DIESEL TURBO 5 3198 cc DTFI 6 SP AUTOMATIC'}, {'@diffgr:id': 'Table2', '@msdata:rowOrder': '1', 'NVIC_CUR': '09HS22', 'NVIC_MODEL': '09HS', 'ModelName': 'DIESEL TURBO 5 3198 cc DTFI 6 SP MANUAL'}]
The SOAP API sometimes returns dictionary data and sometimes a list of dictionaries.
My idea was to create a Dataframe of selected columns (NVIC_CUR, NVIC_MODEL, ModelName)
output dataframe
| [
"You can do it by this three-step-\n\nYou can append each SOAP API response to either its dictionary or a list to a new python list, let's say its called - list_of_dict, and\n\nThen iterate it and append it to a new list let's call it final_list_of_dict if it's a dictionary else iterate it again if a listlike below.\n\nFinally, make a dataframe from the list of dictionaries.\n\n\nFullcode:\nimport pandas as pd\n\nlist_of_dict = [{'@diffgr:id': 'Table1', '@msdata:rowOrder': '0', 'NVIC_CUR': '0BQU22', 'NVIC_MODEL': '0BQU', 'ModelName': 'DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC'},\n{'@diffgr:id': 'Table1', '@msdata:rowOrder': '0', 'NVIC_CUR': '0BQT22', 'NVIC_MODEL': '0BQT', 'ModelName': 'FDIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC'},\n[{'@diffgr:id': 'Table1', '@msdata:rowOrder': '0', 'NVIC_CUR': '09GE22', 'NVIC_MODEL': '09GE', 'ModelName': 'DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC'}, {'@diffgr:id': 'Table2', '@msdata:rowOrder': '1', 'NVIC_CUR': '0BR222', 'NVIC_MODEL': '0BR2', 'ModelName': 'DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC'}],\n[{'@diffgr:id': 'Table1', '@msdata:rowOrder': '0', 'NVIC_CUR': '09HR22', 'NVIC_MODEL': '09HR', 'ModelName': 'DIESEL TURBO 5 3198 cc DTFI 6 SP AUTOMATIC'}, {'@diffgr:id': 'Table2', '@msdata:rowOrder': '1', 'NVIC_CUR': '09HS22', 'NVIC_MODEL': '09HS', 'ModelName': 'DIESEL TURBO 5 3198 cc DTFI 6 SP MANUAL'}]\n]\n\nfinal_list_of_dict = []\nfor item in list_of_dict:\n if isinstance(item, list):\n for item in item:\n final_list_of_dict.append(item)\n final_list_of_dict.append(item) \ndf = pd.DataFrame(final_list_of_dict, columns=['NVIC_CUR', 'NVIC_MODEL','ModelName'])\nprint(df)\n\nOutput:\n NVIC_CUR NVIC_MODEL ModelName\n0 0BQU22 0BQU DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC\n1 0BQT22 0BQT FDIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOM...\n2 09GE22 09GE DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC\n3 0BR222 0BR2 DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC\n4 0BR222 0BR2 DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC\n5 09HR22 09HR DIESEL TURBO 5 3198 cc DTFI 6 SP AUTOMATIC\n6 09HS22 09HS DIESEL TURBO 5 3198 cc DTFI 6 SP MANUAL\n7 09HS22 09HS DIESEL TURBO 5 3198 cc DTFI 6 SP MANUAL\n\n",
"Try using pandas library, it makes it quite easy:\nimport pandas as pd\n\n\ndf = {\n '@diffgr:id': 'Table1', \n '@msdata:rowOrder': '0', \n 'NVIC_CUR': '0BQU22', \n 'NVIC_MODEL': '0BQU', \n 'ModelName': 'DIESEL TWIN TURBO 4 1996 cc BTCDI 10 SP AUTOMATIC'\n}\n\nif isinstance(df, list):\n result = pd.DataFrame(df)\nelse:\n result = pd.DataFrame.from_dict(df, \n orient='index',\n columns=['NVIC_CUR', 'NVIC_MODEL', 'ModelName']\n )\n\n"
] | [
0,
0
] | [] | [] | [
"dictionary",
"list",
"python"
] | stackoverflow_0074678229_dictionary_list_python.txt |
Q:
Error When Trying to Calculate FLOPS for Complex TF2 Keras Models
I want to calculate the FLOPS of the ML models I use. I get an error when I try to calculate them for more complex models.
I get this Error for Efficientnet Models:
ValueError: Unknown layer: FixedDropout. Please ensure this object is passed to the `custom_objects` argument. See https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object for details.
The function to calculate the FLOPS:
1)
def get_flops(model, batch_size=None):
if batch_size is None:
batch_size = 1
real_model = tf.function(model).get_concrete_function(tf.TensorSpec([batch_size] + model.inputs[0].shape[1:], model.inputs[0].dtype))
frozen_func, graph_def = convert_variables_to_constants_v2_as_graph(real_model)
run_meta = tf.compat.v1.RunMetadata()
opts = tf.compat.v1.profiler.ProfileOptionBuilder.float_operation()
flops = tf.compat.v1.profiler.profile(graph=frozen_func.graph,
run_meta=run_meta, cmd='op', options=opts)
return flops.total_float_ops
or
2)
def get_flops(model_h5_path):
session = tf.compat.v1.Session()
graph = tf.compat.v1.get_default_graph()
with graph.as_default():
with session.as_default():
model = tf.keras.models.load_model(model_h5_path)
run_meta = tf.compat.v1.RunMetadata()
opts = tf.compat.v1.profiler.ProfileOptionBuilder.float_operation()
# We use the Keras session graph in the call to the profiler.
flops = tf.compat.v1.profiler.profile(graph=graph,
run_meta=run_meta, cmd='op', options=opts)
return flops.total_float_ops
In contrast, I am able to calculate the FLOPS for models like ResNets; it is just not possible for more complex models. How can I mitigate the issue?
A:
In the first function, you are using the convert_variables_to_constants_v2_as_graph function from TensorFlow to convert the model to a graph. This function has a custom_objects parameter that you can use to pass the custom layers that the model uses. You can add the FixedDropout layer to this parameter to fix the error you are seeing.
Here is an example of how you could use the custom_objects parameter to fix the error:
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2_as_graph
# Define the custom layer class
class FixedDropout(tf.keras.layers.Dropout):
def call(self, inputs, training=None):
return inputs
def get_flops(model, batch_size=None):
if batch_size is None:
batch_size = 1
real_model = tf.function(model).get_concrete_function(tf.TensorSpec([batch_size] + model.inputs[0].shape[1:], model.inputs[0].dtype))
# Pass the custom layer class to the custom_objects parameter
frozen_func, graph_def = convert_variables_to_constants_v2_as_graph(real_model, custom_objects={'FixedDropout': FixedDropout})
run_meta = tf.compat.v1.RunMetadata()
opts = tf.compat.v1.profiler.ProfileOptionBuilder.float_operation()
flops = tf.compat.v1.profiler.profile(graph=frozen_func.graph,
run_meta=run_meta, cmd='op', options=opts)
return flops.total_float_ops
| Error When Trying to Calculate FLOPS for Complex TF2 Keras Models | I want to calculate the FLOPS in the ML models used. I get an error when I tried to calculate for much complex models.
I get this Error for Efficientnet Models:
ValueError: Unknown layer: FixedDropout. Please ensure this object is passed to the `custom_objects` argument. See https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object for details.
The function to calculate the FLOPS:
1)
def get_flops(model, batch_size=None):
if batch_size is None:
batch_size = 1
real_model = tf.function(model).get_concrete_function(tf.TensorSpec([batch_size] + model.inputs[0].shape[1:], model.inputs[0].dtype))
frozen_func, graph_def = convert_variables_to_constants_v2_as_graph(real_model)
run_meta = tf.compat.v1.RunMetadata()
opts = tf.compat.v1.profiler.ProfileOptionBuilder.float_operation()
flops = tf.compat.v1.profiler.profile(graph=frozen_func.graph,
run_meta=run_meta, cmd='op', options=opts)
return flops.total_float_ops
or
2)
def get_flops(model_h5_path):
session = tf.compat.v1.Session()
graph = tf.compat.v1.get_default_graph()
with graph.as_default():
with session.as_default():
model = tf.keras.models.load_model(model_h5_path)
run_meta = tf.compat.v1.RunMetadata()
opts = tf.compat.v1.profiler.ProfileOptionBuilder.float_operation()
# We use the Keras session graph in the call to the profiler.
flops = tf.compat.v1.profiler.profile(graph=graph,
run_meta=run_meta, cmd='op', options=opts)
return flops.total_float_ops
On the contrary, I am able to calculate the FLOPS for models like Resnets, just that it is not possible for bit complex models. How can I mitigate the issue?
| [
"In the first function, you are using the convert_variables_to_constants_v2_as_graph function from TensorFlow to convert the model to a graph. This function has a custom_objects parameter that you can use to pass the custom layers that the model uses. You can add the FixedDropout layer to this parameter to fix the error you are seeing.\nHere is an example of how you could use the custom_objects parameter to fix the error:\nimport tensorflow as tf\nfrom tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2_as_graph\n\n# Define the custom layer class\nclass FixedDropout(tf.keras.layers.Dropout):\n def call(self, inputs, training=None):\n return inputs\n\ndef get_flops(model, batch_size=None):\n if batch_size is None:\n batch_size = 1\n\n real_model = tf.function(model).get_concrete_function(tf.TensorSpec([batch_size] + model.inputs[0].shape[1:], model.inputs[0].dtype))\n\n # Pass the custom layer class to the custom_objects parameter\n frozen_func, graph_def = convert_variables_to_constants_v2_as_graph(real_model, custom_objects={'FixedDropout': FixedDropout})\n\n run_meta = tf.compat.v1.RunMetadata()\n opts = tf.compat.v1.profiler.ProfileOptionBuilder.float_operation()\n flops = tf.compat.v1.profiler.profile(graph=frozen_func.graph,\n run_meta=run_meta, cmd='op', options=opts)\n return flops.total_float_ops\n\n"
] | [
0
] | [] | [] | [
"computer_vision",
"keras",
"machine_learning",
"python",
"tensorflow"
] | stackoverflow_0074675233_computer_vision_keras_machine_learning_python_tensorflow.txt |
Q:
Unable to install discord py with pip
I have Python 3.11 downloaded, and I installed pip with it.
However, I can't install discord.py with
py -3 -m pip install -U discord.py
I've tried a few other ways; they still didn't work.
In the end it says:
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for yarl
Failed to build multidict yarl
ERROR: Could not build wheels for multidict, yarl, which is required to install pyproject.toml-based projects
There are a few other errors throughout the process.
A:
Hmmm, it seems it might be a problem with the yarl and multidict dependencies (it happens). I've had the same problem with itertools, and even opencv taking extremely long to build with a non-upgraded pip version!
Have you tried upgrading pip? Same problem with those libraries' dependencies.
pip3 install --upgrade pip
A:
If pip direct installation doesn't work, try cloning the git repo:
$ pip install git+https://github.com/Rapptz/discord.py
A:
You can try pip install discord.py
A:
You could also try pip install discord
| Unable to install discord py with pip | i have python 3.11 downloaded, and i installed pip with it.
however, i can't install discord py with
py -3 -m pip install -U discord.py
i've tried a few other ways, still didn't work.
in the end it says:
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for yarl
Failed to build multidict yarl
ERROR: Could not build wheels for multidict, yarl, which is required to install pyproject.toml-based projects
there are a few other errors throughout the process.
| [
"Hmmm, it seems it might be a problem due to dependencies to yarl and multidict (happens). I've had the same problem with itertools, and even opencv taking extremely long to build with a non-upgraded pip version!\nHave you tried upgrading pip? Same problem with those libraries' dependencies.\npip3 install --upgrade pip\n\n",
"If pip direct installation doesn't work, try cloning the git repo:\n$ pip install git+https://github.com/Rapptz/discord.py\n\n",
"You can try pip install discord.py\n",
"You could also try pip install discord\n"
] | [
1,
1,
0,
0
] | [] | [] | [
"cmd",
"discord",
"discord.py",
"installation",
"python"
] | stackoverflow_0074617360_cmd_discord_discord.py_installation_python.txt |
Q:
Imported module not found in PyInstaller
I'm working in Windows, using PyInstaller to package a python file. But some error is occurring:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "D:\Useful Apps\pyinstaller-2.0\PyInstaller\loader\iu.py", line 386, in importHook
mod = _self_doimport(nm, ctx, fqname)
File "D:\Useful Apps\pyinstaller-2.0\PyInstaller\loader\iu.py", line 480, in doimport
exec co in mod.__dict__
File "D:\Useful Apps\pyinstaller-2.0\server\build\pyi.win32\server\out00-PYZ.pyz\SocketServer", line 132, in <module>
File "D:\Useful Apps\pyinstaller-2.0\PyInstaller\loader\iu.py", line 386, in importHook
mod = _self_doimport(nm, ctx, fqname)
File "D:\Useful Apps\pyinstaller-2.0\PyInstaller\loader\iu.py", line 480, in doimport
exec co in mod.__dict__
File "D:\Useful Apps\pyinstaller-2.0\server\build\pyi.win32\server\out00-PYZ.pyz\socket", line 47, in <module>
File "D:\Useful Apps\pyinstaller-2.0\PyInstaller\loader\iu.py", line 409, in importHook
raise ImportError("No module named %s" % fqname)
ImportError: No module named _socket
I know that _socket is in path C:\Python27\libs\_socket.lib, but how can I let the generated EXE find that file?
A:
If you are using virtualenv you should use the "-p" or "--path='D:...'" option. Like this:
pyinstaller.exe --onefile --paths=D:\env\Lib\site-packages .\foo.py
What this does is generates foo.spec file with this pathex path
A:
This sounds like a job for hidden imports (only available in the latest builds).
From the docs
a = Analysis(['myscript.py'],
hiddenimports = ['_socket'],
<and everything else>)
A:
In my case, I had to delete all folders and files related to pyinstaller in my directory, i.e. __pycache__, build, dist, and *.spec. I re-ran the build and the exe worked.
A:
In my case I was trying to import a folder that I created, and ended up here. I solved that problem by removing __init__.py from the main folder, keeping the __init__.py in the subfolders that I was importing.
A:
You can add the path to your application spec file.
In the Analysis object you can specify pathex=['C:\Python27\libs\', 'C:\Python27\Lib\site-packages'], and any other path ...
Note that if the path is not found there is no problem ... I have paths from linux as well in there.
A:
If you are using an virtual environment, then problem is because of environment.
SOLUTION
Just activate the environment and run the pyinstaller command. For example, If you are using environment of pipenv then run commands in following order.
pipenv shell # To activate environment
pyintaller --onefile youscript.py # Command to generate executable
A:
just delete the '__pycache__' directory then run your exe file again. It worked out for me
A:
None of the above answers worked for me, but I did get it to work. I was using openpyxl and it required jdcal in the datetime.py module. None of the hidden imports or any of those methods helped, running the exe would still say jdcal not found. The work-around that I used was to just copy the few functions from jdcal directly into the datetime.py in the openpyxl code. Then ran
pyinstaller -F program.py
and it worked!
A:
Had similar issues. Here's my fix for PyQt5, cffi, python 3.4.3:
This fixes the 'sip' not found error and the '_cffi_backend' one if that comes up:
# -*- mode: python -*-
block_cipher = None
a = Analysis(['LightShowApp.py'],
pathex=['c:\\MyProjects\\light-show-editor-36',
'c:\\Python34\\libs\\', 'c:\\Python34\\Lib\\site-packages'],
binaries=None,
datas=None,
hiddenimports=['sip', 'cffi'],
hookspath=[],
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher)
pyz = PYZ(a.pure, a.zipped_data,
cipher=block_cipher)
exe = EXE(pyz,
a.scripts,
a.binaries,
a.zipfiles,
a.datas,
name='LightShowApp',
debug=False,
strip=False,
upx=True,
console=True )
Look at 'pathex' and 'hiddenimports' above. Those are the only changes from default generated. Build exe with:
pyinstaller LightShowApp.spec -F
I ran that outside of venv or pip-win - whateverTF that crap is for!
A:
The executable does not know the location of the library ("C:\Python27\Lib\site-packages", etc.); pyinstaller binds the module locations when creating the executable. Therefore, you need to import all the modules you have used into your program.
Import the "_socket" module in your main file and recompile using pyinstaller.
It would probably work.
Note: But the versions of the modules installed in your system and used in the program must be compatible.
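For illustration, a minimal sketch of that idea (here _socket is just the module named in the traceback; substitute whatever module is reported missing):
# top of your main script: the explicit import lets PyInstaller's analysis find and bundle the module
import _socket
Alternatively, the same effect can usually be achieved without touching the code by passing --hidden-import=_socket to pyinstaller on the command line.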
A:
Another "In my case" post.
pypdfium2 (an import in my file that I want to convert to an .exe) has a .dll that it calls called pdfium. pyinstaller doesn't import that .dll when you go to build the .exe by default.
Fix:
I think you can use the option --collect-all pypdfium2, but at least for me --add-data "C:\Program Files\Python39\Lib\site-packages\pypdfium2\pdfium.dll";. (The "." at the end is intentional and needed!) got the job done.
A:
I found out that if you make the setup.py for your code and run python setup.py install and then python setup.py build the pyinstaller will be able to find your packages. you can use the find_packages function from setuptools in your setup.py file so it automatically includes everything.
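A minimal sketch of such a setup.py, assuming a conventional package layout (the project name below is a placeholder):
from setuptools import setup, find_packages

setup(
    name='yourpackage',        # placeholder name
    version='0.1',
    packages=find_packages(),  # automatically includes every package that has an __init__.py
)
After python setup.py install and python setup.py build, run pyinstaller on your entry script as usual.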
| Imported module not found in PyInstaller | I'm working in Windows, using PyInstaller to package a python file. But some error is occuring:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "D:\Useful Apps\pyinstaller-2.0\PyInstaller\loader\iu.py", line 386, in importHook
mod = _self_doimport(nm, ctx, fqname)
File "D:\Useful Apps\pyinstaller-2.0\PyInstaller\loader\iu.py", line 480, in doimport
exec co in mod.__dict__
File "D:\Useful Apps\pyinstaller-2.0\server\build\pyi.win32\server\out00-PYZ.pyz\SocketServer", line 132, in <module>
File "D:\Useful Apps\pyinstaller-2.0\PyInstaller\loader\iu.py", line 386, in importHook
mod = _self_doimport(nm, ctx, fqname)
File "D:\Useful Apps\pyinstaller-2.0\PyInstaller\loader\iu.py", line 480, in doimport
exec co in mod.__dict__
File "D:\Useful Apps\pyinstaller-2.0\server\build\pyi.win32\server\out00-PYZ.pyz\socket", line 47, in <module>
File "D:\Useful Apps\pyinstaller-2.0\PyInstaller\loader\iu.py", line 409, in importHook
raise ImportError("No module named %s" % fqname)
ImportError: No module named _socket
I know that _socket is in path C:\Python27\libs\_socket.lib, but how can let the generated EXE find that file?
| [
"If you are using virtualenv you should use the \"-p\" or \"--path='D:...'\" option. Like this:\npyinstaller.exe --onefile --paths=D:\\env\\Lib\\site-packages .\\foo.py\n\nWhat this does is generates foo.spec file with this pathex path\n",
"This sounds like a job for hidden imports (only available in the latest builds). \nFrom the docs\na = Analysis(['myscript.py'], \n hiddenimports = ['_socket'], \n <and everything else>)\n\n",
"In my case, I had to delete all folders and files related to pyinstaller in my directory, i.e. __pycache__, build, dist, and *.spec. I re-ran the build and the exe worked.\n",
"In my case I was trying to import a folder that I created, and ended up here. I solved that problem by removing __init__.py from the main folder, keeping the __init__.py in the subfolders that I was importing.\n",
"You can add the path to your application spec file.\nIn the Analysis object you can specify pathex=['C:\\Python27\\libs\\', 'C:\\Python27\\Lib\\site-packages'], and any other path ...\nNote that if the path is not found there is no problem ... I have paths from linux as well in there.\n",
"If you are using an virtual environment, then problem is because of environment.\nSOLUTION\nJust activate the environment and run the pyinstaller command. For example, If you are using environment of pipenv then run commands in following order.\npipenv shell # To activate environment\n\npyintaller --onefile youscript.py # Command to generate executable \n\n",
"just delete the '__pycache__' directory then run your exe file again. It worked out for me\n",
"None of the above answers worked for me, but I did get it to work. I was using openpyxl and it required jdcal in the datetime.py module. None of the hidden imports or any of those methods helped, running the exe would still say jdcal not found. The work-around that I used was to just copy the few functions from jdcal directly into the datetime.py in the openpyxl code. Then ran \npyinstaller -F program.py\nand it worked!\n",
"Had similar issues. Here's my fix for PyQt5, cffi, python 3.4.3:\nThis fixes the 'sip' not found error and the '_cffi_backend' one if that comes up:\n# -*- mode: python -*-\n\nblock_cipher = None\n\n\na = Analysis(['LightShowApp.py'],\n pathex=['c:\\\\MyProjects\\\\light-show-editor-36',\n 'c:\\\\Python34\\\\libs\\\\', 'c:\\\\Python34\\\\Lib\\\\site-packages'],\n binaries=None,\n datas=None,\n hiddenimports=['sip', 'cffi'],\n hookspath=[],\n runtime_hooks=[],\n excludes=[],\n win_no_prefer_redirects=False,\n win_private_assemblies=False,\n cipher=block_cipher)\npyz = PYZ(a.pure, a.zipped_data,\n cipher=block_cipher)\nexe = EXE(pyz,\n a.scripts,\n a.binaries,\n a.zipfiles,\n a.datas,\n name='LightShowApp',\n debug=False,\n strip=False,\n upx=True,\n console=True )\n\nLook at 'pathex' and 'hiddenimports' above. Those are the only changes from default generated. Build exe with:\npyinstaller LightShowApp.spec -F\nI ran that outside of venv or pip-win - whateverTF that crap is for!\n",
"The executor does not know the location of the library, \"C:\\Python27\\Lib\\site-packages\" etc. Thus, pyinstaller binds the module locations when creating the executable. Therefore, you need to import all the modules, you have used into your program.\nImport the \"_socket\" module in your main file and recompile using pyinstaller.\nI would probably work.\nNote: But the versions of the modules installed in your system and used in the program must be compatible.\n",
"Another \"In my case\" post.\npypdfium2 (an import in my file that I want to convert to an .exe) has a .dll that it calls called pdfium. pyinstaller doesn't import that .dll when you go to build the .exe by default.\nFix:\nI think you can do the option --collect-all pypdfium2 ,but at least for me --add-data \"C:\\Program Files\\Python39\\Lib\\site-packages\\pypdfium2\\pdfium.dll\";. (The \".\" at the end is intentional and needed!) got the job done.\n",
"I found out that if you make the setup.py for your code and run python setup.py install and then python setup.py build the pyinstaller will be able to find your packages. you can use the find_packages function from setuptools in your setup.py file so it automatically includes everything.\n"
] | [
19,
4,
3,
3,
2,
1,
1,
0,
0,
0,
0,
0
] | [] | [] | [
"exe",
"pyinstaller",
"python",
"sockets",
"windows"
] | stackoverflow_0015114695_exe_pyinstaller_python_sockets_windows.txt |
Q:
How to extract certain letters from a string using Python
I have a string 'A1T1730'
From this I need to extract the second letter and the last four letters. For example, from 'A1T1730' I need to extract '1' and '1730'. I'm not sure how to do this in Python.
I have the following right now which extracts every character from the string separately so can someone please help me update it as per the above need.
list = ['A1T1730']
for letter in list[0]:
print letter
Which gives me the result of A, 1, T, 1, 7, 3, 0
A:
my_string = "A1T1730"
my_string = my_string[1] + my_string[-4:]
print my_string
Output
11730
If you want to extract them to different variables, you can just do
first, last = my_string[1], my_string[-4:]
print first, last
Output
1 1730
A:
Using filter with str.isdigit (as unbound method form):
>>> filter(str.isdigit, 'A1T1730')
'11730'
>>> ''.join(filter(str.isdigit, 'A1T1730')) # In Python 3.x
'11730'
If you want to get numbers separated, use regular expression (See re.findall):
>>> import re
>>> re.findall(r'\d+', 'A1T1730')
['1', '1730']
Use thefourtheye's solution if the positions of digits are fixed.
BTW, don't use list as a variable name. It shadows builtin list function.
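For example, once the builtin is shadowed you can no longer call it:
list = ['A1T1730']
list('abc')  # TypeError: 'list' object is not callable, because list now refers to your variable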
A:
You can use the function isdigit(). If that character is a digit it returns true and otherwise returns false:
list = ['A1T1730']
for letter in list[0]:
if letter.isdigit() == True:
        print letter, # The comma is used to print on the same line
I hope this useful.
A:
Well you could do like this
_2nd = list[0][1]
# last 4 characters
numbers = list[0][-4:]
| How to extract certain letters from a string using Python | I have a string 'A1T1730'
From this I need to extract the second letter and the last four letters. For example, from 'A1T1730' I need to extract '1' and '1730'. I'm not sure how to do this in Python.
I have the following right now which extracts every character from the string separately so can someone please help me update it as per the above need.
list = ['A1T1730']
for letter in list[0]:
print letter
Which gives me the result of A, 1, T, 1, 7, 3, 0
| [
"my_string = \"A1T1730\"\nmy_string = my_string[1] + my_string[-4:]\nprint my_string\n\nOutput\n11730\n\nIf you want to extract them to different variables, you can just do\nfirst, last = my_string[1], my_string[-4:]\nprint first, last\n\nOutput\n1 1730\n\n",
"Using filter with str.isdigit (as unbound method form):\n>>> filter(str.isdigit, 'A1T1730')\n'11730'\n>>> ''.join(filter(str.isdigit, 'A1T1730')) # In Python 3.x\n'11730'\n\nIf you want to get numbers separated, use regular expression (See re.findall):\n>>> import re\n>>> re.findall(r'\\d+', 'A1T1730')\n['1', '1730']\n\nUse thefourtheye's solution if the positions of digits are fixed.\n\nBTW, don't use list as a variable name. It shadows builtin list function.\n",
"You can use the function isdigit(). If that character is a digit it returns true and otherwise returns false:\nlist = ['A1T1730'] \nfor letter in list[0]:\n if letter.isdigit() == True:\n print letter, #The coma is used for print in the same line \n\nI hope this useful.\n",
"Well you could do like this\n_2nd = lsit[0][1]\n\n# last 4 characters\nnumbers = list[0][-4:]\n\n\n"
] | [
5,
4,
0,
0
] | [] | [] | [
"list",
"python"
] | stackoverflow_0021187124_list_python.txt |
Q:
Pandas: sort_index - help understanding 'key' argument
I am trying to sort a complex index (weird strings, with a custom order). I originally tried to do this, but it's messing up the index (because it's overwriting, not actually sorting)
df.index = list(sorted(df.index, key=Delta_Sorter.sort)) # <--Delta_Sorter.sort is a classmethod
Instead, I should probably be using Pandas.DataFrame.sort_index(), and pass key = Delta_Sorter.sort.
I was hoping someone could please help me understand the key argument though. From the docs:
key: callable, optional
If not None, apply the key function to the index values before sorting. This is similar to the key argument in the builtin sorted() function, with the notable difference that this key function should be vectorized. It should expect an Index and return an Index of the same shape. For MultiIndex inputs, the key is applied per level.
In particular, I don't know what it means that it should be vectorized. The docs don't have an example...
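(Illustration only: "vectorized" here means the callable receives the whole Index at once and must return an index/array of the same length, roughly
import pandas as pd
df.sort_index(key=lambda idx: pd.Index([my_rank(v) for v in idx]))  # my_rank is a hypothetical per-value scoring function
instead of being called once per element the way sorted()'s key is.)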
EDIT
I tried using numpy.vectorize(Delta_Sorter.sort), but it raises:
ValueError: User-provided key function must not change the shape of
the array.
class Delta_Sorter():
@classmethod
def sort(cls, x): # x = index value from the DataFrame
level_1 = cls._underlying_sort(x) #<- returns int
level_2 = cls._string_tenor_sorter(x) #<- returns int
return (level_1, level_2) # <-- uses a tuple to create sort 'levels'
Passing the df.index directly into the np.vectorize(Delta_Sorter.sort)(df.index) returns:
(array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0]),
array([ 10, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190,
20, 200, 30, 40, 50, 60, 70, 80, 90, 2, 5,
10, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190,
20, 200, 30, 40, 50, 60, 70, 80, 90, 400, 480,
600, 800, 1000, 1200, 240, 280, 320, 360]))
| Pandas: sort_index - help understanding 'key' argument | I am trying to sort a complex index (weird strings, with a custom order). I originally tried to do this, but its messing up the index (because its overwriting, not actually sorting)
df.index = list(sorted(df.index, key=Delta_Sorter.sort)) # <--Delta_Sorter.sort is a classmethod
Instead, I should probably be using Pandas.DataFrame.sort_index(), and pass key = Delta_Sorter.sort.
I was hoping someone could please help me understand the key argument though. From the docs:
key: callable, optional
If not None, apply the key function to the index values before sorting. This is similar to the key argument in the builtin sorted() function, with the notable difference that this key function should be vectorized. It should expect an Index and return an Index of the same shape. For MultiIndex inputs, the key is applied per level.
In particular, I dont know what it means that it should be vectorized. The docs don't have an example...
EDIT
I tried using numpy.vectorize(Delta_Sorter.sort), but it raises:
ValueError: User-provided key function must not change the shape of
the array.
class Delta_Sorter():
@classmethod
def sort(cls, x): # x = index value from the DataFrame
level_1 = cls._underlying_sort(x) #<- returns int
level_2 = cls._string_tenor_sorter(x) #<- returns int
return (level_1, level_2) # <-- uses a tuple to create sort 'levels'
Passing the df.index directly into the np.vectorize(Delta_Sorter.sort)(df.index) returns:
(array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0]),
array([ 10, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190,
20, 200, 30, 40, 50, 60, 70, 80, 90, 2, 5,
10, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190,
20, 200, 30, 40, 50, 60, 70, 80, 90, 400, 480,
600, 800, 1000, 1200, 240, 280, 320, 360]))
| [] | [] | [
"Syntax: DataFrame.sort_index(axis=0, level=None, ascending=True, inplace=False, kind=’quicksort’, na_position=’last’, sort_remaining=True, by=None)\nParameters :\naxis : index, columns to direct sorting\nlevel : if not None, sort on values in specified index level(s)\nascending : Sort ascending vs. descending\ninplace : if True, perform operation in-place\nkind : {‘quicksort’, ‘mergesort’, ‘heapsort’}, default ‘quicksort’. Choice of sorting algorithm. See also ndarray.np.sort for more information. mergesort is the only stable algorithm. For DataFrames, this option is only applied when sorting on a single column or label.\nna_position : [{‘first’, ‘last’}, default ‘last’] First puts NaNs at the beginning, last puts NaNs at the end. Not implemented for MultiIndex.\nsort_remaining : If true and sorting by level and index is multilevel, sort by other levels too (in order) after sorting by specified level\nReturn : sorted_obj : DataFrame\n"
] | [
-2
] | [
"pandas",
"python"
] | stackoverflow_0074678401_pandas_python.txt |
Q:
TypeError:run() missing 1 required positional argument: 'self'
I wrote a simple program in PyCharm on Windows, and it ran. In order to get the apk file, I installed Ubuntu on a virtual machine. Then I installed pip, PyCharm, and Kivy. Kivy was installed through the terminal according to the instructions on their site. I typed the code and got an error: run() missing 1 required positional argument: 'self'. I tried to google but I couldn't find anything really.
from kivy.app import App
from kivy.uix.floatlayout import FloatLayout
class Container(FloatLayout):
pass
class MyApp(App):
def build(self):
return Container()
if __name__=='__main__':
MyApp().run()
in .kv file
<Container>:
Button:
background_normal: ''
background_color: 0.5, 0, 1, .5
size_hint: 0.3, 0.3
pos_hint: {'center_x' : 0.5 , 'center_y':0.5}
text:'Push'
color: 0,1,0.5,1
on_release:
self.text = 'on release'
full error traceback
A:
the .kv file does not support multi-line entries so far as I know. The method on_release needs to reference a function and you would normally put that in the widget (root.your_function) or app (app.your_function). I changed the answer to use Builder.load_string only for convenience; it is a good idea to use a separate .kv file in your application as you did.
from kivy.app import App
from kivy.uix.floatlayout import FloatLayout
from kivy.lang import Builder
Builder.load_string('''
<Container>:
Button:
background_normal: ''
background_color: 0.5, 0, 1, .5
size_hint: 0.3, 0.3
pos_hint: {'center_x' : 0.5 , 'center_y':0.5}
text:'Push'
color: 0,1,0.5,1
# self argument here will be the button object
on_release: app.button_callback(self)
''')
class Container(FloatLayout):
pass
class MyApp(App):
def button_callback(self, my_button):
print(my_button.text)
self.count += 1
my_button.text = f"on_release {self.count}"
def build(self):
self.count = 0
return Container()
if __name__=='__main__':
MyApp().run()
| TypeError:run() missing 1 required positional argument: 'self' | I wrote a simple program in pycharm on Windows, then it ran. In order to get the apk file, I installed ubuntu on a virtual machine. Then I installed pip, paycharm, kivy. Qivy installed through the terminal according to the instructions with their site. I typed the code and got an error:run() missing 1 required positional argument: 'self'. I tried to google but I couldn’t find anything really.
from kivy.app import App
from kivy.uix.floatlayout import FloatLayout
class Container(FloatLayout):
pass
class MyApp(App):
def build(self):
return Container()
if __name__=='__main__':
MyApp().run()
in .kv file
<Container>:
Button:
background_normal: ''
background_color: 0.5, 0, 1, .5
size_hint: 0.3, 0.3
pos_hint: {'center_x' : 0.5 , 'center_y':0.5}
text:'Push'
color: 0,1,0.5,1
on_release:
self.text = 'on release'
full error traceback
| [
"the .kv file does not support multi-line entries so far as I know. The method on_release needs to reference a function and you would normally put that in the widget (root.your_function) or app (app.your_function). I changed the answer to use build_string only for convenience; it is a good idea to use a separate .kv file in your application as you did.\nfrom kivy.app import App\nfrom kivy.uix.floatlayout import FloatLayout\nfrom kivy.lang import Builder\n\nBuilder.load_string('''\n<Container>:\n Button:\n background_normal: ''\n background_color: 0.5, 0, 1, .5\n size_hint: 0.3, 0.3\n pos_hint: {'center_x' : 0.5 , 'center_y':0.5}\n text:'Push'\n color: 0,1,0.5,1\n # self argument here will be the button object\n on_release: app.button_callback(self)\n''')\n\n\nclass Container(FloatLayout):\n pass\n\n\nclass MyApp(App):\n\n def button_callback(self, my_button):\n print(my_button.text)\n self.count += 1\n my_button.text = f\"on_release {self.count}\"\n\n def build(self):\n self.count = 0\n return Container()\n\n\nif __name__=='__main__':\n MyApp().run()\n\n"
] | [
0
] | [] | [] | [
"kivy",
"python",
"ubuntu"
] | stackoverflow_0074673204_kivy_python_ubuntu.txt |
Q:
pickle data was truncated
I created a corpus and then stored it in a pickle file.
My messages file is a dataframe that is a collection of different news articles.
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
import re
ps = PorterStemmer()
corpus = []
for i in range(0, len(messages)):
review = re.sub('[^a-zA-Z]', ' ', messages['text'][i])
review = review.lower()
review = review.split()
review = [ps.stem(word) for word in review if not word in stopwords.words('english')]
review = ' '.join(review)
#print(i)
corpus.append(review)
import pickle
with open('corpus.pkl', 'wb') as f:
pickle.dump(corpus, f)
same code I ran on my laptop (jupyter notebook) and on google colab.
corpus.pkl => Google colab, downloaded with the following code:
from google.colab import files
files.download('corpus.pkl')
corpus1.pkl => saved from jupyter notebook code.
now When I run this code:
import pickle
with open('corpus.pkl', 'rb') as f: # google colab
corpus = pickle.load(f)
I get the following error:
UnpicklingError: pickle data was truncated
But this works fine:
import pickle
with open('corpus1.pkl', 'rb') as f: # jupyter notebook saved
corpus = pickle.load(f)
The only difference between the two is that corpus1.pkl was created and saved through a Jupyter notebook (locally) and corpus.pkl was saved on Google Colab and downloaded.
Could anybody tell me why this is happening?
for reference..
corpus.pkl => 36 MB
corpus1.pkl => 50.5 MB
A:
I would use the pickle file created by my local machine only; that works properly
A:
The problem occurs due to a partial download of the GloVe vectors. I uploaded the data through Colab's upload to session storage, and after that simply wrote this command; it works very well.
with open('/content/glove_vectors', 'rb') as f:
model = pickle.load(f)
glove_words = set(model.keys())
| pickle data was truncated | i created a corpus file then stored in a pickle file.
my messages file is a collection of different news articles dataframe.
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
import re
ps = PorterStemmer()
corpus = []
for i in range(0, len(messages)):
review = re.sub('[^a-zA-Z]', ' ', messages['text'][i])
review = review.lower()
review = review.split()
review = [ps.stem(word) for word in review if not word in stopwords.words('english')]
review = ' '.join(review)
#print(i)
corpus.append(review)
import pickle
with open('corpus.pkl', 'wb') as f:
pickle.dump(corpus, f)
same code I ran on my laptop (jupyter notebook) and on google colab.
corpus.pkl => Google colab, downloaded with the following code:
from google.colab import files
files.download('corpus.pkl')
corpus1.pkl => saved from jupyter notebook code.
now When I run this code:
import pickle
with open('corpus.pkl', 'rb') as f: # google colab
corpus = pickle.load(f)
I get the following error:
UnpicklingError: pickle data was truncated
But this works fine:
import pickle
with open('corpus1.pkl', 'rb') as f: # jupyter notebook saved
corpus = pickle.load(f)
The only difference between both is that corpus1.pkl is run and saved through Jupyter notebook (on local) and corpus.pkl is saved on google collab and downloaded.
Could anybody tell me why is this happening?
for reference..
corpus.pkl => 36 MB
corpus1.pkl => 50.5 MB
| [
"i would use pickle file created by my local machine only, that works properly\n",
"problem occurs due to partial download of glove vectors i have uploaded the data\nthrough colab upload to session storage and after that simply write this command\nit works very well.\nwith open('/content/glove_vectors', 'rb') as f:\nmodel = pickle.load(f)\nglove_words = set(model.keys())\n\n"
] | [
0,
0
] | [] | [] | [
"data_science",
"data_science_experience",
"pickle",
"python"
] | stackoverflow_0061718355_data_science_data_science_experience_pickle_python.txt |
Q:
Creating threaded popups in PySimpleGUI
I have trouble creating either multiple windows or popups using PySimpleGUI.
Each window/popup is supposed to be called from a separate thread and time out after 2 seconds.
Using the following implementation results in (as expected) this error: main thread is not in main loop.
How do I fix that?
def get_info():
while True:
info = get_details()
if info:
layout[]
window = sgWindow(...)
while True:
event, values = window.read(timeout=1000*2)
if event in (sg.WIN_CLOSED,): break
if event in ('__TIMEOUT__',):
window.close()
break
if event == "X":
window.close()
close = True
break
if event == "Y":
window.close()
close = True
break
for i in range(x):
t = threading.Thread(target=get_info())
t.start()
A:
This isn't possible because PySimpleGUI is not thread-safe. This means that you can only use it in the main thread of your program.
To fix this error, you can use a queue to communicate between the main thread and the other threads. Each thread can add an event to the queue, and the main thread can read from the queue and handle the events as they come in. This way, you can still use PySimpleGUI in the main thread while running your other code in separate threads.
import threading
import queue
import PySimpleGUI as sg
# Create a queue to communicate between threads
q = queue.Queue()
# Define a function to run in a separate thread
def get_info():
while True:
# Check if we need to exit the thread
if x:
break
# Create a window and show it
layout = []
window = sg.Window('Window', layout)
window.finalize()
window.show()
# Read events from the window
while True:
event, values = window.read(timeout=1000*2)
if event in (sg.WIN_CLOSED,):
break
if event in ('__TIMEOUT__',):
# Add an event to the queue to close the window in the main thread
q.put('close_window')
break
if event == "X":
# Add an event to the queue to close the window and set the "close" flag in the main thread
q.put('close_window_and_flag')
break
if event == "Y":
# Add an event to the queue to close the window and set the "close" flag in the main thread
q.put('close_window_and_flag')
break
# Start the thread
t = threading.Thread(target=get_info())
t.start()
# Main loop
while True:
# Check for events in the queue
if not q.empty():
event = q.get()
if event == 'close_window':
# Close the window in the main thread
window.close()
elif event == 'close_window_and_flag':
# Close the window and set the "close" flag in the main thread
window.close()
close = True
| Creating threaded popups in PySimpleGUI | I have trouble creating either multiple windows or popups using PySimpleGUI.
Each window/popup is supposed to be called from a seperate thread and timeout after 2 seconds.
Using following implementation results in (as expected) this error: main thread is not in main loop.
How do I fix that?
def get_info():
while True:
info = get_details()
if info:
layout[]
window = sgWindow(...)
while True:
event, values = window.read(timeout=1000*2)
if event in (sg.WIN_CLOSED,): break
if event in ('__TIMEOUT__',):
window.close()
break
if event == "X":
window.close()
close = True
break
if event == "Y":
window.close()
close = True
break
for i in range(x):
t = threading.Thread(target=get_info())
t.start()
| [
"This isn't possible because PySimpleGUI is not thread-safe. This means that you can only use it in the main thread of your program.\nTo fix this error, you can use a queue to communicate between the main thread and the other threads. Each thread can add an event to the queue, and the main thread can read from the queue and handle the events as they come in. This way, you can still use PySimpleGUI in the main thread while running your other code in separate threads.\nimport threading\nimport queue\nimport PySimpleGUI as sg\n\n# Create a queue to communicate between threads\nq = queue.Queue()\n\n# Define a function to run in a separate thread\ndef get_info():\n while True:\n # Check if we need to exit the thread\n if x:\n break\n \n # Create a window and show it\n layout = []\n window = sg.Window('Window', layout)\n window.finalize()\n window.show()\n \n # Read events from the window\n while True:\n event, values = window.read(timeout=1000*2)\n if event in (sg.WIN_CLOSED,):\n break\n if event in ('__TIMEOUT__',):\n # Add an event to the queue to close the window in the main thread\n q.put('close_window')\n break\n if event == \"X\":\n # Add an event to the queue to close the window and set the \"close\" flag in the main thread\n q.put('close_window_and_flag')\n break\n if event == \"Y\":\n # Add an event to the queue to close the window and set the \"close\" flag in the main thread\n q.put('close_window_and_flag')\n break\n\n# Start the thread\nt = threading.Thread(target=get_info())\nt.start()\n\n# Main loop\nwhile True:\n # Check for events in the queue\n if not q.empty():\n event = q.get()\n if event == 'close_window':\n # Close the window in the main thread\n window.close()\n elif event == 'close_window_and_flag':\n # Close the window and set the \"close\" flag in the main thread\n window.close()\n close = True\n\n"
] | [
1
] | [] | [] | [
"multithreading",
"pysimplegui",
"python",
"user_interface"
] | stackoverflow_0074678475_multithreading_pysimplegui_python_user_interface.txt |
Q:
How can I write this sql query in django orm?
I have a sql query that works like this, but I couldn't figure out how to write this query in django. Can you help me ?
select datetime,
array_to_json(array_agg(json_build_object(parameter, raw))) as parameters
from dbp_istasyondata
group by 1
order by 1;
A:
You can use the raw function of the Django ORM. You can write your query like this:
YourModel.objects.raw('select * from your table')  # ---> Change the model name and query
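If you would rather stay in the ORM, here is a hedged sketch using PostgreSQL-specific expressions (the model name IstasyonData and the field names are assumptions based on your table; requires django.contrib.postgres):
from django.db.models import F, Func
from django.contrib.postgres.aggregates import JSONBAgg

(IstasyonData.objects
    .values('datetime')
    .annotate(parameters=JSONBAgg(Func(F('parameter'), F('raw'), function='jsonb_build_object')))
    .order_by('datetime'))
This mirrors jsonb_agg(jsonb_build_object(parameter, raw)) grouped by datetime, which is close to the array_to_json/array_agg/json_build_object combination in the raw SQL.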
| How can I write this sql query in django orm? | I have a sql query that works like this, but I couldn't figure out how to write this query in django. Can you help me ?
select datetime,
array_to_json(array_agg(json_build_object(parameter, raw))) as parameters
from dbp_istasyondata
group by 1
order by 1;
| [
"You can use raw function of django orm. You can write your query like this:\nYourModel.objects.raw('select * from your table'): #---> Change the model name and query\n\n\n"
] | [
0
] | [] | [] | [
"django",
"django_models",
"django_orm",
"python"
] | stackoverflow_0074678065_django_django_models_django_orm_python.txt |
Q:
How to make a movie out of images in python
I currently try to make a movie out of images, but I could not find anything helpful.
Here is my code so far:
import time
from PIL import ImageGrab
x =0
while True:
try:
x+= 1
ImageGrab().grab().save('img{}.png'.format(str(x))
except:
movie = #Idontknow
for _ in range(x):
movie.save("img{}.png".format(str(_)))
movie.save()
A:
You could consider using an external tool like ffmpeg to merge the images into a movie (see answer here) or you could try to use OpenCv to combine the images into a movie like the example here.
I'm attaching below a code snipped I used to combine all png files from a folder called "images" into a video.
import cv2
import os
image_folder = 'images'
video_name = 'video.avi'
images = [img for img in os.listdir(image_folder) if img.endswith(".png")]
frame = cv2.imread(os.path.join(image_folder, images[0]))
height, width, layers = frame.shape
video = cv2.VideoWriter(video_name, 0, 1, (width,height))
for image in images:
video.write(cv2.imread(os.path.join(image_folder, image)))
cv2.destroyAllWindows()
video.release()
It seems that the most commented section of this answer is the use of VideoWriter. You can look up its documentation in the link of this answer (static) or you can do a bit of digging of your own. The first parameter is the filename, followed by an integer (fourcc in the documentation, the codec used), the FPS count and a tuple of the dimensions of the frame. If you really like digging in that can of worms, here's the fourcc video codecs list.
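For example, instead of passing 0 for the codec you can build a fourcc code explicitly (a sketch; choose a codec that matches your container):
fourcc = cv2.VideoWriter_fourcc(*'XVID')  # or *'mp4v' for .mp4 output
video = cv2.VideoWriter(video_name, fourcc, 1, (width, height))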
A:
Thanks, but I found an alternative solution using ffmpeg:
def save():
os.system("ffmpeg -r 1 -i img%01d.png -vcodec mpeg4 -y movie.mp4")
But thank you for your help :)
A:
Here is a minimal example using moviepy. For me this was the easiest solution.
import os
import moviepy.video.io.ImageSequenceClip
image_folder='folder_with_images'
fps=1
image_files = [os.path.join(image_folder,img)
for img in os.listdir(image_folder)
if img.endswith(".png")]
clip = moviepy.video.io.ImageSequenceClip.ImageSequenceClip(image_files, fps=fps)
clip.write_videofile('my_video.mp4')
A:
I use the ffmpeg-python binding. You can find more information here.
import ffmpeg
(
ffmpeg
.input('/path/to/jpegs/*.jpg', pattern_type='glob', framerate=25)
.output('movie.mp4')
.run()
)
A:
When using moviepy's ImageSequenceClip it is important that the images are in an ordered sequence.
While the documentation states that the frames can be ordered alphanumerically under the hood, I found this not to be the case.
So, if you are having problems, make sure to manually order the frames first.
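For instance, a small natural-sort sketch (it assumes the frames are named like frame_1.png, frame_2.png, ...):
import os
import re

def frame_number(path):
    # pull the integer out of the file name so frame_10.png sorts after frame_2.png
    match = re.search(r'(\d+)', os.path.basename(path))
    return int(match.group(1)) if match else -1

image_files = sorted(image_files, key=frame_number)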
A:
@Wei Shan Lee (and others): Sure, my whole code looks like this
import os
import moviepy.video.io.ImageSequenceClip
from PIL import Image, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
image_files = []
for img_number in range(1,20):
image_files.append(path_to_images + 'image_folder/image_' + str(img_number) + '.png')
fps = 30
clip = moviepy.video.io.ImageSequenceClip.ImageSequenceClip(image_files, fps=fps)
clip.write_videofile(path_to_videos + 'my_new_video.mp4')
A:
I've created a function to do this. Similar to the first answer (using opencv) but wanted to add that for me, ".mp4" format did not work. That's why I use the raise within the function.
import cv2
import typing
import numpy as np
def write_video(video_path_out:str,
frames_sequence:typing.Tuple[np.ndarray,...]):
if ".mp4" in video_path_out: raise ValueError("[ERROR] This method does not support .mp4; try .avi instead")
height, width, _ = frames_sequence[0].shape
# 0 means no preprocesing
# 1 means each image will be played with 1 sec delay (1fps)
out = cv2.VideoWriter(video_path_out,0, 1,(width,height))
for frame in frames_sequence:
out.write(frame)
out.release()
# you can use as much images as you need, I just use 3 for this example
# put your img1_path,img2_path, img3_path
img1 = cv2.imread(img1_path)
img2 = cv2.imread(img2_path)
img3 = cv2.imread(img3_path)
# img1 can be cv2.imread out; which is a np.ndarray; you can also se PIL
# if you'd like to.
frames_sequence = [img1,img2,img3]
write_video(video_path_out = "mypath_outvideo.avi",
frames_sequence = frames_sequence
)
Hope it's useful!
| How to make a movie out of images in python | I currently try to make a movie out of images, but i could not find anything helpful .
Here is my code so far:
import time
from PIL import ImageGrab
x =0
while True:
try:
x+= 1
ImageGrab().grab().save('img{}.png'.format(str(x))
except:
movie = #Idontknow
for _ in range(x):
movie.save("img{}.png".format(str(_)))
movie.save()
| [
"You could consider using an external tool like ffmpeg to merge the images into a movie (see answer here) or you could try to use OpenCv to combine the images into a movie like the example here.\nI'm attaching below a code snipped I used to combine all png files from a folder called \"images\" into a video.\nimport cv2\nimport os\n\nimage_folder = 'images'\nvideo_name = 'video.avi'\n\nimages = [img for img in os.listdir(image_folder) if img.endswith(\".png\")]\nframe = cv2.imread(os.path.join(image_folder, images[0]))\nheight, width, layers = frame.shape\n\nvideo = cv2.VideoWriter(video_name, 0, 1, (width,height))\n\nfor image in images:\n video.write(cv2.imread(os.path.join(image_folder, image)))\n\ncv2.destroyAllWindows()\nvideo.release()\n\nIt seems that the most commented section of this answer is the use of VideoWriter. You can look up it's documentation in the link of this answer (static) or you can do a bit of digging of your own. The first parameter is the filename, followed by an integer (fourcc in the documentation, the codec used), the FPS count and a tuple of the dimensions of the frame. If you really like digging in that can of worms, here's the fourcc video codecs list.\n",
"Thanks , but i found an alternative solution using ffmpeg:\ndef save():\n os.system(\"ffmpeg -r 1 -i img%01d.png -vcodec mpeg4 -y movie.mp4\")\n\nBut thank you for your help :) \n",
"Here is a minimal example using moviepy. For me this was the easiest solution.\nimport os\nimport moviepy.video.io.ImageSequenceClip\nimage_folder='folder_with_images'\nfps=1\n\nimage_files = [os.path.join(image_folder,img)\n for img in os.listdir(image_folder)\n if img.endswith(\".png\")]\nclip = moviepy.video.io.ImageSequenceClip.ImageSequenceClip(image_files, fps=fps)\nclip.write_videofile('my_video.mp4')\n\n",
"I use the ffmpeg-python binding. You can find more information here.\nimport ffmpeg\n(\n ffmpeg\n .input('/path/to/jpegs/*.jpg', pattern_type='glob', framerate=25)\n .output('movie.mp4')\n .run()\n)\n\n",
"When using moviepy's ImageSequenceClip it is important that the images are in an ordered sequence.\nWhile the documentation states that the frames can be ordered alphanumerically under the hood, I found this not to be the case.\nSo, if you are having problems, make sure to manually order the frames first.\n",
"@Wei Shan Lee (and others): Sure, my whole code looks like this\nimport os\nimport moviepy.video.io.ImageSequenceClip\nfrom PIL import Image, ImageFile\nImageFile.LOAD_TRUNCATED_IMAGES = True\n\nimage_files = []\n\nfor img_number in range(1,20): \n image_files.append(path_to_images + 'image_folder/image_' + str(img_number) + '.png') \n\nfps = 30\n\nclip = moviepy.video.io.ImageSequenceClip.ImageSequenceClip(image_files, fps=fps)\nclip.write_videofile(path_to_videos + 'my_new_video.mp4')\n\n",
"I've created a function to do this. Similar to the first answer (using opencv) but wanted to add that for me, \".mp4\" format did not work. That's why I use the raise within the function.\nimport cv2\nimport typing\n\ndef write_video(video_path_out:str,\n frames_sequence:typing.Tuple[np.ndarray,...]):\n \n if \".mp4\" in video_path_out: raise ValueError(\"[ERROR] This method does not support .mp4; try .avi instead\")\n height, width, _ = frames_sequence[0].shape\n # 0 means no preprocesing\n # 1 means each image will be played with 1 sec delay (1fps)\n out = cv2.VideoWriter(video_path_out,0, 1,(width,height))\n for frame in frames_sequence:\n out.write(frame)\n out.release()\n\n# you can use as much images as you need, I just use 3 for this example\n# put your img1_path,img2_path, img3_path\n\nimg1 = cv2.imread(img1_path)\nimg2 = cv2.imread(img2_path)\nimg3 = cv2.imread(img3_path)\n# img1 can be cv2.imread out; which is a np.ndarray; you can also se PIL\n# if you'd like to.\nframes_sequence = [img1,img2,img3]\n\nwrite_video(video_path_out = \"mypath_outvideo.avi\",\nframes_sequence = frames_sequence\n)\n\nHope it's useful!\n"
] | [
147,
54,
29,
16,
1,
0,
0
] | [] | [] | [
"image",
"python",
"screenshot",
"video"
] | stackoverflow_0044947505_image_python_screenshot_video.txt |
Q:
How to store data in separate memory modules
I'm working on an image processing pipeline in Python and I'm using Cython for the main computation so that it can run really fast. From early benchmarks, I found a memory bottleneck where the code would not scale at all using multiple threads.
I revised the algorithm a bit to reduce the bandwidth required and now it scales to 2 cores (4 threads with hyperthreading) but it still becomes bottlenecked by memory bandwidth. You can find the different versions of the algorithm here if you are curious: https://github.com/2332575Y/
I have confirmed this by running the benchmark on an i7-6700HQ (scales to 4 threads), i5-7600K (scales to 2 threads (cores) as the i5 has no hyper-threading), and an R9-5950X (scales to 4 threads). Also despite the massive performance differences between these CPUs, the relative performance between them is exactly the same difference between the memory speeds. You can find the benchmarks performed by the 6700HQ here:
https://github.com/2332575Y/Retina-V3/blob/main/Untitled.ipynb
and the 5950x benchmarks are:
All of these benchmarks are performed without any manual memory management and since the overall size of the data is relatively small (120MB), I would assume python puts them on a single memory stick (all of the systems have dual-channel memory). I'm not sure if it is possible to somehow tell python to split the data and store it on different physical memory modules so that the algorithm can take advantage of the dual-channel memory. I tried googling ways to do this in C++ but that wasn't successful either. Is memory automatically managed by the OS or is it possible to do this?
P.S.: before you comment, I have made sure to split the inputs as evenly as possible. Also, the sampling algorithm is extremely simple (multiply and accumulate), so having a memory bottleneck is not an absurd concept (it is actually pretty common in image processing algorithms).
A:
The OS manages mapping the program's virtual address space to the different physical addresses (RAM sticks, pagefile, etc.); this is transparent to Python or any programming language, and all of your systems were already using both sticks for reads and writes.
The fact that both float64 and float32 have the same performance means each core's cache is almost never used, so consider making better use of it in your algorithms; make your code more cache friendly.
While this may seem hard for a loop that just computes a multiplication, you can group computations to reduce the number of times you access the memory.
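As a rough, hypothetical sketch of what grouping computations can look like (process the image in tiles so each tile stays in cache instead of materialising a full-size temporary array):
import numpy as np

def blocked_multiply_accumulate(image, weights, block=256):
    # image and weights are 2D float arrays of the same shape
    total = 0.0
    h, w = image.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = image[i:i + block, j:j + block] * weights[i:i + block, j:j + block]
            total += float(tile.sum())
    return total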
Edit: also consider shifting the work to the GPU or TPU as dedicated gpus are very good at convolutions, and have much faster memories.
| How to store data in seperate memory mdoules | I'm working on an image processing pipeline in Python and I'm using Cython for the main computation so that it can run really fast. From early benchmarks, I found a memory bottleneck where the code would not scale at all using multiple threads.
I revised the algorithm a bit to reduce the bandwidth required and now it scales to 2 cores (4 threads with hyperthreading) but it still becomes bottlenecked by memory bandwidth. You can find the different versions of the algorithm here if you are curious: https://github.com/2332575Y/
I have confirmed this by running the benchmark on an i7-6700HQ (scales to 4 threads), i5-7600K (scales to 2 threads (cores) as the i5 has no hyper-threading), and an R9-5950X (scales to 4 threads). Also despite the massive performance differences between these CPUs, the relative performance between them is exactly the same difference between the memory speeds. You can find the benchmarks performed by the 6700HQ here:
https://github.com/2332575Y/Retina-V3/blob/main/Untitled.ipynb
and the 5950x benchmarks are:
All of these benchmarks are performed without any manual memory management and since the overall size of the data is relatively small (120MB), I would assume python puts them on a single memory stick (all of the systems have dual-channel memory). I'm not sure if it is possible to somehow tell python to split the data and store it on different physical memory modules so that the algorithm can take advantage of the dual-channel memory. I tried googling ways to do this in C++ but that wasn't successful either. Is memory automatically managed by the OS or is it possible to do this?
P.S.: before you comment, I have made sure to split the inputs as evenly as possible. Also, the sampling algorithm is extremely simple (multiply and accumulate), so having a memory bottleneck is not an absurd concept (it is actually pretty common in image processing algorithms).
| [
"The OS manages splitting the program virtual address space to the different physical addresses (Ram sticks, pagefile, etc) this is transparent to python or any programming language, all systems were already using both sticks for read and write.\nThe fact that both float64 and float32 have the same performance means each core cache is almost never used, so consider making better use of it in your algorithms, make your code more cache friendly.\nWhile this may seem hard for a loop that just computes a multiplication, you can group computations to reduce the number of times you access the memory.\nEdit: also consider shifting the work to the GPU or TPU as dedicated gpus are very good at convolutions, and have much faster memories.\n"
] | [
2
] | [] | [] | [
"memory",
"memory_management",
"multithreading",
"python"
] | stackoverflow_0074678047_memory_memory_management_multithreading_python.txt |
Q:
use Elasticsearch, for h in one['hits']['hits']: KeyError: 'hits'
the ES code [screenshot omitted]
the wrong error [screenshot omitted]
I use the above code to query Elasticsearch, but I get this error on the line for h in one['hits']['hits']:
KeyError: 'hits'
A:
It looks like you are trying to iterate through the hits returned by an Elasticsearch query, but the hits key is not present in the one dictionary. This is likely because your Elasticsearch query did not return any results, or because there was an error in the query itself.
If you want to iterate through the hits returned by an Elasticsearch query, you should first check that the hits key is present in the one dictionary. You can do this using an if statement, like this:
if 'hits' in one:
for h in one['hits']['hits']:
# do something with each hit here
Alternatively, you can use a try...except block to handle the KeyError if it occurs:
try:
for h in one['hits']['hits']:
# do something with each hit here
except KeyError:
print("No hits found in Elasticsearch query")
It is also a good idea to check the status of the Elasticsearch query by looking at the status_code of the response. If the query was successful, the status_code should be 200. If there was an error in the query, the status_code will be different and you can use it to troubleshoot the problem.
| use Elasticsearch, for h in one['hits']['hits']: KeyError: 'hits' | the es codeenter image description here
wrong error
I use above code to do elasticsearch, but meet the wrong error, for h in one['hits']['hits']:
KeyError: 'hits'
| [
"It looks like you are trying to iterate through the hits returned by an Elasticsearch query, but the hits key is not present in the one dictionary. This is likely because your Elasticsearch query did not return any results, or because there was an error in the query itself.\nIf you want to iterate through the hits returned by an Elasticsearch query, you should first check that the hits key is present in the one dictionary. You can do this using an if statement, like this:\nif 'hits' in one:\n for h in one['hits']['hits']:\n # do something with each hit here\n\n\nAlternatively, you can use a try...except block to handle the KeyError if it occurs:\ntry:\n for h in one['hits']['hits']:\n # do something with each hit here\nexcept KeyError:\n print(\"No hits found in Elasticsearch query\")\n\nIt is also a good idea to check the status of the Elasticsearch query by looking at the status_code of the response. If the query was successful, the status_code should be 200. If there was an error in the query, the status_code will be different and you can use it to troubleshoot the problem.\n"
] | [
0
] | [] | [] | [
"data_retrieval",
"elasticsearch",
"http",
"information_retrieval",
"python"
] | stackoverflow_0074665990_data_retrieval_elasticsearch_http_information_retrieval_python.txt |
Q:
django.core.exceptions.ImproperlyConfigured: AUTH_USER_MODEL refers to model 'webservice_again.CustomUser' that has not been installed
Complete Error:
"AUTH_USER_MODEL refers to model '%s' that has not been installed" % settings.AUTH_USER_MODEL
django.core.exceptions.ImproperlyConfigured: AUTH_USER_MODEL refers to model 'webservice_again.CustomUser' that has not been installed
model.py
from builtins import ValueError
from datetime import date
import django
from django.contrib.auth.base_user import BaseUserManager
from django.contrib.auth.models import AbstractBaseUser
from django.db import models
class CustomUserManager(BaseUserManager):
def create_user(self, email, password= None, full_name="ABC",type = 0):
if not email:
raise ValueError("Email Required.")
if not password:
raise ValueError("Password Required.")
user_obj =self.model(
self.normalize_email(email)
)
user_obj.set_password(password)
user_obj.full_name = full_name
user_obj.type = type
user_obj.save(using = self._db)
return user_obj
class CustomUser(AbstractBaseUser):
email =models.EmailField(unique=True, max_length=255)
full_name = models.CharField(max_length=255, blank=False)
dob = models.DateField(default=date.today)
type = models.IntegerField(default=0)
create_time = models.DateTimeField(auto_now_add=True)
USERNAME_FIELD = 'email'
REQUIRED_FIELDS = ['full_name']
objects = CustomUserManager()
def get_full_name(self):
return self.full_name
def __str__(self):
return self.full_name
settings.py
INSTALLED_APPS = [
'webservice_again',
'web_service',
'rest_framework',
'rest_framework.authtoken',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
AUTH_USER_MODEL = "webservice_again.CustomUser"
I know this question is a duplicate, but I am asking it again after going through all the provided solutions without success.
Any help is appreciated.
A:
By default Django looks for models in models.py, so try renaming your model.py file to models.py. If you instead have a models folder which houses all your model files, then import the CustomUser model in the __init__.py file located within that models folder. This should solve it!
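For example, a minimal sketch of that second case (assuming the app is webservice_again and the model lives in a file called custom_user.py, a name made up here for illustration):
# webservice_again/models/__init__.py
from .custom_user import CustomUser  # re-export so Django's app registry can find the model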
| django.core.exceptions.ImproperlyConfigured: AUTH_USER_MODEL refers to model 'webservice_again.CustomUser' that has not been installed | Complete Error:
"AUTH_USER_MODEL refers to model '%s' that has not been installed" % settings.AUTH_USER_MODEL
django.core.exceptions.ImproperlyConfigured: AUTH_USER_MODEL refers to model 'webservice_again.CustomUser' that has not been installed
model.py
from builtins import ValueError
from datetime import date
import django
from django.contrib.auth.base_user import BaseUserManager
from django.contrib.auth.models import AbstractBaseUser
from django.db import models
class CustomUserManager(BaseUserManager):
def create_user(self, email, password= None, full_name="ABC",type = 0):
if not email:
raise ValueError("Email Required.")
if not password:
raise ValueError("Password Required.")
user_obj =self.model(
self.normalize_email(email)
)
user_obj.set_password(password)
user_obj.full_name = full_name
user_obj.type = type
user_obj.save(using = self._db)
return user_obj
class CustomUser(AbstractBaseUser):
email =models.EmailField(unique=True, max_length=255)
full_name = models.CharField(max_length=255, blank=False)
dob = models.DateField(default=date.today)
type = models.IntegerField(default=0)
create_time = models.DateTimeField(auto_now_add=True)
USERNAME_FIELD = 'email'
REQUIRED_FIELDS = ['full_name']
objects = CustomUserManager()
def get_full_name(self):
return self.full_name
def __str__(self):
return self.full_name
settings.py
INSTALLED_APPS = [
'webservice_again',
'web_service',
'rest_framework',
'rest_framework.authtoken',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
AUTH_USER_MODEL = "webservice_again.CustomUser"
Guys, I know this question is duplicate, but after going through all the provided solutions, i am asking this solution.
Any help is appreciated.
| [
"By default django looks for models in models.py. Try changing model.py file to models.py. If you somehow have a models folder which houses all your model files, then import the CustomUser model in __init__.py file located within the models folder. This should solve it!\n"
] | [
0
] | [] | [] | [
"django",
"django_models",
"django_users",
"python"
] | stackoverflow_0055203871_django_django_models_django_users_python.txt |
Q:
Celery worker running tensorflow unable to create CUDA event
I am loading a TensorFlow model in the Celery worker, but when I try to run a task on the worker, it shows the following error:
[2018-09-19 10:29:39,753: INFO/MainProcess] Received task: analyze_atom[f6bb76cc-aa16-4761-a7cf-0ed111886ff8]
[2018-09-19 10:29:41,198: WARNING/ForkPoolWorker-2] paper checkpoint1 takes 1.433300495147705 senconds
2018-09-19 10:29:41.318467: E tensorflow/core/grappler/clusters/utils.cc:81] Failed to get device properties, error code: 3
2018-09-19 10:29:42.650529: E tensorflow/stream_executor/event.cc:40] could not create CUDA event: CUDA_ERROR_NOT_INITIALIZED
[2018-09-19 10:29:42,673: ERROR/MainProcess] Process 'ForkPoolWorker-2' pid:3782 exited with 'signal 11 (SIGSEGV)'
[2018-09-19 10:29:42,704: ERROR/MainProcess] Task handler raised error: WorkerLostError('Worker exited prematurely: signal 11 (SIGSEGV).',)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/billiard/pool.py", line 1223, in mark_as_worker_lost
human_status(exitcode)),
billiard.exceptions.WorkerLostError: Worker exited prematurely: signal 11 (SIGSEGV).
This is a TensorFlow model, and when Celery is started the model is loaded successfully on the GPU; here is the worker startup log:
totalMemory: 15.90GiB freeMemory: 15.61GiB
2018-09-19 10:35:38.431559: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-09-19 10:35:38.793007: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-09-19 10:35:38.793054: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0
2018-09-19 10:35:38.793063: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N
2018-09-19 10:35:38.793487: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15131 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:08.0, compute capability: 6.0)
2018-09-19 10:35:40.552010: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-09-19 10:35:40.552073: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-09-19 10:35:40.552080: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0
2018-09-19 10:35:40.552085: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N
2018-09-19 10:35:40.552327: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15131 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:08.0, compute capability: 6.0)
2018-09-19 10:35:41.304281: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-09-19 10:35:41.304336: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-09-19 10:35:41.304344: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0
2018-09-19 10:35:41.304348: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N
2018-09-19 10:35:41.304574: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15131 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:08.0, compute capability: 6.0)
2018-09-19 10:35:43.013963: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-09-19 10:35:43.014025: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-09-19 10:35:43.014033: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0
2018-09-19 10:35:43.014038: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N
2018-09-19 10:35:43.037554: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15131 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:08.0, compute capability: 6.0)
2018-09-19 10:35:43.916442: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-09-19 10:35:43.916500: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-09-19 10:35:43.916507: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0
2018-09-19 10:35:43.916512: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N
2018-09-19 10:35:43.916752: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15131 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:08.0, compute capability: 6.0)
2018-09-19 10:35:44.137238: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-09-19 10:35:44.137296: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-09-19 10:35:44.137304: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0
2018-09-19 10:35:44.137308: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N
2018-09-19 10:35:44.137563: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15131 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:08.0, compute capability: 6.0)
[2018-09-19 10:35:44,650: INFO/MainProcess] Connected to amqp://yjyx:**@118.178.129.156:5672/yjyx
[2018-09-19 10:35:44,667: INFO/MainProcess] mingle: searching for neighbors
[2018-09-19 10:35:45,716: INFO/MainProcess] mingle: sync with 1 nodes
[2018-09-19 10:35:45,717: INFO/MainProcess] mingle: sync complete
[2018-09-19 10:35:45,750: INFO/MainProcess] celery@yjyx-gpu-1 ready.
I also see that the GPU memory is allocated (screenshot in the original post).
I am using supervisor to run celery and here is supervisor config:
[program:celeryworker_paperanalyzer]
process_name=%(process_num)02d
directory=/home/yjyx/yijiao_src/yijiao_main
command=celery worker -A project.celerytasks.celery_worker_init -Q paperanalyzer -c 2 --loglevel=INFO
user=yjyx
numprocs=1
stdout_logfile=/home/yjyx/log/celeryworker_paperanalyzer0.log
stderr_logfile=/home/yjyx/log/celeryworker_paperanalyzer1.log
stdout_logfile_maxbytes=50MB ; maximum size of logfile before rotation
stderr_logfile_maxbytes=50MB
stderr_logfile_backups=10 ; number of backed up logfiles
stdout_logfile_backups=10
autostart=false
autorestart=false
startsecs=5
stopwaitsecs=8
killasgroup=true
priority=1000
Here is celery task code snippet:
@shared_task(name="analyze_atom", queue="paperanalyzer")
def analyze_atom(image_urls, targetdir=target_path, studentuid=None):
try:
if targetdir is not None and os.path.exists(targetdir):
os.chdir(targetdir)
paper = Paper(image_urls, studentuid)
for image_url in paper.image_urls:
if type(image_url) == str:
paper.analyze(image_url) # tensorflow inference get called within paper.analyze
elif type(image_url) == dict:
paper.analyze(image_url['url'], str(image_url['pn']), image_url.get('cormode', 0))
return paper.data
except Exception as e:
logger.log(40, traceback.print_exc())
logger.log(40, e)
return {}
I am sure the whole procedure should be OK. Previously I used OpenCV within paper.analyze to handle the job and it worked well; now I have just switched from OpenCV to TensorFlow.
Env: Python3.6.4; Tensorflow 1.8; celery 4.0.2; OS: Centos 7.2
Any help will be really appreciated. :-)
Thanks.
Wesley
A:
Changing things to single-threaded is an easy fix. You can resolve this issue by adding -P solo to your celery command
i.e.:
celery -A APP worker -P solo --loglevel=info
Note: APP is your app name.
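Applied to the supervisor config from the question, that would mean changing the command line to something like this (a sketch; -c 2 is dropped because the solo pool runs everything in a single process):
command=celery worker -A project.celerytasks.celery_worker_init -Q paperanalyzer -P solo --loglevel=INFO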
| Celery worker running tensorflow unable to create CUDA event | I am loading tensorflow model to the celery worker but when I try to run a task on the worker, it shows the following error:
[2018-09-19 10:29:39,753: INFO/MainProcess] Received task: analyze_atom[f6bb76cc-aa16-4761-a7cf-0ed111886ff8]
[2018-09-19 10:29:41,198: WARNING/ForkPoolWorker-2] paper checkpoint1 takes 1.433300495147705 senconds
2018-09-19 10:29:41.318467: E tensorflow/core/grappler/clusters/utils.cc:81] Failed to get device properties, error code: 3
2018-09-19 10:29:42.650529: E tensorflow/stream_executor/event.cc:40] could not create CUDA event: CUDA_ERROR_NOT_INITIALIZED
[2018-09-19 10:29:42,673: ERROR/MainProcess] Process 'ForkPoolWorker-2' pid:3782 exited with 'signal 11 (SIGSEGV)'
[2018-09-19 10:29:42,704: ERROR/MainProcess] Task handler raised error: WorkerLostError('Worker exited prematurely: signal 11 (SIGSEGV).',)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/billiard/pool.py", line 1223, in mark_as_worker_lost
human_status(exitcode)),
billiard.exceptions.WorkerLostError: Worker exited prematurely: signal 11 (SIGSEGV).
This is a tensorflow model and when the celery is started the model is loaded successfully on GPU, here is the work started log:
totalMemory: 15.90GiB freeMemory: 15.61GiB
2018-09-19 10:35:38.431559: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-09-19 10:35:38.793007: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-09-19 10:35:38.793054: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0
2018-09-19 10:35:38.793063: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N
2018-09-19 10:35:38.793487: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15131 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:08.0, compute capability: 6.0)
2018-09-19 10:35:40.552010: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-09-19 10:35:40.552073: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-09-19 10:35:40.552080: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0
2018-09-19 10:35:40.552085: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N
2018-09-19 10:35:40.552327: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15131 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:08.0, compute capability: 6.0)
2018-09-19 10:35:41.304281: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-09-19 10:35:41.304336: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-09-19 10:35:41.304344: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0
2018-09-19 10:35:41.304348: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N
2018-09-19 10:35:41.304574: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15131 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:08.0, compute capability: 6.0)
2018-09-19 10:35:43.013963: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-09-19 10:35:43.014025: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-09-19 10:35:43.014033: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0
2018-09-19 10:35:43.014038: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N
2018-09-19 10:35:43.037554: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15131 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:08.0, compute capability: 6.0)
2018-09-19 10:35:43.916442: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-09-19 10:35:43.916500: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-09-19 10:35:43.916507: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0
2018-09-19 10:35:43.916512: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N
2018-09-19 10:35:43.916752: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15131 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:08.0, compute capability: 6.0)
2018-09-19 10:35:44.137238: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-09-19 10:35:44.137296: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-09-19 10:35:44.137304: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0
2018-09-19 10:35:44.137308: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N
2018-09-19 10:35:44.137563: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15131 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:08.0, compute capability: 6.0)
[2018-09-19 10:35:44,650: INFO/MainProcess] Connected to amqp://yjyx:**@118.178.129.156:5672/yjyx
[2018-09-19 10:35:44,667: INFO/MainProcess] mingle: searching for neighbors
[2018-09-19 10:35:45,716: INFO/MainProcess] mingle: sync with 1 nodes
[2018-09-19 10:35:45,717: INFO/MainProcess] mingle: sync complete
[2018-09-19 10:35:45,750: INFO/MainProcess] celery@yjyx-gpu-1 ready.
I also see that the GPU memory is allocated:
I am using supervisor to run celery and here is supervisor config:
[program:celeryworker_paperanalyzer]
process_name=%(process_num)02d
directory=/home/yjyx/yijiao_src/yijiao_main
command=celery worker -A project.celerytasks.celery_worker_init -Q paperanalyzer -c 2 --loglevel=INFO
user=yjyx
numprocs=1
stdout_logfile=/home/yjyx/log/celeryworker_paperanalyzer0.log
stderr_logfile=/home/yjyx/log/celeryworker_paperanalyzer1.log
stdout_logfile_maxbytes=50MB ; maximum size of logfile before rotation
stderr_logfile_maxbytes=50MB
stderr_logfile_backups=10 ; number of backed up logfiles
stdout_logfile_backups=10
autostart=false
autorestart=false
startsecs=5
stopwaitsecs=8
killasgroup=true
priority=1000
Here is celery task code snippet:
@shared_task(name="analyze_atom", queue="paperanalyzer")
def analyze_atom(image_urls, targetdir=target_path, studentuid=None):
try:
if targetdir is not None and os.path.exists(targetdir):
os.chdir(targetdir)
paper = Paper(image_urls, studentuid)
for image_url in paper.image_urls:
if type(image_url) == str:
paper.analyze(image_url) # tensorflow inference get called within paper.analyze
elif type(image_url) == dict:
paper.analyze(image_url['url'], str(image_url['pn']), image_url.get('cormode', 0))
return paper.data
except Exception as e:
logger.log(40, traceback.print_exc())
logger.log(40, e)
return {}
I am sure the whole procedure should be OK, actually, I used opencv within paper.analyze to handle the job, and, it works well, now I just change opencv to tensorflow.
Env: Python3.6.4; Tensorflow 1.8; celery 4.0.2; OS: Centos 7.2
Any help will be really appreciated. :-)
Thanks.
Wesley
| [
"Changing things to single-threaded is an easy fix. You can resolve this issue by adding -P solo to your celery command\ni.e:\ncelery -app APP worker -P solo --loglelvel=info\n\nNote: APP is your app name.\n"
] | [
0
] | [] | [] | [
"celery",
"python",
"tensorflow"
] | stackoverflow_0052397450_celery_python_tensorflow.txt |
Q:
How to send json formatted messages to Slack through Cloud functions?
I am trying to send a JSON-formatted message to Slack through a Cloud Function using slack_sdk; if I send it like this (not formatted) it works.
client = WebClient(token='xoxb-25.......')
try:
response = client.chat_postMessage(channel='#random', text=DICTIONARY)
I found the documentation on Slack that chat_postMessage supports sending json formats by setting the HTTP headers:
Content-type: application/json
Authorization: Bearer xoxb-25xxxxxxx-xxxx
How would that work applied to my code above? I want to send a big Python dictionary and would like to receive it formatted in the Slack channel. I tried adding it in multiple ways, but the deployment fails.
This is the documentation: https://api.slack.com/web
A:
Bit late, but I hope this can help others who stumble upon this issue in the future.
I think that you've misunderstood the documentation. The JSON support allows for accepting POST message bodies in JSON format, as only application/x-www-form-urlencoded format was supported earlier. Read more here.
To answer your question, you can try to send the dictionary by formatting it or in a code block as Slack API supports markdown.
Reference- Slack Text Formatting.
Sample Code-
from slack_sdk import WebClient
import json
client = WebClient(token="xoxb........-")
json_message = {
"title": "Tom Sawyer",
"author": "Twain, Mark",
"year_written": 1862,
"edition": "Random House",
"price": 7.75
}
# format and send as a text block
formatted_text = f"```{json.dumps(json_message, indent = 2)}```"
client.chat_postMessage(channel = "#general", text = formatted_text)
# format and send as a code block
formatted_code_block = json.dumps(json_message, indent = 2)
client.chat_postMessage(channel = "#general", text = formatted_code_block)
Output-
| How to send json formatted messages to Slack through Cloud functions? | I am trying to send a json formatted message to Slack through a Cloud function using slack_sdk, if I send it like this (not formatted) it works.
client = WebClient(token='xoxb-25.......')
try:
response = client.chat_postMessage(channel='#random', text=DICTIONARY)
I found the documentation on Slack that chat_postMessage supports sending json formats by setting the HTTP headers:
Content-type: application/json
Authorization: Bearer xoxb-25xxxxxxx-xxxx
How would that work applied in my code above? I want to send a big python dictionary and would like to receive it formatted in the Slack channel. I tried adding it in multiple ways and deployment fails.
This is the documentation: https://api.slack.com/web
| [
"Bit late, but I hope this can help others who stumble upon this issue in the future.\nI think that you've misunderstood the documentation. The JSON support allows for accepting POST message bodies in JSON format, as only application/x-www-form-urlencoded format was supported earlier. Read more here.\nTo answer your question, you can try to send the dictionary by formatting it or in a code block as Slack API supports markdown.\nReference- Slack Text Formatting.\nSample Code-\nfrom slack_sdk import WebClient\nimport json\nclient = WebClient(token=\"xoxb........-\")\n\njson_message = {\n \"title\": \"Tom Sawyer\",\n \"author\": \"Twain, Mark\",\n \"year_written\": 1862,\n \"edition\": \"Random House\",\n \"price\": 7.75\n}\n\n# format and send as a text block\nformatted_text = f\"```{json.dumps(json_message, indent = 2)}```\"\nclient.chat_postMessage(channel = \"#general\", text = formatted_text)\n\n# format and send as a code block\nformatted_code_block = json.dumps(json_message, indent = 2)\nclient.chat_postMessage(channel = \"#general\", text = formatted_code_block)\n\nOutput-\n\n"
] | [
0
] | [] | [] | [
"google_cloud_functions",
"python",
"slack",
"slack_api"
] | stackoverflow_0073580490_google_cloud_functions_python_slack_slack_api.txt |
Q:
open cv can't open/read file: check file path/integrity
I am creating a face detection algorithm which should take in images from a folder as input but I get this error:
import dlib
import argparse
import cv2
import sys
import time
import process_dlib_boxes
# construct the argument parser
parser = argparse.ArgumentParser()
parser.add_argument('-i', '--input', default=r"C:\Users\awais\OneDrive\Documents\Greenwich Uni work\Face detec work\images folder",
help='path to the input image')
parser.add_argument('-u', '--upsample', type=float,
help='factor by which to upsample the image, default None, ' +
'pass 1, 2, 3, ...')
args = vars(parser.parse_args())
# read the image and convert to RGB color format
image = cv2.imread(args['input'])
image_cvt = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# path for saving the result image
save_name = f"outputs/{args['input'].split('/')[-1].split('.')[0]}_u{args['upsample']}.jpg"
# initilaize the Dlib face detector according to the upsampling value
detector = dlib.get_frontal_face_detector()
i get this error:
[ WARN:0@0.138] global D:\a\opencv-python\opencv-python\opencv\modules\imgcodecs\src\loadsave.cpp (239) cv::findDecoder imread_('C:\Users\awais\OneDrive\Documents\Greenwich Uni work\Face detec work\images folder'): can't open/read file: check file path/integrity
Traceback (most recent call last):
File "C:\Users\awais\OneDrive\Documents\Greenwich Uni work\Face detec work\face_det_image.py", line 20, in <module>
image_cvt = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'
A:
I assume the problem is that cv2.imread is returning None because it is unable to read the input image. This can happen if the file path provided to cv2.imread is incorrect or if the file does not exist.
You can try printing the value of args['input'] to make sure it is correct and points to a valid image file. You can also try using the os.path.isfile function to check if the file exists before trying to read it. Here is an example:
import dlib
import argparse
import cv2
import sys
import time
import os
import process_dlib_boxes
# construct the argument parser
parser = argparse.ArgumentParser()
parser.add_argument('-i', '--input', default=r"C:\Users\awais\OneDrive\Documents\Greenwich Uni work\Face detec work\images folder",
help='path to the input image')
parser.add_argument('-u', '--upsample', type=float,
help='factor by which to upsample the image, default None, ' +
'pass 1, 2, 3, ...')
args = vars(parser.parse_args())
# check if the input file exists
if not os.path.isfile(args['input']):
print(f"Error: The file {args['input']} does not exist")
sys.exit(1)
# read the image and convert to RGB color format
image = cv2.imread(args['input'])
image_cvt = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# path for saving the result image
save_name = f"outputs/{args['input'].split('/')[-1].split('.')[0]}_u{args['upsample']}.jpg"
# initilaize the Dlib face detector according to the upsampling value
detector = dlib.get_frontal_face_detector()
A:
imread() is meant to read single image files, not entire folders.
You must pass a path to a file, not a path to an entire directory.
You passed a path to a directory. That is why imread() failed.
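A minimal sketch of looping over the files in that folder instead (assuming the folder contains only readable image files):
import os
import cv2

folder = r"C:\Users\awais\OneDrive\Documents\Greenwich Uni work\Face detec work\images folder"
for name in os.listdir(folder):
    path = os.path.join(folder, name)   # full path to one image file
    image = cv2.imread(path)
    if image is None:                   # skip anything OpenCV cannot decode
        continue
    image_cvt = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)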
| open cv can't open/read file: check file path/integrity | I am creating a face detection algorithm which should take in images from a folder as input but I get this error:
import dlib
import argparse
import cv2
import sys
import time
import process_dlib_boxes
# construct the argument parser
parser = argparse.ArgumentParser()
parser.add_argument('-i', '--input', default=r"C:\Users\awais\OneDrive\Documents\Greenwich Uni work\Face detec work\images folder",
help='path to the input image')
parser.add_argument('-u', '--upsample', type=float,
help='factor by which to upsample the image, default None, ' +
'pass 1, 2, 3, ...')
args = vars(parser.parse_args())
# read the image and convert to RGB color format
image = cv2.imread(args['input'])
image_cvt = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# path for saving the result image
save_name = f"outputs/{args['input'].split('/')[-1].split('.')[0]}_u{args['upsample']}.jpg"
# initilaize the Dlib face detector according to the upsampling value
detector = dlib.get_frontal_face_detector()
i get this error:
[ WARN:0@0.138] global D:\a\opencv-python\opencv-python\opencv\modules\imgcodecs\src\loadsave.cpp (239) cv::findDecoder imread_('C:\Users\awais\OneDrive\Documents\Greenwich Uni work\Face detec work\images folder'): can't open/read file: check file path/integrity
Traceback (most recent call last):
File "C:\Users\awais\OneDrive\Documents\Greenwich Uni work\Face detec work\face_det_image.py", line 20, in <module>
image_cvt = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'
| [
"I assume the problem is that cv2.imread is returning None because it is unable to read the input image. This can happen if the file path provided to cv2.imread is incorrect or if the file does not exist.\nYou can try printing the value of args['input'] to make sure it is correct and points to a valid image file. You can also try using the os.path.isfile function to check if the file exists before trying to read it. Here is an example:\nimport dlib\nimport argparse\nimport cv2\nimport sys\nimport time\nimport os\n\nimport process_dlib_boxes\n\n# construct the argument parser\nparser = argparse.ArgumentParser()\nparser.add_argument('-i', '--input', default=r\"C:\\Users\\awais\\OneDrive\\Documents\\Greenwich Uni work\\Face detec work\\images folder\",\n help='path to the input image')\nparser.add_argument('-u', '--upsample', type=float,\n help='factor by which to upsample the image, default None, ' +\n 'pass 1, 2, 3, ...')\nargs = vars(parser.parse_args())\n\n# check if the input file exists\nif not os.path.isfile(args['input']):\n print(f\"Error: The file {args['input']} does not exist\")\n sys.exit(1)\n\n# read the image and convert to RGB color format\nimage = cv2.imread(args['input'])\nimage_cvt = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n# path for saving the result image\nsave_name = f\"outputs/{args['input'].split('/')[-1].split('.')[0]}_u{args['upsample']}.jpg\"\n# initilaize the Dlib face detector according to the upsampling value\ndetector = dlib.get_frontal_face_detector()\n\n",
"imread() is meant to read single image files, not entire folders.\nYou must pass a path to a file, not a path to an entire directory.\nYou passed a path to a directory. That is why imread() failed.\n"
] | [
0,
0
] | [] | [] | [
"cv2",
"file",
"path",
"python"
] | stackoverflow_0074677763_cv2_file_path_python.txt |
Q:
Create a nested list object with an arbitrary depth
I want to create a nested list object: the user enters a positive integer, and that many empty lists are added to the initial list, each nested inside the previous one. The second list should be added to the first list, the third list to the second, the fourth to the third, and so on.
How can I do this using Python?
An example is shown in the picture in the original post.
A:
a = []
for _ in range(x):
a = [a]
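For example, with x = 3 (where x would come from something like int(input())), the loop wraps the initial empty list three times:
x = 3
a = []
for _ in range(x):
    a = [a]
print(a)  # prints [[[[]]]]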
| Create a nested list object with an arbitrary depth | I want to create a nested list object. In this way, the user enters a positive integer, then add empty lists to the initial list, equal to the number entered by the user. The second list should be added to the first list, the third list should be added to the second list, the fourth list should be added to the third list, and so on.
How can I do this using Python?
Example in the picture:
| [
"a = []\nfor _ in range(x):\n a = [a]\n\n"
] | [
3
] | [] | [] | [
"python"
] | stackoverflow_0074678460_python.txt |
Q:
Rollback to a specific version of a Python package in Google Colab
I've read that
rolling back rpy2 version to v3.4.2 fixed the problem
(in this case Rpy2 Error depends on execution method: NotImplementedError: Conversion "rpy2py" not defined, but it could be any problem)
How can I change the installed version of the python package rpy2 to version v3.4.2 in Google Colab? I know the command !pip install rpy2, but how can I chose a specific version and is it a problem if there is already a newer version installed?
In other words: How can I downgrade the version of a python package in Google Colab?
A:
!pip install -Iv rpy2==3.4.2
worked for me as explained in https://stackoverflow.com/a/5226504/7735095
| Rollback to specific version of a python package in Goolge Colab | I've read that
rolling back rpy2 version to v3.4.2 fixed the problem
(in this case Rpy2 Error depends on execution method: NotImplementedError: Conversion "rpy2py" not defined, but it could be any problem)
How can I change the installed version of the python package rpy2 to version v3.4.2 in Google Colab? I know the command !pip install rpy2, but how can I chose a specific version and is it a problem if there is already a newer version installed?
In other words: How can I downgrade the version of a python package in Google Colab?
| [
"!pip install -Iv rpy2==3.4.2\n\nworked for me as explained in https://stackoverflow.com/a/5226504/7735095\n"
] | [
0
] | [] | [] | [
"google_colaboratory",
"python",
"version"
] | stackoverflow_0074678556_google_colaboratory_python_version.txt |
Q:
Mypy is not able to find an attribute defined in the parent NamedTuple
In my project I'm using Fava. Fava, is using Beancount. I have configured Mypy to read the stubs locally by setting mypy_path in mypy.ini. Mypy is able to read the config. So far so good.
Consider this function of mine
1 def get_units(postings: list[Posting]):
2 numbers = []
3 for posting in postings:
4 numbers.append(posting.units.number)
5 return numbers
When I run mypy src I get the following error
report.py:4 error: Item "type" of "Union[Amount, Type[MISSING]]" has no attribute "number" [union-attr]
When I check the defined stub here I can see the type of units which is Amount. Now, Amount is inheriting number from its parent _Amount. Going back to the stubs in Fava I can see the type here.
My question is why mypy is not able to find the attribute number although it is defined in the stubs?
A:
The type of units isn't Amount:
class Posting(NamedTuple):
account: Account
units: Union[Amount, Type[MISSING]]
It's Union[Amount, Type[MISSING]], exactly like the error message says. And if it's Type[MISSING] there is no number attribute, exactly like the error message says. If you were to run this code and units were in fact MISSING, it would raise an AttributeError on trying to access that number attribute.
(Aside: I'm not familiar with beancount, but this seems like a weird interface to me -- the more idiomatic thing IMO would be to just make it an Optional[Amount] with None representing the "missing" case.)
You need to change your code to account for the MISSING possibility so that mypy knows you're aware of it (and also so that readers of your code can see that possibility before it bites them in the form of an AttributeError). Something like:
for posting in postings:
assert isinstance(posting.units, Amount), "Units are MISSING!"
numbers.append(posting.units.number)
Obviously if you want your code to do something other than raise an AssertionError on MISSING units, you should write your code to do that instead of assert.
If you want to just assume that it's an Amount and raise an AttributeError at runtime if it's not, use typing.cast to tell mypy that you want to consider it to be an Amount even though the type stub says otherwise:
for posting in postings:
# posting.units might be MISSING, but let's assume it's not
numbers.append(cast(Amount, posting.units).number)
| Mypy is not able to find an attribute defined in the parent NamedTuple | In my project I'm using Fava. Fava, is using Beancount. I have configured Mypy to read the stubs locally by setting mypy_path in mypy.ini. Mypy is able to read the config. So far so good.
Consider this function of mine
1 def get_units(postings: list[Posting]):
2 numbers = []
3 for posting in postings:
4 numbers.append(posting.units.number)
5 return numbers
When I run mypy src I get the following error
report.py:4 error: Item "type" of "Union[Amount, Type[MISSING]]" has no attribute "number" [union-attr]
When I check the defined stub here I can see the type of units which is Amount. Now, Amount is inheriting number from its parent _Amount. Going back to the stubs in Fava I can see the type here.
My question is why mypy is not able to find the attribute number although it is defined in the stubs?
| [
"The type of units isn't Amount:\nclass Posting(NamedTuple):\n account: Account\n units: Union[Amount, Type[MISSING]]\n\nIt's Union[Amount, Type[MISSING]], exactly like the error message says. And if it's Type[MISSING] there is no number attribute, exactly like the error message says. If you were to run this code and units were in fact MISSING, it would raise an AttributeError on trying to access that number attribute.\n(Aside: I'm not familiar with beancount, but this seems like a weird interface to me -- the more idiomatic thing IMO would be to just make it an Optional[Amount] with None representing the \"missing\" case.)\nYou need to change your code to account for the MISSING possibility so that mypy knows you're aware of it (and also so that readers of your code can see that possibility before it bites them in the form of an AttributeError). Something like:\nfor posting in postings:\n assert isinstance(posting.units, Amount), \"Units are MISSING!\"\n numbers.append(posting.units.number)\n\nObviously if you want your code to do something other than raise an AssertionError on MISSING units, you should write your code to do that instead of assert.\nIf you want to just assume that it's an Amount and raise an AttributeError at runtime if it's not, use typing.cast to tell mypy that you want to consider it to be an Amount even though the type stub says otherwise:\nfor posting in postings:\n # posting.units might be MISSING, but let's assume it's not\n numbers.append(cast(Amount, posting.units).number)\n\n"
] | [
3
] | [] | [] | [
"mypy",
"python"
] | stackoverflow_0074678602_mypy_python.txt |
Q:
How to assign dynamic variables calling from a function in python
I have a function which does a bunch of stuff and returns pandas dataframes. The dataframe is extracted from a dynamic list and hence I'm using the below method to return these dataframes.
As soon as I call the function (code in the 2nd block), my Jupyter notebook just runs the cell forever, as if stuck in an infinite loop. Any idea how I can do this more efficiently?
def funct(x):
some code which creates multiple dataframes
i = 0
for k in range(len(dynamic_list)):
i += 1
return globals()["df" + str(i)]
The next thing I do is call the function and try to assign it dynamically,
i = 0
for k in range(len(dynamic_list)):
i += 1
globals()["new_df" + str(i)] = funct(x)
I have tried returning selective dataframes from first function and it works just fine, like,
def funct(x):
some code returning df1, df2, df3....., df_n
return df1, df2
new_df1, new_df2 = funct(x)
A:
for each dataframe object your code is creating you can simply add it to a dictionary and set the key from your dynamic list.
Here is a simple example:
import pandas as pd
test_data = {"key1":[1, 2, 3], "key2":[1, 2, 3], "key3":[1, 2, 3]}
df = pd.DataFrame.from_dict(test_data)
dataframe example:
key1 key2 key3
0 1 1 1
1 2 2 2
2 3 3 3
I have used a fixed list of values to focus on but this can be dynamic based on however you are creating them.
values_of_interest_list = [1, 3]
Now we can do whatever we want to do with the dataframe, in this instance, I want to filter only data where we have a value from our list.
data_dict = {}
for value_of_interest in values_of_interest_list:
x_df = df[df["key1"] == value_of_interest]
data_dict[value_of_interest] = x_df
To see what we have, we can print out the created dictionary that contains the key we have assigned and the associated dataframe object.
for key, value in data_dict.items():
print(type(key))
print(type(value))
Which returns
<class 'int'>
<class 'pandas.core.frame.DataFrame'>
<class 'int'>
<class 'pandas.core.frame.DataFrame'>
Full sample code is below:
import pandas as pd
test_data = {"key1":[1, 2, 3], "key2":[1, 2, 3], "key3":[1, 2, 3]}
df = pd.DataFrame.from_dict(test_data)
values_of_interest_list = [1, 3]
# Dictionary for data
data_dict = {}
# Loop though the values of interest
for value_of_interest in values_of_interest_list:
x_df = df[df["key1"] == value_of_interest]
data_dict[value_of_interest] = x_df
for key, value in data_dict.items():
print(type(key))
print(type(value))
| How to assign dynamic variables calling from a function in python | I have a function which does a bunch of stuff and returns pandas dataframes. The dataframe is extracted from a dynamic list and hence I'm using the below method to return these dataframes.
As soon as I call the function (code in 2nd block), my jupyter notebook just runs the cell infinitely like some infinity loop. Any idea how I can do this more efficiently.
funct(x):
some code which creates multiple dataframes
i = 0
for k in range(len(dynamic_list)):
i += 1
return globals()["df" + str(i)]
The next thing I do is call the function and try to assign it dynamically,
i = 0
for k in range(len(dynamic_list)):
i += 1
globals()["new_df" + str(i)] = funct(x)
I have tried returning selective dataframes from first function and it works just fine, like,
funct(x):
some code returning df1, df2, df3....., df_n
return df1, df2
new_df1, new_df2 = funct(x)
| [
"for each dataframe object your code is creating you can simply add it to a dictionary and set the key from your dynamic list.\nHere is a simple example:\nimport pandas as pd\n\ntest_data = {\"key1\":[1, 2, 3], \"key2\":[1, 2, 3], \"key3\":[1, 2, 3]}\ndf = pd.DataFrame.from_dict(test_data)\n\ndataframe example:\n key1 key2 key3\n0 1 1 1\n1 2 2 2\n2 3 3 3\n\nI have used a fixed list of values to focus on but this can be dynamic based on however you are creating them.\nvalues_of_interest_list = [1, 3]\n\nNow we can do whatever we want to do with the dataframe, in this instance, I want to filter only data where we have a value from our list.\ndata_dict = {}\n\nfor value_of_interest in values_of_interest_list:\n\n x_df = df[df[\"key1\"] == value_of_interest]\n\n data_dict[value_of_interest] = x_df\n\nTo see what we have, we can print out the created dictionary that contains the key we have assigned and the associated dataframe object.\nfor key, value in data_dict.items():\n print(type(key))\n print(type(value))\n\nWhich returns\n<class 'int'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'int'>\n<class 'pandas.core.frame.DataFrame'>\n\nFull sample code is below:\nimport pandas as pd\n\ntest_data = {\"key1\":[1, 2, 3], \"key2\":[1, 2, 3], \"key3\":[1, 2, 3]}\ndf = pd.DataFrame.from_dict(test_data)\n\nvalues_of_interest_list = [1, 3]\n\n# Dictionary for data\ndata_dict = {}\n\n# Loop though the values of interest\nfor value_of_interest in values_of_interest_list:\n\n x_df = df[df[\"key1\"] == value_of_interest]\n\n data_dict[value_of_interest] = x_df\n\n\nfor key, value in data_dict.items():\n print(type(key))\n print(type(value))\n\n"
] | [
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074672341_pandas_python.txt |
Q:
How to perform a mathematical operation on two instances of object in Django?
I want to add two numbers from two different objects.
Here is a simplified version. I have two integers and I multiply those to get multiplied .
models.py:
class ModelA(models.Model):
number_a = models.IntegerField(default=1, null=True, blank=True)
number_b = models.IntegerField(default=1, null=True, blank=True)
def multiplied(self):
return self.number_a * self.number_b
views.py:
@login_required
def homeview(request):
numbers = ModelA.objects.all()
context = {
'numbers': numbers,
}
return TemplateResponse...
What I'm trying to do is basically multiplied + multiplied in the template but I simply can't figure out how to do it since I first have to loop through the objects.
So if I had 2 instances of ModelA and two 'multiplied' values of 100 I want to display 200 in the template. Is this possible?
A:
In your template, when you do a forloop over the numbers variable, you can directly access properties, functions and attributes.
So to access the value you want I guess it would look something like this, simplified:
{% for number in numbers %}
{{ number.multiplied }}
{% endfor %}
Hope that makes sense?
However, please take note that this is not the most efficient method.
We can make Django ask the SQL server to do the heavy lifting on the calculation side of things, so if you want to be really clever and optimise your view, you can comment out your multiplied function and replace it. You still access it in the template the same way I described above, but we need to change the code in your view slightly, like so:
from django.db.models import F

numbers = ModelA.objects.annotate(multiplied=F('number_a') * F('number_b'))
As described loosely in haduki's answer, this offloads the calculation to the SQL server by doing the multiplication inside the SQL query itself, which for all intents and purposes is almost always going to be substantially faster than calling a Python function on each object.
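And if what you ultimately want is the single combined total across all instances (the 200 in the question), a minimal sketch of computing that in the database could look like this (using Django's Sum aggregate over the same F-expression product):
from django.db.models import F, Sum

total = ModelA.objects.aggregate(total=Sum(F('number_a') * F('number_b')))['total']
# pass it to the template, e.g. context = {'numbers': numbers, 'total': total}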
A:
You can try doing that with aggregate
from django.db.models import Sum
ModelA.objects.aggregate(Sum('multiplied'))
If that does not suit you just use aggregate on each field and then add them together.
A:
The good practice is always to avoid logic on the template. It would be better to loop at the view and add calculated value to context:
def homeview(request):
queryset = ModelA.objects.all()
multipliers_addition = 0
for obj in queryset:
multipliers_addition += obj.multiplied()
context = {
'addition': multipliers_addition,
}
return render(request, 'multiply.html', context)
| How to perform a mathematical operation on two instances of object in Django? | I want to add two numbers from two different objects.
Here is a simplified version. I have two integers and I multiply those to get multiplied .
models.py:
class ModelA(models.Model):
number_a = models.IntegerField(default=1, null=True, blank=True)
number_b = models.IntegerField(default=1, null=True, blank=True)
def multiplied(self):
return self.number_a * self.number_b
views.py:
@login_required
def homeview(request):
numbers = ModelA.objects.all()
context = {
'numbers': numbers,
}
return TemplateResponse...
What I'm trying to do is basically multiplied + multiplied in the template but I simply can't figure out how to do it since I first have to loop through the objects.
So if I had 2 instances of ModelA and two 'multiplied' values of 100 I want to display 200 in the template. Is this possible?
| [
"In your template, when you do a forloop over the numbers variable, you can directly access properties, functions and attributes.\nSo to access the value you want I guess it would look something like this, simplified:\n{% for number in numbers %}\n {{ number.multiplied }}\n{% endfor %}\n\nHope that makes sense?\nHowever, please take note that this is not the most efficient method.\nWe can make Django ask the SQL server to do the heavy lifting on the calculation side of things, so if you want to be really clever and optimise your view, you can comment out your multiplied function and replace then, you still access it in the template the same way I described above, but we needs to change the code in your view slightly like so:\nnumbers = ModelA.objects.aggregate(Sum('number_a', 'number_b'))\n\nAs described loosely in haduki's answer. This offloads the calculation to the SQL server by crafting an SQL query which uses the SUM SQL database function, which for all intents and purposes is almost always going to be substantially faster that having a function on the object.\n",
"You can try doing that with aggregate\nfrom django.db.models import Sum\n\nModelA.objects.aggregate(Sum('multiplied'))\n\nIf that does not suit you just use aggregate on each field and then add them together.\n",
"The good practice is always to avoid logic on the template. It would be better to loop at the view and add calculated value to context:\ndef homeview(request):\n queryset = ModelA.objects.all()\n multipliers_addition = 0\n for obj in queryset:\n multipliers_addition += obj.multiplied()\n\n context = {\n 'addition': multipliers_addition,\n }\n\n return render(request, 'multiply.html', context)\n\n"
] | [
2,
1,
1
] | [] | [] | [
"django",
"django_templates",
"python"
] | stackoverflow_0074678108_django_django_templates_python.txt |
Q:
File is not showing when applying rasterio.open()
Here is my code
refPath = '/Users/admin/Downloads/Landsat8/'
ext = '_NDWI.tif'
for file in sorted(os.listdir(refPath)):
if file.endswith(ext):
print(file)
ndwiopen = rs.open(file)
ndwiread = ndwiopen.read(1)
Here is the error
2014_NDWI.tif
---------------------------------------------------------------------------
CPLE_OpenFailedError Traceback (most recent call last)
File rasterio/_base.pyx:302, in rasterio._base.DatasetBase.__init__()
File rasterio/_base.pyx:213, in rasterio._base.open_dataset()
File rasterio/_err.pyx:217, in rasterio._err.exc_wrap_pointer()
CPLE_OpenFailedError: 2014_NDWI.tif: No such file or directory
During handling of the above exception, another exception occurred:
RasterioIOError Traceback (most recent call last)
Input In [104], in <cell line: 33>()
34 if file.endswith(ext):
35 print(file)
---> 36 ndwiopen = rs.open(file)
38 ndwiread = ndwiopen.read(1)
39 plt.figure(figsize = (20, 15))
File /Applications/anaconda3/lib/python3.9/site-packages/rasterio/env.py:442, in ensure_env_with_credentials.<locals>.wrapper(*args, **kwds)
439 session = DummySession()
441 with env_ctor(session=session):
--> 442 return f(*args, **kwds)
File /Applications/anaconda3/lib/python3.9/site-packages/rasterio/__init__.py:277, in open(fp, mode, driver, width, height, count, crs, transform, dtype, nodata, sharing, **kwargs)
274 path = _parse_path(raw_dataset_path)
276 if mode == "r":
--> 277 dataset = DatasetReader(path, driver=driver, sharing=sharing, **kwargs)
278 elif mode == "r+":
279 dataset = get_writer_for_path(path, driver=driver)(
280 path, mode, driver=driver, sharing=sharing, **kwargs
281 )
File rasterio/_base.pyx:304, in rasterio._base.DatasetBase.__init__()
RasterioIOError: 2014_NDWI.tif: No such file or directory
As it is shown that the file is getting printed as an output but that can not be opened by the RasterIO (as rs).
Can't understand what is missing in the script.
A:
Unsure if this is your exact problem, but I rammed my head against this same exact error for 5-10 hours before I realized that the '.tif' file I was trying to read had an extension in all caps, as in '.TIF'. This is apparently the default for the Landsat 8 image bands that I was working with.
I was doing similar concatenation but my string would result in 'filename.tif' instead of the correct 'filename.TIF', so rasterio would be unable to read it. It was really frustrating, so I figured I would share how I was able to solve it since you have not yet received any replies, even though I cannot know if this was your issue. When I searched this error, this post is one of the first and most similar that would pop up but was unanswered, so I thought I would post in case any one with my issue might stumble across it as well (or, for myself when I inevitably forget in a few months how I had solved this).
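A minimal sketch of a case-insensitive check (and of joining the directory onto the file name, which the loop in the question also needs, since the error shows rasterio being handed the bare file name):
import os
import rasterio as rs

refPath = '/Users/admin/Downloads/Landsat8/'
ext = '_NDWI.tif'
for file in sorted(os.listdir(refPath)):
    if file.lower().endswith(ext.lower()):               # matches both .tif and .TIF
        ndwiopen = rs.open(os.path.join(refPath, file))  # full path, not just the name
        ndwiread = ndwiopen.read(1)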
| File is not showing when applying rasterio.open() | Here is my code
refPath = '/Users/admin/Downloads/Landsat8/'
ext = '_NDWI.tif'
for file in sorted(os.listdir(refPath)):
if file.endswith(ext):
print(file)
ndwiopen = rs.open(file)
ndwiread = ndwiopen.read(1)
Here is the error
2014_NDWI.tif
---------------------------------------------------------------------------
CPLE_OpenFailedError Traceback (most recent call last)
File rasterio/_base.pyx:302, in rasterio._base.DatasetBase.__init__()
File rasterio/_base.pyx:213, in rasterio._base.open_dataset()
File rasterio/_err.pyx:217, in rasterio._err.exc_wrap_pointer()
CPLE_OpenFailedError: 2014_NDWI.tif: No such file or directory
During handling of the above exception, another exception occurred:
RasterioIOError Traceback (most recent call last)
Input In [104], in <cell line: 33>()
34 if file.endswith(ext):
35 print(file)
---> 36 ndwiopen = rs.open(file)
38 ndwiread = ndwiopen.read(1)
39 plt.figure(figsize = (20, 15))
File /Applications/anaconda3/lib/python3.9/site-packages/rasterio/env.py:442, in ensure_env_with_credentials.<locals>.wrapper(*args, **kwds)
439 session = DummySession()
441 with env_ctor(session=session):
--> 442 return f(*args, **kwds)
File /Applications/anaconda3/lib/python3.9/site-packages/rasterio/__init__.py:277, in open(fp, mode, driver, width, height, count, crs, transform, dtype, nodata, sharing, **kwargs)
274 path = _parse_path(raw_dataset_path)
276 if mode == "r":
--> 277 dataset = DatasetReader(path, driver=driver, sharing=sharing, **kwargs)
278 elif mode == "r+":
279 dataset = get_writer_for_path(path, driver=driver)(
280 path, mode, driver=driver, sharing=sharing, **kwargs
281 )
File rasterio/_base.pyx:304, in rasterio._base.DatasetBase.__init__()
RasterioIOError: 2014_NDWI.tif: No such file or directory
As it is shown that the file is getting printed as an output but that can not be opened by the RasterIO (as rs).
Can't understand what is missing in the script.
| [
"Unsure if this is your exact problem, but I rammed my head against this same exact error for 5-10 hours before I realized that the '.tif' file I was trying to read had an extension in all caps, as in '.TIF'. This is apparently the default for the Landsat 8 image bands that I was working with.\nI was doing similar concatenation but my string would result in 'filename.tif' instead of the correct 'filename.TIF', so rasterio would be unable to read it. It was really frustrating, so I figured I would share how I was able to solve it since you have not yet received any replies, even though I cannot know if this was your issue. When I searched this error, this post is one of the first and most similar that would pop up but was unanswered, so I thought I would post in case any one with my issue might stumble across it as well (or, for myself when I inevitably forget in a few months how I had solved this).\n"
] | [
0
] | [] | [] | [
"image",
"python",
"rasterio"
] | stackoverflow_0073506395_image_python_rasterio.txt |
Q:
Deleting a key in a dictionary submission
How would I go about deleting a specified key within a dictionary based on the following condition? (The condition is shown in the screenshot "Deleting a key in a dictionary" in the original post.)
| Deleting a key in a dictionary submission | How would I go about in deleting a specified key within a Dictionary based on the following condition?[enter image description here
Deleting a key in a dictionary
| [] | [] | [
"thisdict = {\n \"brand\": \"Ford\",\n \"model\": \"Mustang\",\n \"year\": 1964\n}\ndel thisdict[\"model\"]\nprint(thisdict)\n\n"
] | [
-3
] | [
"python"
] | stackoverflow_0074678640_python.txt |
Q:
FLL Python - running two commands at once
I am helping coach First Lego League (FLL) and this year with the SPIKE robot they allow us to use Python to command the robot.
We used to be able (using Scratch-style coding) to have the robot do two things at once, like drive forward and raise an attachment.
But with Python everything is sequential. How could we have it do both of these at once, or send one function and not have it wait for a response before processing the next function?
A:
To run two processes at once in python, you can use the multiprocessing module to create separate processes for each command and run them simultaneously.
import multiprocessing
def run_command1():
# code for command1
def run_command2():
# code for command2
if __name__ == '__main__':
p1 = multiprocessing.Process(target=run_command1)
p2 = multiprocessing.Process(target=run_command2)
p1.start()
p2.start()
This will create two separate processes for the run_command1() and run_command2() functions, and run them simultaneously. Consult the python documentation for more information on the multiprocessing module and managing processes in python.
| FLL Python - running two commands at once | I am helping coach First Lego League (FLL) and this year with the SPIKE robot they allow us to use Python to command the robot.
We used to be able (using Scratch style coding) to have the robot do two things at once, like drive forward and raise attachement.
But with Python everything is sequential. How could we have it do both of these at once, or send one function and not have it wait for a response before processing the next function?
| [
"To run two processes at once in python, you can use the multiprocessing module to create separate processes for each command and run them simultaneously.\nimport multiprocessing\n\ndef run_command1():\n # code for command1\n\ndef run_command2():\n # code for command2\n\nif __name__ == '__main__':\n p1 = multiprocessing.Process(target=run_command1)\n p2 = multiprocessing.Process(target=run_command2)\n\n p1.start()\n p2.start()\n\nThis will create two separate processes for the run_command1() and run_command2() functions, and run them simultaneously. Consult the python documentation for more information on the multiprocessing module and managing processes in python.\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074678613_python.txt |
Q:
when I go to scrapy to convert my web scraping data to csv! No matter how many rows I have. In just one row, the data of all rows is being inserted
import scrapy
from ..items import AmazondawinItem
class AmazonspiderSpider(scrapy.Spider):
name = 'amazon'
pagenumber = 3
allowed_domains = ['amazon.com']
start_urls = [
'https://www.amazon.com/s?k=laptop&i=computers&crid=27GFGJVF4KNRP&sprefix=%2Ccomputers%2C725&ref=nb_sb_ss_recent_1_0_recent'
]
def parse(self, response):
items = AmazondawinItem()
name = response.css('.a-size-medium::text').extract()
try:
old_price = response.css('.a-spacing-top-micro .a-text-price span::text').extract()
except:
old_price = None
price = response.css('.a-spacing-top-micro .a-price-whole::text').extract()
try:
review = response.css('.s-link-style .s-underline-text::text').extract()
except:
review = None
imagelink = response.css('.s-image::attr(src)').extract()
items['name'] = name
items['old_price'] = old_price
items['price'] = price
items['review'] = review
items['imagelink'] = imagelink
# description =
# ram =
# brand =
# cpu_model =
yield items
When I export my scraped data to a CSV file (or any other format), the data from all rows ends up in a single row, no matter how many rows there should be. For example, if I should have 200 rows in one column, I instead get all 200 rows' worth of data in a single row.
A:
It's because you're yielding all the items instead of yielding each item separately.
A not so nice solution:
import scrapy
# from ..items import AmazondawinItem
class AmazonspiderSpider(scrapy.Spider):
name = 'amazon'
pagenumber = 3
allowed_domains = ['amazon.com']
start_urls = [
'https://www.amazon.com/s?k=laptop&i=computers&crid=27GFGJVF4KNRP&sprefix=%2Ccomputers%2C725&ref=nb_sb_ss_recent_1_0_recent'
]
def parse(self, response):
# items = AmazondawinItem()
name = response.css('.a-size-medium::text').extract()
try:
old_price = response.css('.a-spacing-top-micro .a-text-price span::text').extract()
except:
old_price = None
price = response.css('.a-spacing-top-micro .a-price-whole::text').extract()
try:
review = response.css('.s-link-style .s-underline-text::text').extract()
except:
review = None
imagelink = response.css('.s-image::attr(src)').extract()
# items = dict()
# items['name'] = name
# items['old_price'] = old_price
# items['price'] = price
# items['review'] = review
# items['imagelink'] = imagelink
items = dict()
for (items['name'], items['old_price'], items['price'], items['review'], items['imagelink']) in zip(name, old_price, price, review, imagelink):
yield items
# description =
# ram =
# brand =
# cpu_model =
# yield items
A better solution:
Remove the try/except; the get() function will return None if no value is found, and it's better not to use try/except like this in spiders anyway.
Get the items one by one.
Just replace the dict part with your item, just make sure it's inside the loop.
import scrapy
# from ..items import AmazondawinItem
class AmazonspiderSpider(scrapy.Spider):
name = 'amazon'
pagenumber = 3
allowed_domains = ['amazon.com']
start_urls = [
'https://www.amazon.com/s?k=laptop&i=computers&crid=27GFGJVF4KNRP&sprefix=%2Ccomputers%2C725&ref=nb_sb_ss_recent_1_0_recent'
]
def parse(self, response):
for row in response.css('div.s-result-list div.s-result-item.s-asin'):
# items = AmazondawinItem()
items = dict()
items['name'] = row.css('.a-size-medium::text').get()
items['old_price'] = row.css('.a-spacing-top-micro .a-text-price span::text').get()
            items['price'] = row.css('.a-spacing-top-micro .a-price-whole::text').get()
items['review'] = row.css('.s-link-style .s-underline-text::text').get()
items['imagelink'] = row.css('.s-image::attr(src)').get()
yield items
# description =
# ram =
# brand =
# cpu_model =
# yield items
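Once each result is yielded as its own item, Scrapy's feed exports write one CSV row per item, either via scrapy crawl amazon -O output.csv (in recent Scrapy versions) or through the FEEDS setting. A minimal sketch of the settings-based variant (the output filename is just an example):
class AmazonspiderSpider(scrapy.Spider):
    name = 'amazon'
    # one CSV row is written per yielded item
    custom_settings = {
        'FEEDS': {
            'output.csv': {'format': 'csv'},
        },
    }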
| when I go to scrapy to convert my web scraping data to csv! No matter how many rows I have. In just one row, the data of all rows is being inserted | import scrapy
from ..items import AmazondawinItem
class AmazonspiderSpider(scrapy.Spider):
name = 'amazon'
pagenumber = 3
allowed_domains = ['amazon.com']
start_urls = [
'https://www.amazon.com/s?k=laptop&i=computers&crid=27GFGJVF4KNRP&sprefix=%2Ccomputers%2C725&ref=nb_sb_ss_recent_1_0_recent'
]
def parse(self, response):
items = AmazondawinItem()
name = response.css('.a-size-medium::text').extract()
try:
old_price = response.css('.a-spacing-top-micro .a-text-price span::text').extract()
except:
old_price = None
price = response.css('.a-spacing-top-micro .a-price-whole::text').extract()
try:
review = response.css('.s-link-style .s-underline-text::text').extract()
except:
review = None
imagelink = response.css('.s-image::attr(src)').extract()
items['name'] = name
items['old_price'] = old_price
items['price'] = price
items['review'] = review
items['imagelink'] = imagelink
# description =
# ram =
# brand =
# cpu_model =
yield items
When I export my scraped data to a CSV file (or any other format), the data from all rows ends up in a single row, no matter how many rows there should be. For example, if I should have 200 rows in one column, I instead get all 200 rows' worth of data in a single row.
| [
"It's because you're yielding all the items instead of yielding each item separately.\nA not so nice solution:\nimport scrapy\n# from ..items import AmazondawinItem\n\n\nclass AmazonspiderSpider(scrapy.Spider):\n name = 'amazon'\n pagenumber = 3\n allowed_domains = ['amazon.com']\n start_urls = [\n 'https://www.amazon.com/s?k=laptop&i=computers&crid=27GFGJVF4KNRP&sprefix=%2Ccomputers%2C725&ref=nb_sb_ss_recent_1_0_recent'\n ]\n\n def parse(self, response):\n # items = AmazondawinItem()\n\n name = response.css('.a-size-medium::text').extract()\n try:\n old_price = response.css('.a-spacing-top-micro .a-text-price span::text').extract()\n except:\n old_price = None\n price = response.css('.a-spacing-top-micro .a-price-whole::text').extract()\n try:\n review = response.css('.s-link-style .s-underline-text::text').extract()\n except:\n review = None\n\n imagelink = response.css('.s-image::attr(src)').extract()\n # items = dict()\n # items['name'] = name\n # items['old_price'] = old_price\n # items['price'] = price\n # items['review'] = review\n # items['imagelink'] = imagelink\n\n items = dict()\n for (items['name'], items['old_price'], items['price'], items['review'], items['imagelink']) in zip(name, old_price, price, review, imagelink):\n yield items\n # description =\n # ram =\n # brand =\n # cpu_model =\n # yield items\n\nA better solution:\n\nRemove the try except, get() function will return none if no value was found. It's better not to use it in spiders anyway.\nGet the items one by one.\nJust replace the dict part with your item, just make sure it's inside the loop.\n\nimport scrapy\n# from ..items import AmazondawinItem\n\n\nclass AmazonspiderSpider(scrapy.Spider):\n name = 'amazon'\n pagenumber = 3\n allowed_domains = ['amazon.com']\n start_urls = [\n 'https://www.amazon.com/s?k=laptop&i=computers&crid=27GFGJVF4KNRP&sprefix=%2Ccomputers%2C725&ref=nb_sb_ss_recent_1_0_recent'\n ]\n\n def parse(self, response):\n for row in response.css('div.s-result-list div.s-result-item.s-asin'):\n # items = AmazondawinItem()\n items = dict()\n items['name'] = row.css('.a-size-medium::text').get()\n items['old_price'] = row.css('.a-spacing-top-micro .a-text-price span::text').get()\n items['price'] = response.css('.a-spacing-top-micro .a-price-whole::text').get()\n items['review'] = row.css('.s-link-style .s-underline-text::text').get()\n items['imagelink'] = row.css('.s-image::attr(src)').get()\n yield items\n # description =\n # ram =\n # brand =\n # cpu_model =\n # yield items\n\n"
] | [
0
] | [] | [] | [
"python",
"scrapy",
"web_crawler",
"web_scraping"
] | stackoverflow_0074671535_python_scrapy_web_crawler_web_scraping.txt |
Q:
How to disable Neptune callback in transformers trainer runs?
After installing Neptune.ai for occasional ML experiments logging, it became included by default into the list of callbacks in all transformers.trainer runs. As a result, it requires proper initialisation with token or else throws NeptuneMissingConfiguration error, demanding token and project name.
This is really annoying, I'd prefer Neptune callback to limit itself to warning or just have it disabled if no token is provided.
Unfortunately there is no obvious way to disable this callback, short of uninstalling Neptune.ai altogether. The doc page at https://huggingface.co/docs/transformers/main_classes/callback states that this callback is enabled by default and gives no way to disable it (unlike some other callbacks that can be disabled by environment variable).
Question: how can the Neptune callback be disabled on a per-run basis?
A:
To disable Neptune callback in transformers trainer runs, you can pass the --no-neptune flag to the trainer.train() function.
trainer = Trainer(
model=model,
args=args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
no_neptune=True
)
trainer.train()
A:
Apparently this piece of code after trainer initialization helps:
for cb in trainer.callback_handler.callbacks:
if isinstance(cb, transformers.integrations.NeptuneCallback):
trainer.callback_handler.remove_callback(cb)
Still it would be good if Transformers or Neptune team provided more flexibility with this callback.
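For reference, as far as I can tell neither a --no-neptune flag nor a no_neptune argument exists in the Trainer API; the supported way to choose logging integrations per run is the report_to field of TrainingArguments. A minimal sketch (the argument values are illustrative, and model and the datasets are assumed to be defined as in your setup):
from transformers import TrainingArguments, Trainer

args = TrainingArguments(
    output_dir="out",   # illustrative value
    report_to=[],       # no logging integrations; leaving "neptune" out disables that callback
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()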
| How to disable Neptune callback in transformers trainer runs? | After installing Neptune.ai for occasional ML experiments logging, it became included by default into the list of callbacks in all transformers.trainer runs. As a result, it requires proper initialisation with token or else throws NeptuneMissingConfiguration error, demanding token and project name.
This is really annoying, I'd prefer Neptune callback to limit itself to warning or just have it disabled if no token is provided.
Unfortunately there is no obvious way to disable this callback, short of uninstalling Neptune.ai altogether. The doc page at https://huggingface.co/docs/transformers/main_classes/callback states that this callback is enabled by default and gives no way to disable it (unlike some other callbacks that can be disabled by environment variable).
Question: how to disable Neptune callback on per run basis?
| [
"To disable Neptune callback in transformers trainer runs, you can pass the --no-neptune flag to the trainer.train() function.\ntrainer = Trainer(\n model=model,\n args=args,\n train_dataset=train_dataset,\n eval_dataset=eval_dataset,\n no_neptune=True\n)\ntrainer.train()\n\n",
"Apparently this piece of code after trainer initialization helps:\nfor cb in trainer.callback_handler.callbacks:\n if isinstance(cb, transformers.integrations.NeptuneCallback):\n trainer.callback_handler.remove_callback(cb)\n\nStill it would be good if Transformers or Neptune team provided more flexibility with this callback.\n"
] | [
0,
0
] | [] | [] | [
"callback",
"huggingface_transformers",
"neptune",
"python",
"pytorch"
] | stackoverflow_0074678703_callback_huggingface_transformers_neptune_python_pytorch.txt |
Q:
Tkinter - How to disable button when window opened
I'm new to the 'Tkinter' library and I wanted to know how to disable a button when a new window has been opened. For example, if a button on the main window is clicked, a new window will open, and all buttons on the main window will be disabled. After the window is closed, the buttons should be re-enabled again.
Here's a sample of my code:
from tkinter import *
root = Tk()
def z():
w = Toplevel()
bu = Button(w, text = "Click!", font = 'bold')
bu.pack()
b = Button(root, text = "Click!", command = z)
b.pack()
root.mainloop()
Extra: I would also be grateful if someone could tell me how to close the 'root' window without closing the whole 'Tkinter' program. For example, if a secondary window is open, I would like to be able to close the first window, or at least minimize it.
A:
You can hide window
root.withdraw()
# or
root.iconify()
and show again
root.deiconify()
To disable button
b['state'] = 'disabled'
To enable button
b['state'] = 'normal'
EDIT: as @acw1668 noted in a comment, it needs win.protocol() to run close_second when the user uses the closing button [X] on the title bar.
import tkinter as tk # PEP8: `import *` is not preferred
#--- functions ---
def close_second():
win.destroy()
b['state'] = 'normal'
root.deiconify()
def open_second():
global win
b['state'] = 'disabled'
#root.iconify()
root.withdraw()
win = tk.Toplevel()
win_b = tk.Button(win, text="Close Second", command=close_second)
win_b.pack()
# run `close_second` when user used closing button [X] on title bar
win.protocol("WM_DELETE_WINDOW", close_second)
# --- main ---
root = tk.Tk()
b = tk.Button(root, text="Open Second", command=open_second)
b.pack()
root.mainloop()
A:
You can use .grab_set() to prevent interaction with the main window while the new window is open. You can hide the main window when the new window opens and bring it back when the button in the new window is clicked, like so:
from tkinter import *
root = Tk()
def reopen():
    w.destroy()
    root.deiconify()

def z():
    global w
    w = Toplevel()
    w.grab_set()
    bu = Button(w, text = "Click!", font = 'bold', command=reopen)
    bu.pack()
    root.withdraw()

b = Button(root, text = "Click!", command = z)
b.pack()
root.mainloop()
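If the goal is specifically to disable every button on the main window while the second window is open (rather than hiding the main window), a minimal sketch combining that with grab_set() might look like this (widget names are illustrative):
import tkinter as tk

def set_main_buttons_state(state):
    # walk the main window's direct children and toggle every Button
    for child in root.winfo_children():
        if isinstance(child, tk.Button):
            child['state'] = state

def open_second():
    set_main_buttons_state('disabled')
    win = tk.Toplevel(root)
    win.grab_set()  # send all events to the new window while it is open

    def close_second():
        win.destroy()
        set_main_buttons_state('normal')

    tk.Button(win, text="Close", command=close_second).pack()
    win.protocol("WM_DELETE_WINDOW", close_second)

root = tk.Tk()
tk.Button(root, text="Open", command=open_second).pack()
root.mainloop()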
| Tkinter - How to disable button when window opened | I'm new to the 'Tkinter' library and I wanted to know how to disable a button when a new window has been opened. For example, if a button on the main window is clicked, a new window will open, and all buttons on the main window will be disabled. After the window is closed, the buttons should be re-enabled again.
Here's a sample of my code:
from tkinter import *
root = Tk()
def z():
w = Toplevel()
bu = Button(w, text = "Click!", font = 'bold')
bu.pack()
b = Button(root, text = "Click!", command = z)
b.pack()
root.mainloop()
Extra: I would also be grateful if someone could tell me how to close the 'root' window without closing the whole 'Tkinter' program. For example, if a secondary window is open, I would like to be able to close the first window, or at least minimize it.
| [
"You can hide window\nroot.withdraw()\n\n# or \n\nroot.iconify()\n\nand show again\nroot.deiconify()\n\n\nTo disable button \nb['state'] = 'disabled' \n\nTo enable button \nb['state'] = 'normal'\n\n\nEDIT: as @acw1668 noted in comment it needs win.protocol() to run close_second when user used closing button [X] on title bar\nimport tkinter as tk # PEP8: `import *` is not preferred\n\n#--- functions ---\n\ndef close_second():\n win.destroy()\n\n b['state'] = 'normal'\n\n root.deiconify()\n\ndef open_second():\n global win\n\n b['state'] = 'disabled'\n #root.iconify()\n root.withdraw()\n\n win = tk.Toplevel()\n\n win_b = tk.Button(win, text=\"Close Second\", command=close_second)\n win_b.pack()\n\n # run `close_second` when user used closing button [X] on title bar\n win.protocol(\"WM_DELETE_WINDOW\", close_second)\n\n# --- main ---\n\nroot = tk.Tk()\n\nb = tk.Button(root, text=\"Open Second\", command=open_second)\nb.pack()\n\nroot.mainloop()\n\n",
"You can use .grab_set() to disable interacting with the main window. You can close the window when the button is clicked and reopen the last window when the other button is clicked one is closed like so:\nfrom tkinter import *\n\nroot = Tk()\n\ndef reopen():\n root.mainloop()\n w.withdraw()\n\ndef z():\n w = Toplevel()\n w.grab_set()\n bu = Button(w, text = \"Click!\", font = 'bold', command=reopen)\n bu.pack()\n root.withdraw()\n w.mainloop()\n\nb = Button(root, text = \"Click!\", command = z)\nb.pack()\n\nroot.mainloop() \n\n"
] | [
1,
0
] | [
"Welcome to Tkinter Library.\nI done why you are using that 'w' you can just use root and it work.\nfrom tkinter import *\nroot = Tk()\n\ndef z():\n\n bu = Button(root, text = \"Click!\", font = 'bold')\n bu.pack()\n\nb = Button(root, text = \"Click!\", command = z)\nb.pack()\n\nroot.mainloop()\n\nAsk me if you get any problem in python and tkinter\n"
] | [
-3
] | [
"python",
"python_3.x",
"tk_toolkit",
"tkinter"
] | stackoverflow_0060470329_python_python_3.x_tk_toolkit_tkinter.txt |
Q:
How to animate object color changing with Manim?
I want to animate a Dot object so that it periodically changes its color.
Something like this:
I've only found AnimatedBoundary class but it changes only the object's boundary (as the name says ofc).
Is there any way to achieve that with already existing tools?
A:
Maybe something like this could work for you
class ColoredDot(Scene):
def construct(self):
tracker = ValueTracker(0)
def update_color(obj):
T=tracker.get_value()
rgbcolor=[1,1-T,0+T]
m_color=rgb_to_color(rgbcolor)
upd_dot=Dot(color=m_color)
obj.become(upd_dot)
dot=Dot()
dot.add_updater(update_color)
self.add(dot)
self.play(tracker.set_value,1,run_time=5)
self.wait()
where the specific color choice is given by the line
rgbcolor=[1,1-T,0+T]
and the parameter T ranges from 0 to 1.
This gives you values for rgb colors that depend on that parameter T.
You can change this to any function of T you like to give you whatever color change you need. If you want a periodic change, use something like np.sin(T) and change the ranges of T to (0,2*pi), and I would also set the rate_func to linear at that point.
A:
This is an approximation/improvement upon @NickGreefpool's answer above, to more closely resemble the question's source animation.
The inner label is added to better see where the Circle is on the path.
class ColoredDot(Scene):
def construct(self):
tracker = ValueTracker(0)
def update_color(obj):
T = tracker.get_value()
rgbcolor = [1, 1 - T, 0 + T]
m_color = rgb_to_color(rgbcolor)
upd_dot = Dot(color=m_color, radius=0.5)
upd_dot.shift(2*DOWN)
obj.become(upd_dot)
dot = Dot()
dot.add_updater(update_color)
self.add(dot)
tracker_label = DecimalNumber(
tracker.get_value(),
color=WHITE,
num_decimal_places=8,
show_ellipsis=True
)
tracker_label.add_updater(
lambda mob: mob.set_value(tracker.get_value())
)
self.add(tracker_label)
self.play(
Rotate(dot, -360*DEGREES,
about_point=ORIGIN,
rate_func=rate_functions.smooth),
        tracker.animate.set_value(1),
        run_time=4
)
self.wait()
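For a truly periodic color change, the same updater idea works with a trigonometric function of the tracker value, as the first answer suggests with np.sin(T). A small sketch of one possible mapping (the exact color curve is only an example):
import numpy as np

def periodic_rgb(T):
    # oscillates smoothly between [1, 1, 0] and [1, 0, 1] as T increases
    s = (1 - np.cos(2 * np.pi * T)) / 2
    return [1, 1 - s, s]

Using periodic_rgb(tracker.get_value()) in place of the fixed rgbcolor line inside the updater, and animating the tracker over a larger range with a linear rate_func, produces a repeating color cycle.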
| How to animate object color changing with Manim? | I want to animate Dot object to periodically change its color.
Something like this:
I've only found AnimatedBoundary class but it changes only the object's boundary (as the name says ofc).
Is there any way to achieve that with already existing tools?
| [
"Maybe something like this could work for you\nclass ColoredDot(Scene):\n def construct(self):\n \n tracker = ValueTracker(0)\n \n def update_color(obj):\n T=tracker.get_value()\n rgbcolor=[1,1-T,0+T]\n m_color=rgb_to_color(rgbcolor)\n upd_dot=Dot(color=m_color)\n obj.become(upd_dot)\n \n dot=Dot()\n dot.add_updater(update_color)\n self.add(dot)\n \n self.play(tracker.set_value,1,run_time=5)\n self.wait()\n\nwhere the specific color choice is given by the line\nrgbcolor=[1,1-T,0+T]\n\nand the parameter T ranges from 0 to 1.\nThis gives you values for rgb colors that depend on that parameter T.\nYou can change this to any function of T you like to give you whatever color change you need. If you want a periodic change, use something like np.sin(T) and change the ranges of T to (0,2*pi), and I would also set the rate_func to linear at that point.\n",
"\nThis is an approximation/improvement upon @NickGreefpool's answer above, to more closely resemble the question's source animation.\nThe inner label is added to better see where the Circle is on the path.\nclass ColoredDot(Scene):\n def construct(self):\n\n tracker = ValueTracker(0)\n\n def update_color(obj):\n T = tracker.get_value()\n rgbcolor = [1, 1 - T, 0 + T]\n m_color = rgb_to_color(rgbcolor)\n upd_dot = Dot(color=m_color, radius=0.5)\n upd_dot.shift(2*DOWN)\n obj.become(upd_dot)\n\n dot = Dot()\n dot.add_updater(update_color)\n\n self.add(dot)\n\n tracker_label = DecimalNumber(\n tracker.get_value(),\n color=WHITE,\n num_decimal_places=8,\n show_ellipsis=True\n )\n tracker_label.add_updater(\n lambda mob: mob.set_value(tracker.get_value())\n )\n self.add(tracker_label)\n\n\n self.play(\n Rotate(dot, -360*DEGREES,\n about_point=ORIGIN,\n rate_func=rate_functions.smooth),\n tracker.animate.set_value(1)\n run_time=4\n )\n self.wait()\n\n"
] | [
2,
0
] | [] | [] | [
"colors",
"manim",
"python"
] | stackoverflow_0067693569_colors_manim_python.txt |
Q:
How to assign a "null" value from another column?
I would like to create a new column called "season_new", where I want to maintain the non-null season and extract the season for null values from the programme name. My dataframe is something like this:
programme            season
grey's anatomy s1    null
friends season 1     1
grey's anatomy s2    null
big bang theory s2   2
big bang theory      1
peaky blinders       1
I tried using regex:
dt['season_new'] = dt['programme'].str.extract(r'(season\s?\d+|s\s?\d+)')
But it gave me this result:
programme            season   season_new
grey's anatomy s1    null     1
friends season 1     1        1
grey's anatomy s2    null     2
big bang theory s2   2        2
big bang theory      1        null
peaky blinders       1        null
The result that I expected is:
programme            season   season_new
grey's anatomy s1    null     1
friends season 1     1        1
grey's anatomy s2    null     2
big bang theory s2   2        2
big bang theory      1        1
peaky blinders       1        1
A:
When trying your code, for some reason the regex didn't return only the integers:
0 grey's anatomy s1 NaN s1
1 friends season 1 1.0 season 1
2 grey's anatomy s2 NaN s2
3 big bang theory s2 2.0 s2
4 big bang theory 1.0 NaN
5 peaky blinders 1.0 NaN
I am not so great at regex so looked into another option which is below.
df = pd.read_excel(source_file)
# Empty list for data capture
season_data = []
# Loop thought all rows
for idx in df.index:
# Grab value to check
check_val = df["season"][idx]
# If value is not null then keep it
if pd.notnull(check_val):
# Add value to list
season_data.append(int(check_val))
else:
# Extract digits from programme description
extract_result = "".join(i for i in df["programme"][idx] if i.isdigit())
# Add value to list
season_data.append(extract_result)
# Add full list to dataframe
df["season_new"] = season_data
print(df)
Result is:
programme season season_new
0 grey's anatomy s1 NaN 1
1 friends season 1 1.0 1
2 grey's anatomy s2 NaN 2
3 big bang theory s2 2.0 2
4 big bang theory 1.0 1
5 peaky blinders 1.0 1
A:
I think that the easiest way to do this is using the apply() method. I also used Regex
I first tried this, using a piece of your code:
data['season_new'] = data.apply(lambda x: x.season if pd.notna(x.season) else re.search(r'(season\s?\d+|s\s?\d+)',x.programme).group(1), axis=1)
The output was this:
programme season season_new
0 grey's anatomy s1 NaN s1
1 friends season 1 1.0 1.0
2 grey's anatomy s2 NaN s2
3 big bang theory s2 2.0 2.0
4 big bang theory 1.0 1.0
5 peaky blinders 1.0 1.0
As we can see, the column season_new is not 100% correct. So I tried another way:
data['season_new'] = data.apply(lambda x: x.season if pd.notna(x.season) else (x.programme[-1] if x.programme[-1].isdigit() else np.nan), axis=1).astype('int')
The expected output:
programme season season_new
0 grey's anatomy s1 NaN 1
1 friends season 1 1.0 1
2 grey's anatomy s2 NaN 2
3 big bang theory s2 2.0 2
4 big bang theory 1.0 1
5 peaky blinders 1.0 1
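A more compact, vectorized alternative is to extract the digits with a single regex and only fill the gaps in the existing column. A sketch, assuming the season always appears as 's<digits>' or 'season <digits>' at the end of the programme name (dt is the question's dataframe):
# digits following "s" or "season" at the end of the programme name
extracted = dt['programme'].str.extract(r'(?:season|s)\s*(\d+)$', expand=False)
dt['season_new'] = dt['season'].fillna(extracted.astype(float)).astype(int)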
| How to assign a "null" value from another column? | I would like to create a new column called "season_new", where I want to maintain the non-null season and extract the season for null values from the programme name. My dataframe is something like this:
programme
season
grey's anatomy s1
null
friends season 1
1
grey's anatomy s2
null
big bang theory s2
2
big bang theory
1
peaky blinders
1
I'd try using regex.
dt['season_new'] = dt['programme'].str.extract(r'(season\s?\d+|s\s?\d+)')
But it gave me this result:
programme
season
season_new
grey's anatomy s1
null
1
friends season 1
1
1
grey's anatomy s2
null
2
big bang theory s2
2
2
big bang theory
1
null
peaky blinders
1
null
The result that I expected is:
programme
season
season_new
grey's anatomy s1
null
1
friends season 1
1
1
grey's anatomy s2
null
2
big bang theory s2
2
2
big bang theory
1
1
peaky blinders
1
1
| [
"When trying your code, for some reason the regex didn't return only the integers:\n0 grey's anatomy s1 NaN s1\n1 friends season 1 1.0 season 1\n2 grey's anatomy s2 NaN s2\n3 big bang theory s2 2.0 s2\n4 big bang theory 1.0 NaN\n5 peaky blinders 1.0 NaN\n\nI am not so great at regex so looked into another option which is below.\ndf = pd.read_excel(source_file)\n\n# Empty list for data capture\nseason_data = []\n\n# Loop thought all rows\nfor idx in df.index:\n\n # Grab value to check\n check_val = df[\"season\"][idx]\n\n # If value is not null then keep it\n if pd.notnull(check_val):\n\n # Add value to list\n season_data.append(int(check_val))\n\n else:\n # Extract digits from programme description\n extract_result = \"\".join(i for i in df[\"programme\"][idx] if i.isdigit())\n\n # Add value to list\n season_data.append(extract_result)\n\n# Add full list to dataframe\ndf[\"season_new\"] = season_data\n\nprint(df)\n\nResult is:\n programme season season_new\n0 grey's anatomy s1 NaN 1\n1 friends season 1 1.0 1\n2 grey's anatomy s2 NaN 2\n3 big bang theory s2 2.0 2\n4 big bang theory 1.0 1\n5 peaky blinders 1.0 1\n\n",
"I think that the easiest way to do this is using the apply() method. I also used Regex\nI first tried this, using a piece of your code:\ndata['season_new'] = data.apply(lambda x: x.season if pd.notna(x.season) else re.search(r'(season\\s?\\d+|s\\s?\\d+)',x.programme).group(1), axis=1)\n\nThe output was this:\n programme season season_new\n0 grey's anatomy s1 NaN s1\n1 friends season 1 1.0 1.0\n2 grey's anatomy s2 NaN s2\n3 big bang theory s2 2.0 2.0\n4 big bang theory 1.0 1.0\n5 peaky blinders 1.0 1.0\n\nAs we can see the column season_new is not a 100% correct. So i tried in another way:\ndata['season_new'] = data.apply(lambda x: x.season if pd.notna(x.season) else (x.programme[-1] if x.programme[-1].isdigit() else np.nan), axis=1).astype('int')\n\nThe expected output:\n programme season season_new\n0 grey's anatomy s1 NaN 1\n1 friends season 1 1.0 1\n2 grey's anatomy s2 NaN 2\n3 big bang theory s2 2.0 2\n4 big bang theory 1.0 1\n5 peaky blinders 1.0 1\n\n"
] | [
0,
0
] | [
"You can use pandas.Series.fillna since this one accepts Series as a value.\n\nvalue: scalar, dict, Series, or DataFrame\n\nTry this :\ndt['season_new'] = (\n dt['programme']\n .str.extract(r'[season\\s?|s](\\d+)', expand=False)\n .fillna(dt['season'])\n .astype(int)\n )\n\nIf you want to remove the old season, use pandas.Series.pop :\ndt['season_new'] = (\n dt['programme']\n .str.extract(r'[season\\s?|s](\\d+)', expand=False)\n .fillna(dt.pop('season'))\n .astype(int)\n )\n\n# Output :\nprint(dt)\n\n programme season_new\n0 grey's anatomy s1 1\n1 friends season 1 1\n2 grey's anatomy s2 2\n3 big bang theory s2 2\n4 big bang theory 1\n5 peaky blinders 1\n \n\n",
"use following code:\npat = r'[season|s]\\s?(\\d+$)'\ndf.assign(season_new=df['season'].fillna(df['programme'].str.extract(pat)[0]))\n\nresult:\nprogramme season season_new\ngrey's anatomy s1 NaN 1\nfriends season 1 1 1\ngrey's anatomy s2 NaN 2\nbig bang theory s2 2 2\nbig bang theory 1 1\npeaky blinders 1 1\n\n"
] | [
-1,
-1
] | [
"pandas",
"python",
"regex"
] | stackoverflow_0074677738_pandas_python_regex.txt |
Q:
how to apply Slack app_home_opened event in Python Flask Slack App
I am currently working with the Slack Events API to show the Home tab in an existing Slack app. I am struggling to implement the app_home_opened event from the Slack Events API in the app, which is built with Python Flask. When I tried to show the Home tab in a dummy app that does not use Flask, it succeeded, but I want to implement it in the Flask app.
Here is the code that worked in my dummy app.
import os
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler
...
app = App(token=os.environ.get("SLACK_BOT_TOKEN"))
...
@app.event("app_home_opened")
def update_home_tab(client, event, logger):
try:
client.views_publish(
user_id=event["user"],
view={
"type": "home",
"callback_id": "home_view",
"blocks": [
...
]
}
)
except Exception as e:
logger.error(f"Error publishing home tab: {e}")
...
if __name__ == "__main__":
SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
And I want to apply the code above to code like the one below, to show the Home tab.
from slack_bolt.adapter.flask import SlackRequestHandler
from flask import Flask
...
app = Flask(__name__)
...
@app.route('/', methods=['GET'])
def main():
...
@app.route('/', methods=['POST'])
def slack_events():
...
...
if __name__ == '__main__':
app.run(host='...', port=..., debug=True)
A:
Something like this should work.
import os
from slack_bolt import App
...
app = App(token=os.environ.get("SLACK_BOT_TOKEN"))
...
@app.event("app_home_opened")
def update_home_tab(client, event, logger):
try:
client.views_publish(
user_id=event["user"],
view={
"type": "home",
"callback_id": "home_view",
"blocks": [
...
]
}
)
except Exception as e:
logger.error(f"Error publishing home tab: {e}")
...
from flask import Flask, request
from slack_bolt.adapter.flask import SlackRequestHandler
flask_app = Flask(__name__)
handler = SlackRequestHandler(app)
# endpoint for handling all slack events
@flask_app.route("/slack/events", methods=["POST"])
def slack_events():
return handler.handle(request)
# run flask app
if __name__ == "__main__":
flask_app.run(debug=True,host="0.0.0.0", port=8080)
Reference - https://github.com/slackapi/bolt-python/blob/main/examples/flask/app.py
| how to apply Slack app_home_opened event in Python Flask Slack App | I am currently working on Slack Event API to show the Home tab in the existed Slack App. So, I am struggling to implement app_home_opened from the Slack Event API to the app. The app is developed by Python Flask. And when I tried to show home tab in the dummy app which is not using flask, it was succeed. But I want to implement in Python Flask.
Here is the code I was succeed in my dummy app.
import os
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler
...
app = App(token=os.environ.get("SLACK_BOT_TOKEN"))
...
@app.event("app_home_opened")
def update_home_tab(client, event, logger):
try:
client.views_publish(
user_id=event["user"],
view={
"type": "home",
"callback_id": "home_view",
"blocks": [
...
]
}
)
except Exception as e:
logger.error(f"Error publishing home tab: {e}")
...
if __name__ == "__main__":
SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
And I want to apply the code above to the code something like below to show the home tab.
from slack_bolt.adapter.flask import SlackRequestHandler
from flask import Flask
...
app = Flask(__name__)
...
@app.route('/', methods=['GET'])
def main():
...
@app.route('/', methods=['POST'])
def slack_events():
...
...
if __name__ == '__main__':
app.run(host='...', port=..., debug=True)
| [
"Something like this should work.\nimport os\nfrom slack_bolt import App\n\n...\n\napp = App(token=os.environ.get(\"SLACK_BOT_TOKEN\"))\n\n...\n\n@app.event(\"app_home_opened\")\ndef update_home_tab(client, event, logger):\n try:\n client.views_publish(\n user_id=event[\"user\"],\n view={\n \"type\": \"home\",\n \"callback_id\": \"home_view\",\n \"blocks\": [\n\n ...\n\n ]\n }\n )\n \n except Exception as e:\n logger.error(f\"Error publishing home tab: {e}\")\n\n...\n\nfrom flask import Flask, request\nfrom slack_bolt.adapter.flask import SlackRequestHandler\n\nflask_app = Flask(__name__)\nhandler = SlackRequestHandler(app)\n\n# endpoint for handling all slack events\n@flask_app.route(\"/slack/events\", methods=[\"POST\"])\ndef slack_events():\n return handler.handle(request)\n\n# run flask app\nif __name__ == \"__main__\":\n flask_app.run(debug=True,host=\"0.0.0.0\", port=8080)\n\nReference - https://github.com/slackapi/bolt-python/blob/main/examples/flask/app.py\n"
] | [
0
] | [] | [] | [
"flask",
"python",
"slack",
"slack_api",
"slack_block_kit"
] | stackoverflow_0073482118_flask_python_slack_slack_api_slack_block_kit.txt |
Q:
NonUniformImage: numpy example gives 'cannot unpack non-iterable NoneType object' error 2D-Histogram
I'm trying to run this very simple example from numpy page regarding histogram2d:
https://numpy.org/doc/stable/reference/generated/numpy.histogram2d.html.
from matplotlib.image import NonUniformImage
import matplotlib.pyplot as plt
xedges = [0, 1, 3, 5]
yedges = [0, 2, 3, 4, 6]
x = np.random.normal(2, 1, 100)
y = np.random.normal(1, 1, 100)
H, xedges, yedges = np.histogram2d(x, y, bins=(xedges, yedges))
H = H.T
fig = plt.figure(figsize=(7, 3))
ax = fig.add_subplot(131, title='imshow: square bins')
plt.imshow(H, interpolation='nearest', origin='lower',extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]])
ax = fig.add_subplot(132, title='pcolormesh: actual edges',aspect='equal')
X, Y = np.meshgrid(xedges, yedges)
ax.pcolormesh(X, Y, H)
ax = fig.add_subplot(133, title='NonUniformImage: interpolated',aspect='equal', xlim=xedges[[0, -1]], ylim=yedges[[0, -1]])
im = NonUniformImage(ax, interpolation='bilinear')
xcenters = (xedges[:-1] + xedges[1:]) / 2
ycenters = (yedges[:-1] + yedges[1:]) / 2
im.set_data(xcenters,ycenters,H)
ax.images.append(im)
plt.show()
By running this code as in the example, I receive the error
cannot unpack non-iterable NoneType object
This happens as soon as I run the line ax.images.append(im).
Does anyone know why this happens?
I tried to run an example from the numpy website and it doesn't work as expected.
A:
The full error message is:
TypeError Traceback (most recent call last)
File ~\anaconda3\lib\site-packages\IPython\core\formatters.py:339, in BaseFormatter.__call__(self, obj)
337 pass
338 else:
--> 339 return printer(obj)
340 # Finally look for special method names
341 method = get_real_method(obj, self.print_method)
File ~\anaconda3\lib\site-packages\IPython\core\pylabtools.py:151, in print_figure(fig, fmt, bbox_inches, base64, **kwargs)
148 from matplotlib.backend_bases import FigureCanvasBase
149 FigureCanvasBase(fig)
--> 151 fig.canvas.print_figure(bytes_io, **kw)
152 data = bytes_io.getvalue()
153 if fmt == 'svg':
File ~\anaconda3\lib\site-packages\matplotlib\backend_bases.py:2299, in FigureCanvasBase.print_figure(self, filename, dpi, facecolor, edgecolor, orientation, format, bbox_inches, pad_inches, bbox_extra_artists, backend, **kwargs)
2297 if bbox_inches:
2298 if bbox_inches == "tight":
-> 2299 bbox_inches = self.figure.get_tightbbox(
2300 renderer, bbox_extra_artists=bbox_extra_artists)
2301 if pad_inches is None:
2302 pad_inches = rcParams['savefig.pad_inches']
File ~\anaconda3\lib\site-packages\matplotlib\figure.py:1632, in FigureBase.get_tightbbox(self, renderer, bbox_extra_artists)
1629 artists = bbox_extra_artists
1631 for a in artists:
-> 1632 bbox = a.get_tightbbox(renderer)
1633 if bbox is not None and (bbox.width != 0 or bbox.height != 0):
1634 bb.append(bbox)
File ~\anaconda3\lib\site-packages\matplotlib\axes\_base.py:4666, in _AxesBase.get_tightbbox(self, renderer, call_axes_locator, bbox_extra_artists, for_layout_only)
4662 if np.all(clip_extent.extents == axbbox.extents):
4663 # clip extent is inside the Axes bbox so don't check
4664 # this artist
4665 continue
-> 4666 bbox = a.get_tightbbox(renderer)
4667 if (bbox is not None
4668 and 0 < bbox.width < np.inf
4669 and 0 < bbox.height < np.inf):
4670 bb.append(bbox)
File ~\anaconda3\lib\site-packages\matplotlib\artist.py:355, in Artist.get_tightbbox(self, renderer)
340 def get_tightbbox(self, renderer):
341 """
342 Like `.Artist.get_window_extent`, but includes any clipping.
343
(...)
353 The enclosing bounding box (in figure pixel coordinates).
354 """
--> 355 bbox = self.get_window_extent(renderer)
356 if self.get_clip_on():
357 clip_box = self.get_clip_box()
File ~\anaconda3\lib\site-packages\matplotlib\image.py:943, in AxesImage.get_window_extent(self, renderer)
942 def get_window_extent(self, renderer=None):
--> 943 x0, x1, y0, y1 = self._extent
944 bbox = Bbox.from_extents([x0, y0, x1, y1])
945 return bbox.transformed(self.axes.transData)
TypeError: cannot unpack non-iterable NoneType object
<Figure size 504x216 with 3 Axes>
The error occurs deep in the append call, and appears to involve trying to get information about the plot window. If I comment out the append line, it continues on to the plt.show(), and the resulting image looks like the example, except that the third image is blank.
I tested this in a Windows QtConsole; I don't know whether that context poses problems for this append or not. I don't think it's a problem with your copy of the code.
| NonUniformImage: numpy example gives 'cannot unpack non-iterable NoneType object' error 2D-Histogram | I'm trying to run this very simple example from numpy page regarding histogram2d:
https://numpy.org/doc/stable/reference/generated/numpy.histogram2d.html.
from matplotlib.image import NonUniformImage
import matplotlib.pyplot as plt
xedges = [0, 1, 3, 5]
yedges = [0, 2, 3, 4, 6]
x = np.random.normal(2, 1, 100)
y = np.random.normal(1, 1, 100)
H, xedges, yedges = np.histogram2d(x, y, bins=(xedges, yedges))
H = H.T
fig = plt.figure(figsize=(7, 3))
ax = fig.add_subplot(131, title='imshow: square bins')
plt.imshow(H, interpolation='nearest', origin='lower',extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]])
ax = fig.add_subplot(132, title='pcolormesh: actual edges',aspect='equal')
X, Y = np.meshgrid(xedges, yedges)
ax.pcolormesh(X, Y, H)
ax = fig.add_subplot(133, title='NonUniformImage: interpolated',aspect='equal', xlim=xedges[[0, -1]], ylim=yedges[[0, -1]])
im = NonUniformImage(ax, interpolation='bilinear')
xcenters = (xedges[:-1] + xedges[1:]) / 2
ycenters = (yedges[:-1] + yedges[1:]) / 2
im.set_data(xcenters,ycenters,H)
ax.images.append(im)
plt.show()
By running this code as in the example, I receive the error
cannot unpack non-iterable NoneType object
This happens as soon as I run the line ax.images.append(im).
Does anyone know why this happens?
Tried to run an example from numpy website and doesn't work as expected.
| [
"The full error message is:\nTypeError Traceback (most recent call last)\nFile ~\\anaconda3\\lib\\site-packages\\IPython\\core\\formatters.py:339, in BaseFormatter.__call__(self, obj)\n 337 pass\n 338 else:\n--> 339 return printer(obj)\n 340 # Finally look for special method names\n 341 method = get_real_method(obj, self.print_method)\n\nFile ~\\anaconda3\\lib\\site-packages\\IPython\\core\\pylabtools.py:151, in print_figure(fig, fmt, bbox_inches, base64, **kwargs)\n 148 from matplotlib.backend_bases import FigureCanvasBase\n 149 FigureCanvasBase(fig)\n--> 151 fig.canvas.print_figure(bytes_io, **kw)\n 152 data = bytes_io.getvalue()\n 153 if fmt == 'svg':\n\nFile ~\\anaconda3\\lib\\site-packages\\matplotlib\\backend_bases.py:2299, in FigureCanvasBase.print_figure(self, filename, dpi, facecolor, edgecolor, orientation, format, bbox_inches, pad_inches, bbox_extra_artists, backend, **kwargs)\n 2297 if bbox_inches:\n 2298 if bbox_inches == \"tight\":\n-> 2299 bbox_inches = self.figure.get_tightbbox(\n 2300 renderer, bbox_extra_artists=bbox_extra_artists)\n 2301 if pad_inches is None:\n 2302 pad_inches = rcParams['savefig.pad_inches']\n\nFile ~\\anaconda3\\lib\\site-packages\\matplotlib\\figure.py:1632, in FigureBase.get_tightbbox(self, renderer, bbox_extra_artists)\n 1629 artists = bbox_extra_artists\n 1631 for a in artists:\n-> 1632 bbox = a.get_tightbbox(renderer)\n 1633 if bbox is not None and (bbox.width != 0 or bbox.height != 0):\n 1634 bb.append(bbox)\n\nFile ~\\anaconda3\\lib\\site-packages\\matplotlib\\axes\\_base.py:4666, in _AxesBase.get_tightbbox(self, renderer, call_axes_locator, bbox_extra_artists, for_layout_only)\n 4662 if np.all(clip_extent.extents == axbbox.extents):\n 4663 # clip extent is inside the Axes bbox so don't check\n 4664 # this artist\n 4665 continue\n-> 4666 bbox = a.get_tightbbox(renderer)\n 4667 if (bbox is not None\n 4668 and 0 < bbox.width < np.inf\n 4669 and 0 < bbox.height < np.inf):\n 4670 bb.append(bbox)\n\nFile ~\\anaconda3\\lib\\site-packages\\matplotlib\\artist.py:355, in Artist.get_tightbbox(self, renderer)\n 340 def get_tightbbox(self, renderer):\n 341 \"\"\"\n 342 Like `.Artist.get_window_extent`, but includes any clipping.\n 343 \n (...)\n 353 The enclosing bounding box (in figure pixel coordinates).\n 354 \"\"\"\n--> 355 bbox = self.get_window_extent(renderer)\n 356 if self.get_clip_on():\n 357 clip_box = self.get_clip_box()\n\nFile ~\\anaconda3\\lib\\site-packages\\matplotlib\\image.py:943, in AxesImage.get_window_extent(self, renderer)\n 942 def get_window_extent(self, renderer=None):\n--> 943 x0, x1, y0, y1 = self._extent\n 944 bbox = Bbox.from_extents([x0, y0, x1, y1])\n 945 return bbox.transformed(self.axes.transData)\n\nTypeError: cannot unpack non-iterable NoneType object\n<Figure size 504x216 with 3 Axes>\n\nThe error occurs deep in the append call, and appears to involve trying to get information about the plot window. If I comment out the append line, and it continues on to the plt.show(), and resulting image looks like the example, except the third image is blank.\n\nI tested this in a Windows QtConsole; I don't know if that context posses problems for this append or not. I don't think it's a problem with your code copy.\n"
] | [
0
] | [] | [] | [
"histogram2d",
"numpy",
"python"
] | stackoverflow_0074677859_histogram2d_numpy_python.txt |
Q:
Pyinstaller - Error loading Python DLL - FormatMessageW failed
I compiled my .py file running following commands:
pyinstaller myfile.py --onefile.
When I run it on my PC (Windows 10), everything works just fine.
When I try to run it on my virtual machine (Windows 8), I get the following error:
Error loading Python DLL
'C:\Users\MyUsername\Appdata\Local\Temp\NUMBERS\python36.dll'
LoadLibrary: PyInstaller: FormatMessageW failed.
I already googled the error and found many solutions, but none of them worked.
//UPDATE:
If I compile it on my virtual machine, everything runs fine on the virtual machine, my main PC and even on my Windows server, which is strange, so it must be a problem with my main PC.
Kind Regards
A:
I had a similar problem trying to run a python-based program (aws cli) and getting the "Error loading Python DLL ... LoadLibrary: The specified module could not be found." on Windows Server 2008 R2.
I solved this issue by installing the Visual C++ Redistributable for Visual Studio 2015 run-time components. https://www.microsoft.com/en-us/download/confirmation.aspx?id=48145
Hope it helps!
A:
This also happens when you run the .exe file located in the build folder.
You need to run the .exe located in the dist folder.
If the error persists even with the .exe in the dist folder, check the exact version of Python, download the python DLL for that exact version, and keep it in the folder suggested by the error message (the path where the DLL is missing).
A:
Try to download a 32 bit version of python36.dll (or 64 if you tried 32)
That fixed the problem for me
| Pyinstaller - Error loading Python DLL - FormatMessageW failed | I compiled my .py file running following commands:
pyinstaller myfile.py --onefile.
When i run it on my pc(Windows 10) everything works just fine.
When i try to run it on my `virtual machine(Windows 8).
I get the following error:
Error loading Python DLL
'C:\Users\MyUsername\Appdata\Local\Temp\NUMBERS\python36.dll'
LoadLibrary: PyInstaller: FormatMessageW failed.
I already googled the error and i found many solutions but none of them worked..
//UPDATE:
If i compile it with my virtual machine, everything runs fine on the virtual machine, main pc and even on my windows server.. strange.. so it must be a problem with my main pc.
Kind Regards
| [
"I had a similar problem trying to run a python-based program (aws cli) and getting the \"Error loading Python DLL ... LoadLibrary: The specified module could not be found.\" on Windows Server 2008 R2. \nI solved this issue by installing the Visual C++ Redistributable for Visual Studio 2015 run-time components. https://www.microsoft.com/en-us/download/confirmation.aspx?id=48145\nHope it helps!\n",
"This also happens when you read the .exe file located in build.\nYou need to run the exe located in dist folder.\nIf the error persists even on dist folder .exe , check the exact version of python, download python dll from internet for that exact version, in keep in the folder suggested by the error message (path where this dll is missing).\n",
"Try to download a 32 bit version of python36.dll (or 64 if you tried 32)\nThat fixed the problem for me\n"
] | [
3,
0,
0
] | [
"You can use auto-py-to-exe instead:\npython -m pip install auto-py-to-exe\nAnd then wait for it to download and then write in then cmd (or terminal):\nauto-py-to-exe\nA screen will appear:\n\nAnd just make as I made in the screenshot, then press \"convert .py to .exe\" and then press \"show output folder\".\n"
] | [
-1
] | [
"pyinstaller",
"python"
] | stackoverflow_0054214600_pyinstaller_python.txt |
Q:
How Do I Check RegEx In Integer Form
I am trying to do the Advent of Code 2022, 1st problem. (DON'T TELL THE ANSWER.) What I am doing is reading the file, taking each number and adding it to a sum value. What happens is that when I come across the "\n", it doesn't understand it, and I am having trouble trying to create the array of sums. Can anyone help?
`
with open("input.txt") as f:
list_array = f.read().split("\n")
print(list_array)
new_array = []
sum = 0
for i in list_array:
print(i)
if i == "\n":
new_array.append(sum)
sum = 0
sum += int(str(i))
print(sum)
`
I was trying to convert it back to a str and then an int, but it doesn't work.
A:
Instead of checking for i == "\n", you should check i == ''. As the split based on \n will remove all the \n but left empty strings ''
And for the line sum += int(str(i)), it should be applied when i != '' only.
So the modified code should be:
with open("input.txt") as f:
list_array = f.read().split("\n")
print(list_array)
new_array = []
sum = 0
for i in list_array:
print(i)
if i == '':
new_array.append(sum)
sum = 0
else:
sum += int(str(i))
print(sum)
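One edge case worth noting: if the input file does not end with a blank line, the final group's total is never appended inside the loop. A small addition after the for-loop (a sketch) covers it:
if sum:
    new_array.append(sum)  # store the last group's total as well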
| How Do I Check RegEx In Integer Form | I am trying to do the Advent Of Code 2022, 1st problem. (DONT TELL THE ANSWER). What i am doing is reading the file and taking each number and adding it to a sum value. What happens is, when I come across the "\n", it doesn't understand it and I am having trouble trying to create the array of sums. Can anyone help?
`
with open("input.txt") as f:
list_array = f.read().split("\n")
print(list_array)
new_array = []
sum = 0
for i in list_array:
print(i)
if i == "\n":
new_array.append(sum)
sum = 0
sum += int(str(i))
print(sum)
`
I was trying to convert to back to a str then an int, but it doesn't work
| [
"Instead of checking for i == \"\\n\", you should check i == ''. As the split based on \\n will remove all the \\n but left empty strings ''\nAnd for the line sum += int(str(i)), it should be applied when i != '' only.\nSo the modified code should be:\nwith open(\"input.txt\") as f:\n list_array = f.read().split(\"\\n\")\n print(list_array)\n new_array = []\n sum = 0\n for i in list_array:\n print(i)\n if i == '':\n new_array.append(sum)\n sum = 0\n else: \n sum += int(str(i))\n print(sum)\n\n"
] | [
0
] | [] | [] | [
"integer",
"python",
"validation"
] | stackoverflow_0074678597_integer_python_validation.txt |
Q:
User Defined function problem (Language:Python)
Write the definition of a user defined function PushA(N) which accepts a list of names in N and pushes all those names which have letter 'A' present in it ,into a list named OnlyA. Write a program in python to input 5 names and push them one by one into a list named AllNames. The program should then use the function PushA() to create a stack of names in the list OnlyA so that it stores only those names which have the letter 'A' from the list AllNames. Pop each name from the list OnlyA and display the popped Name and when the stack is empty display the message "EMPTY".
For example:
If the names input and pushed into the list AllNames are
['AARON','PENNY','TALON','JOY']
Then stack OnlyA should store
['AARON','TALON']
And the output should be displayed as
AARON PENNY TALON JOY
I was unable to come up with a code for the above problem
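A minimal sketch of one possible approach (input handling is kept simple, and the messages follow the problem statement):
def PushA(N):
    OnlyA = []
    for name in N:
        if 'A' in name.upper():
            OnlyA.append(name)          # push
    return OnlyA

AllNames = []
for _ in range(5):
    AllNames.append(input("Enter a name: ").upper())

OnlyA = PushA(AllNames)

while OnlyA:
    print(OnlyA.pop())                  # pop and display, last in first out
print("EMPTY")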
| User Defined function problem (Language:Python) | Write the definition of a user defined function PushA(N) which accepts a list of names in N and pushes all those names which have letter 'A' present in it ,into a list named OnlyA. Write a program in python to input 5 names and push them one by one into a list named AllNames. The program should then use the function PushA() to create a stack of names in the list OnlyA so that it stores only those names which have the letter 'A' from the list AllNames. Pop each name from the list OnlyA and display the popped Name and when the stack is empty display the message "EMPTY".
For example:
If the names input and pushed into the list AllNames are
['AARON','PENNY','TALON','JOY']
Then stack OnlyA should store
['AARON','TALON']
And the output should be displayed as
AARON PENNY TALON JOY
I was unable to come up with a code for the above problem
| [] | [] | [
"# Function to push names with 'A' into a list named OnlyA\n\ndef PushA(N):\n\n OnlyA = []\n\n for name in N:\n\n if 'A' in name.upper():\n\n OnlyA.append(name)\n\n return OnlyA\n\n# Main program\n\nAllNames = []\n\n"
] | [
-2
] | [
"list",
"python",
"stack",
"user_defined_functions"
] | stackoverflow_0074678774_list_python_stack_user_defined_functions.txt |
Q:
Python Flask Apache wsgi Not Working
EDIT-1 - I was having import issues running the app with mod_wsgi and the command line, but I have resolved those. I still can't get the mod_wsgi part to work, as detailed below.
EDIT-2 Now the mod_wsgi is loading the login page, but the sqlite db is complaining. And one import either works for mod_wsgi, or the command line invocation, depending on how it is written.
EDIT-3 Fixed the sqlite error. Needed to have the full path to the db file in the factory. I still have the import issue as described below.
I have a flask application (my first) in a rocket_launcher_flask/rocket_launcher and I can't seem to get the both the command line invocation and the wsgi connection to work with the same code base. The errors occur in two places.
In the factory function in __init__.py I have this import:
#from . import rocket_launcher # works for command line launch
import rocket_launcher # works for wsgi
app.register_blueprint(rocket_launcher.bp)
If I just use the import rocket_launcher and run from the command line, I get this error:
File "/home/mark/python-projects/rocket_launcher_flask/rocket_launcher/__init__.py", line 67, in create_app
app.register_blueprint(rocket_launcher.bp)
AttributeError: module 'rocket_launcher' has no attribute 'bp'
But, if you look at the rocket_launcher.py file, bp is defined at the top of the file (complete file shown below):
from flask import (
Blueprint, flash, g, redirect, render_template, request, session, url_for, make_response
)
bp = Blueprint('rocket_launcher', __name__)
If I run the app using wsgi, the app works as expected.
However, if I change the import to
from . import rocket_launcher # works for command line launch
#import rocket_launcher # works for wsgi
app.register_blueprint(rocket_launcher.bp)
and run from the command line, there are no errors and the app works as designed with no other code changes.
However, running the app using this import and using wsgi yields this error:
Sat Dec 03 16:11:59.368196 2022] [wsgi:error] [pid 1297960:tid 140355496306432] [client 192.168.25.15:57682] File "/home/mark/python-projects/rocket_launcher_flask/rocket_launcher/__init__.py", line 65, in create_app, referer: http://192.168.25.15/
[Sat Dec 03 16:11:59.368201 2022] [wsgi:error] [pid 1297960:tid 140355496306432] [client 192.168.25.15:57682] from . import rocket_launcher, referer: http://192.168.25.15/
[Sat Dec 03 16:11:59.368218 2022] [wsgi:error] [pid 1297960:tid 140355496306432] [client 192.168.25.15:57682] ImportError: attempted relative import with no known parent package, referer: http://192.168.25.15/
The app uses this invocation for the command line:
#!/bin/bash
export FLASK_APP=rocket_launcher
export FLASK_ENV=development
python -m flask run --host=0.0.0.0
My application has the following structure:
├── rocket_launcher_flask
├── instance
├── run.sh -- script (above) to run app from CLI
├── rocket_launcher.sqlite
├── rocket_launcher
├── auth.py
├── db.py
├── fsm.py
├── hardware.py
├── __init__.py
├── model.py
├── rocket_launcher_flask.wsgi
├── rocket_launcher.py
├── schema.sql
├── static
├── templates
│ ├── <lots of templates>
My new and improved rocket_launcher_flask.wsgi file:
#! /home/mark/.virtualenvs/rocket_launcher_flask/bin/python
import logging
import sys
logging.basicConfig(stream=sys.stderr)
sys.path.insert(0, '/home/mark/python-projects/rocket_launcher_flask/rocket_launcher')
#sys.path.insert(0, '/home/mark/.virtualenvs/rocket_launcher_flask/lib/python3.8/site-packages/')
#sys.path.insert(0, '/home/mark/.virtualenvs/rocket_launcher_flask/bin/python')
activate_this = '/home/mark/.virtualenvs/rocket_launcher_flask/bin/activate_this.py'
with open(activate_this) as file_:
exec(file_.read(), dict(__file__=activate_this))
logging.error("path=%s" % sys.path)
from __init__ import create_app
application = create_app()
My __init__.py:
import os
from datetime import datetime
from flask import Flask, redirect, url_for
from flask_wtf.csrf import CSRFProtect
global launcher
import sys
sys.path.insert(0, '/home/mark/python-projects/rocket_launcher_flask/rocket_launcher')
sys.path.insert(0, '/home/mark/.virtualenvs/rocket_launcher_flask/lib/python3.8/site-packages/')
sys.path.insert(0, '/home/mark/.virtualenvs/rocket_launcher_flask/bin/python')
print("sys.path=%s" % sys.path)
def create_app(test_config=None):
# create and configure the app
app = Flask(__name__, instance_relative_config=False)
app.config.from_mapping(
DEBUG=False,
SECRET_KEY=os.urandom(42), #b'\xc29\xe7\x98@\xc3\x12~\xde3\xed\nP\x1e\x8f\xcd', #created from python -c 'import os; print(os.urandom(16))'
DATABASE=os.path.join(app.instance_path, '/full/path/to/rocket_launcher.sqlite'),
)
if 'WINGDB_ACTIVE' in os.environ:
app.debug = False
csrf = CSRFProtect()
csrf.init_app(app)
if test_config is None:
# load the instance config, if it exists, when not testing
app.config.from_pyfile('config.py', silent=True)
else:
# load the test config if passed in
app.config.from_mapping(test_config)
# ensure the instance folder exists
try:
os.makedirs(app.instance_path)
except OSError:
pass
@app.context_processor
def get_copyright_years():
start = '2021'
now = datetime.utcnow()
end = str(now.year)
return {'copyright_years': "%s - %s" % (start, end)}
# a simple page that says hello
@app.route('/hello')
def hello():
return 'Hello, World!'
from . import db
db.init_app(app)
from . import auth
app.register_blueprint(auth.bp)
app.add_url_rule('/', endpoint='auth.login')
#import rocket_launcher # works for wsgi
from . import rocket_launcher # works for command line
app.register_blueprint(rocket_launcher.bp)
from . import model
app.register_blueprint(model.bp)
from . import fsm
app.register_blueprint(fsm.bp)
return app
My rocket_launcher.py - I removed most of the function bodies to keep things easier to read.
import functools
from fsm import launcher
import hardware
import logging
from flask import (
Blueprint, flash, g, redirect, render_template, request, session, url_for, make_response
)
from werkzeug.exceptions import abort
from auth import login_required, logout
from db import get_db
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
bp = Blueprint('rocket_launcher', __name__)
def create_safe_response(endpoint=None, template=None, **data):
@bp.route('/connected', methods=('GET', 'POST'))
@login_required
def connected():
@bp.route('/armed', methods=('GET', 'POST'))
@login_required
#@valid_key_required
def armed():
@bp.route('/ignition', methods=('GET', 'POST'))
@login_required
#@valid_key_required
#@launch_initiated
def ignition():
@bp.route('/ignition_timer_done', methods=('GET', 'POST'))
@login_required
def ignition_timer_done():
@bp.route('/admin', methods=('GET', 'POST'))
@login_required
def admin():
def get_continuity(test=False):
Thanks for any suggestions you may have to fix my import problems!
A:
I am not sure this is the answer, but it is a workable solution.
I changed the name of the file rocket_launcher.py to controller.py and changed the appropriate references to rocket_launcher in the code to controller.
I then changed the import in __init__.py to `import controller`.
And voila, it works for both wsgi and command line invocations!
I guess there were too many rocket_launcher references that pointed to different things, and python got confused?? A much more knowledgeable Pythonista will have to answer that question.
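A likely explanation is that the sys.path entries point inside the package, so the name rocket_launcher can resolve either to the package or to the rocket_launcher.py module depending on how the app is started. An alternative that keeps the original file name is to put only the parent directory on sys.path and import the package explicitly; a sketch, assuming the layout shown in the question:
# rocket_launcher_flask.wsgi -- sketch; paths follow the question's layout
import sys

# parent of the package, not the package directory itself
sys.path.insert(0, '/home/mark/python-projects/rocket_launcher_flask')

from rocket_launcher import create_app  # the package's __init__.py

application = create_app()

With that arrangement the factory can keep the relative form from . import rocket_launcher, and the sibling imports inside rocket_launcher.py (from auth import ..., from db import ...) would likewise need the relative form (from .auth import ...) so that both mod_wsgi and flask run resolve them the same way.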
| Python Flask Apache wsgi Not Working | EDIT-1 - I was having import issues running the app with mod_wsgi and the command line, but I have resolved those. I still can't get the mod_wsgi part to work, as detailed below.
EDIT-2 Now the mod_wsgi is loading the login page, but the sqlite db is complaining. And one import either works for mod_wsgi, or the command line invocation, depending on how it is written.
EDIT-3 Fixed the sqlite error. Needed to have the full path to the db file in the factory. I still have the import issue as described below.
I have a flask application (my first) in a rocket_launcher_flask/rocket_launcher and I can't seem to get the both the command line invocation and the wsgi connection to work with the same code base. The errors occur in two places.
In the factory function in __init__.py I have this import:
#from . import rocket_launcher # works for command line launch
import rocket_launcher # works for wsgi
app.register_blueprint(rocket_launcher.bp)
If I just use the import rocket_launcher and run from the command line, I get this error:
File "/home/mark/python-projects/rocket_launcher_flask/rocket_launcher/__init__.py", line 67, in create_app
app.register_blueprint(rocket_launcher.bp)
AttributeError: module 'rocket_launcher' has no attribute 'bp'
But, if you look at the rocket_launcher.py file, bp is defined at the top of the file (complete file shown below):
from flask import (
Blueprint, flash, g, redirect, render_template, request, session, url_for, make_response
)
bp = Blueprint('rocket_launcher', __name__)
If I run the app using wsgi, the app works as expected.
However, if I change the import to
from . import rocket_launcher # works for command line launch
#import rocket_launcher # works for wsgi
app.register_blueprint(rocket_launcher.bp)
and run from the command line, there are no errors and the app works as designed with no other code changes.
However, running the app using this import and using wsgi yields this error:
Sat Dec 03 16:11:59.368196 2022] [wsgi:error] [pid 1297960:tid 140355496306432] [client 192.168.25.15:57682] File "/home/mark/python-projects/rocket_launcher_flask/rocket_launcher/__init__.py", line 65, in create_app, referer: http://192.168.25.15/
[Sat Dec 03 16:11:59.368201 2022] [wsgi:error] [pid 1297960:tid 140355496306432] [client 192.168.25.15:57682] from . import rocket_launcher, referer: http://192.168.25.15/
[Sat Dec 03 16:11:59.368218 2022] [wsgi:error] [pid 1297960:tid 140355496306432] [client 192.168.25.15:57682] ImportError: attempted relative import with no known parent package, referer: http://192.168.25.15/
The app uses this invocation for the command line:
#!/bin/bash
export FLASK_APP=rocket_launcher
export FLASK_ENV=development
python -m flask run --host=0.0.0.0
My application has the following structure:
├── rocket_launcher_flask
├── instance
├── run.sh -- script (above) to run app from CLI
├── rocket_launcher.sqlite
├── rocket_launcher
├── auth.py
├── db.py
├── fsm.py
├── hardware.py
├── __init__.py
├── model.py
├── rocket_launcher_flask.wsgi
├── rocket_launcher.py
├── schema.sql
├── static
├── templates
│ ├── <lots of templates>
My new and improved rocket_launcher_flask.wsgi file:
#! /home/mark/.virtualenvs/rocket_launcher_flask/bin/python
import logging
import sys
logging.basicConfig(stream=sys.stderr)
sys.path.insert(0, '/home/mark/python-projects/rocket_launcher_flask/rocket_launcher')
#sys.path.insert(0, '/home/mark/.virtualenvs/rocket_launcher_flask/lib/python3.8/site-packages/')
#sys.path.insert(0, '/home/mark/.virtualenvs/rocket_launcher_flask/bin/python')
activate_this = '/home/mark/.virtualenvs/rocket_launcher_flask/bin/activate_this.py'
with open(activate_this) as file_:
exec(file_.read(), dict(__file__=activate_this))
logging.error("path=%s" % sys.path)
from __init__ import create_app
application = create_app()
My __init__.py:
import os
from datetime import datetime
from flask import Flask, redirect, url_for
from flask_wtf.csrf import CSRFProtect
global launcher
import sys
sys.path.insert(0, '/home/mark/python-projects/rocket_launcher_flask/rocket_launcher')
sys.path.insert(0, '/home/mark/.virtualenvs/rocket_launcher_flask/lib/python3.8/site-packages/')
sys.path.insert(0, '/home/mark/.virtualenvs/rocket_launcher_flask/bin/python')
print("sys.path=%s" % sys.path)
def create_app(test_config=None):
# create and configure the app
app = Flask(__name__, instance_relative_config=False)
app.config.from_mapping(
DEBUG=False,
SECRET_KEY=os.urandom(42), #b'\xc29\xe7\x98@\xc3\x12~\xde3\xed\nP\x1e\x8f\xcd', #created from python -c 'import os; print(os.urandom(16))'
DATABASE=os.path.join(app.instance_path, '/full/path/to/rocket_launcher.sqlite'),
)
if 'WINGDB_ACTIVE' in os.environ:
app.debug = False
csrf = CSRFProtect()
csrf.init_app(app)
if test_config is None:
# load the instance config, if it exists, when not testing
app.config.from_pyfile('config.py', silent=True)
else:
# load the test config if passed in
app.config.from_mapping(test_config)
# ensure the instance folder exists
try:
os.makedirs(app.instance_path)
except OSError:
pass
@app.context_processor
def get_copyright_years():
start = '2021'
now = datetime.utcnow()
end = str(now.year)
return {'copyright_years': "%s - %s" % (start, end)}
# a simple page that says hello
@app.route('/hello')
def hello():
return 'Hello, World!'
from . import db
db.init_app(app)
from . import auth
app.register_blueprint(auth.bp)
app.add_url_rule('/', endpoint='auth.login')
#import rocket_launcher # works for wsgi
from . import rocket_launcher # works for command line
app.register_blueprint(rocket_launcher.bp)
from . import model
app.register_blueprint(model.bp)
from . import fsm
app.register_blueprint(fsm.bp)
return app
My rocket_launcher.py - I removed most of the function bodies to keep things easier to read.
import functools
from fsm import launcher
import hardware
import logging
from flask import (
Blueprint, flash, g, redirect, render_template, request, session, url_for, make_response
)
from werkzeug.exceptions import abort
from auth import login_required, logout
from db import get_db
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
bp = Blueprint('rocket_launcher', __name__)
def create_safe_response(endpoint=None, template=None, **data):
@bp.route('/connected', methods=('GET', 'POST'))
@login_required
def connected():
@bp.route('/armed', methods=('GET', 'POST'))
@login_required
#@valid_key_required
def armed():
@bp.route('/ignition', methods=('GET', 'POST'))
@login_required
#@valid_key_required
#@launch_initiated
def ignition():
@bp.route('/ignition_timer_done', methods=('GET', 'POST'))
@login_required
def ignition_timer_done():
@bp.route('/admin', methods=('GET', 'POST'))
@login_required
def admin():
def get_continuity(test=False):
Thanks for any suggestions you may have to fix my import problems!
| [
"I am not sure this is the answer, but it is a workable solution.\nI changed the name of the the file rocket_launcher.py to controller.py and changed the appropriate references to rocket_launcher in the code to controller.\nI then changed the import in __init__.py to `import controller'.\nAnd voila, it works for both wsgi and command line invocations!\nI guess there were too many rocket_launcher references that pointed to different things, and python got confused?? A much more knowledgeable Pythonista will have to answer that question.\n"
] | [
0
] | [] | [] | [
"flask",
"mod_wsgi",
"python"
] | stackoverflow_0074668973_flask_mod_wsgi_python.txt |
Q:
When I swap two values in my list it is not a permanent swap in the list?
I'm new to Python, and I have an assignment where I have a frogs vs. toads game. They are in a list and they need to swap places one step at a time.
the list looks like this: ["F","F","F"," ","T","T","T"]
and should look like ["T","T","T"," ","F","F","F"] to win the game. The user inputs From and To and they swap. But my code is not taking the swapped code as the new code when the new From and To are being entered. How do I fix this?
This is all within a while loop as there are other options at the beginning of the game.
Also, one of the rules for the assignment is that the frogs are only allowed to move in one direction, to the left, and the toads vice versa. If anyone knows how to put that into my code that would be very much appreciated.
Here's my code:
elif choice== 'P':
position= ["1","2","3","4","5","6","7"]
frogsandtoads= ["F","F","F"," ","T","T","T"]
print("Position: ",position)
print("Lilypad: ",frogsandtoads)
def swappositions(frogsandtoads, pos1, pos2):
if pos1== 'e':
exit()
if pos1== 'E':
exit()
frogsandtoads[pos1], frogsandtoads[pos2] = frogsandtoads[pos2], frogsandtoads[pos1]#the swapping of the positions
return frogsandtoads
pos1 = fromplace= int(input("From: "))
pos2 = toplace= int(input("To: "))
print(swappositions(frogsandtoads, pos1-1, pos2-1))
frogsandtoads=swappositions(frogsandtoads, pos1-1, pos2-1)
if frogsandtoads== ["T","T","T"," ","F","F","F"]: #this is what is not working
break
this is my outcome when I run the code:
Please choose an option: p
Position: ['1', '2', '3', '4', '5', '6', '7']
Lilypad: ['F', 'F', 'F', ' ', 'T', 'T', 'T']
From: 3
To: 4
['F', 'F', ' ', 'F', 'T', 'T', 'T']
Position: ['1', '2', '3', '4', '5', '6', '7']
Lilypad: ['F', 'F', 'F', ' ', 'T', 'T', 'T']
From:
As you can see I don't know how to make it so the lilypad changes with the input of the from and to the second time round.
A:
So you want it such that the lilypad list does not revert back to ['F', 'F', 'F', ' ', 'T', 'T', 'T'] and that the position list also won't revert back to ['1', '2', '3', '4', '5', '6', '7']?
The problem in your code is that you reset the variables here:
position= ["1","2","3","4","5","6","7"]
frogsandtoads= ["F","F","F"," ","T","T","T"]
If you do not want those lists to get reset to that then you should move those two lines out of your game loop aka outside of your while loop. This should fix the problem. You also have the stepping problem such that the frogs can only go to the left and the toads to the right, but I believe that you can implement this yourself. Feel free to ask questions.
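A minimal sketch of that restructuring (the menu handling around it is assumed; swappositions is the function from the question):
position = ["1", "2", "3", "4", "5", "6", "7"]        # set up once, before the game loop
frogsandtoads = ["F", "F", "F", " ", "T", "T", "T"]
while True:
    choice = input("Please choose an option: ").upper()
    if choice == "P":
        print("Position: ", position)
        print("Lilypad: ", frogsandtoads)
        pos1 = int(input("From: "))
        pos2 = int(input("To: "))
        frogsandtoads = swappositions(frogsandtoads, pos1 - 1, pos2 - 1)
        if frogsandtoads == ["T", "T", "T", " ", "F", "F", "F"]:
            print("You win!")
            break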
| When I swap two values in my list it is not a permanent swap in the list? | I'm new to Python, and I have an assignment where I have a frogs vs. toads game. They are in a list and they need to swap places one step at a time.
the list looks like this: ["F","F","F"," ","T","T","T"]
and should look like ["T","T","T"," ","F","F","F"] to win the game. The user inputs From and To and they swap. But my code is not taking the swapped code as the new code when the new From and To are being entered. How do I fix this?
This is all within a while loop as there are other options at the beginning of the game.
Also, one of the rules for the assignment is that the frogs are only allowed to one one direction to the left and the toads vice versa. if anyone knows how to put that into my code that would be very much appreciated.
Here's my code:
elif choice== 'P':
position= ["1","2","3","4","5","6","7"]
frogsandtoads= ["F","F","F"," ","T","T","T"]
print("Position: ",position)
print("Lilypad: ",frogsandtoads)
def swappositions(frogsandtoads, pos1, pos2):
if pos1== 'e':
exit()
if pos1== 'E':
exit()
frogsandtoads[pos1], frogsandtoads[pos2] = frogsandtoads[pos2], frogsandtoads[pos1]#the swapping of the positions
return frogsandtoads
pos1 = fromplace= int(input("From: "))
pos2 = toplace= int(input("To: "))
print(swappositions(frogsandtoads, pos1-1, pos2-1))
frogsandtoads=swappositions(frogsandtoads, pos1-1, pos2-1)
if frogsandtoads== ["T","T","T"," ","F","F","F"]: #this is what is not working
break
this is my outcome when I run the code:
Please choose an option: p
Position: ['1', '2', '3', '4', '5', '6', '7']
Lilypad: ['F', 'F', 'F', ' ', 'T', 'T', 'T']
From: 3
To: 4
['F', 'F', ' ', 'F', 'T', 'T', 'T']
Position: ['1', '2', '3', '4', '5', '6', '7']
Lilypad: ['F', 'F', 'F', ' ', 'T', 'T', 'T']
From:
As you can see I don't know how to make it so the lilypad changes with the input of the from and to the second time round.
| [
"So you want it such that the lilypad list does not revert back to ['F', 'F', 'F', ' ', 'T', 'T', 'T'] and that the position list also won't revert back to ['1', '2', '3', '4', '5', '6', '7']?\nThe problem in your code is that you reset the variables here:\n position= [\"1\",\"2\",\"3\",\"4\",\"5\",\"6\",\"7\"]\n frogsandtoads= [\"F\",\"F\",\"F\",\" \",\"T\",\"T\",\"T\"]\n\nIf you do not want those lists to get reset to that then you should move those two lines out of your game loop aka outside of your while loop. This should fix the problem. You also have the stepping problem such that the frogs can only go to the left and the toads to the right, but I believe that you can implement this yourself. Feel free to ask questions.\n"
] | [
0
] | [] | [] | [
"list",
"python"
] | stackoverflow_0074672043_list_python.txt |
Q:
Select, move and deselect a card from Solitaire game pygame
I want to select a card with the mouse (the card changes to another image of a card with orange edges), move it (or not) and later deselect the card clicking again, returning it to the original image of the card (without orange edges).
I got the first two steps working, but I can't find a way to deselect the card.
for event in pygame.event.get():
if event.type==pygame.QUIT:
pygame.quit()
sys.exit()
if event.type==pygame.MOUSEBUTTONDOWN:
if event.button==1 and mouse_rect.colliderect(card_rect):
card = pygame.image.load("1c2.png").convert_alpha()
card = pygame.transform.scale(card, (99, 100))
if event.button == 1 and not mouse_rect.colliderect(card_rect):
n = pygame.mouse.get_pos()
x = n[0]
y = n[1]
card_rect.centerx = x
card_rect.centery = y
if event.button==1 and mouse_rect.colliderect(card_rect) and card_rect.width==99:
card = pygame.image.load("1c.png").convert_alpha()
card = pygame.transform.scale(card, (100, 100))
Original image:1c.png
Image selected (with orange edges):1c2.png
I tried changing the width of the card slightly when it is selected, and then using that width in the last conditional that you can see above.
I also tried (in the last conditional too):
if event.button==1 and mouse_rect.colliderect(card_rect) and card==pygame.image.load("1c2.png").convert_alpha():
card = pygame.image.load("1c.png").convert_alpha()
card = pygame.transform.scale(card, (100, 100))
What can I do to fix it?
Thanks!
Wrong result: The card stays at the selected image (card with orange borders).
A:
Do not load the images in the application loop. Load the images before the application loop. Use a Boolean variable (card_selected) to indicate if the card is selected. Invert the state when clicking on the card (card_selected = not card_selected):
card_1 = pygame.image.load("1c.png").convert_alpha()
card_1 = pygame.transform.scale(card_1, (100, 100))
card_2 = pygame.image.load("1c2.png").convert_alpha()
card_2 = pygame.transform.scale(card_2, (100, 100))
card = card_1
card_selected = False
# [...]
run = True
while run:
for event in pygame.event.get():
if event.type==pygame.QUIT:
run = False
if event.type==pygame.MOUSEBUTTONDOWN:
if event.button == 1 and card_rect.collidepoint(event.pos):
card_selected = not card_selected
card = card_2 if card_selected else card_1
# [...]
pygame.quit()
sys.exit()
Do not forget to clear the display in every frame. The entire scene must be redrawn in each frame.
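A minimal sketch of the drawing part of the loop (screen, card and card_rect are assumed to be set up elsewhere, as in the question):
    # inside the while run loop, after event handling: redraw the whole scene
    screen.fill((0, 100, 0))          # clear the previous frame
    screen.blit(card, card_rect)      # draw whichever card image is currently active
    pygame.display.flip()             # show the new frame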
| Select, move and deselect a card from Solitaire game pygame | I want to select a card with the mouse (the card changes to another image of a card with orange edges), move it (or not) and later deselect the card clicking again, returning it to the original image of the card (without orange edges).
I made the two first steps, but I can't find a way to deselect the card.
for event in pygame.event.get():
if event.type==pygame.QUIT:
pygame.quit()
sys.exit()
if event.type==pygame.MOUSEBUTTONDOWN:
if event.button==1 and mouse_rect.colliderect(card_rect):
card = pygame.image.load("1c2.png").convert_alpha()
card = pygame.transform.scale(card, (99, 100))
if event.button == 1 and not mouse_rect.colliderect(card_rect):
n = pygame.mouse.get_pos()
x = n[0]
y = n[1]
card_rect.centerx = x
card_rect.centery = y
if event.button==1 and mouse_rect.colliderect(card_rect) and card_rect.width==99:
card = pygame.image.load("1c.png").convert_alpha()
card = pygame.transform.scale(card, (100, 100))
Original image:1c.png
Image selected (with orange edges):1c2.png
I try to change a little the width of the card when you select it, and after using that in the last conditional that you can see above.
I also tried (in the last conditional too):
if event.button==1 and mouse_rect.colliderect(card_rect) and card==pygame.image.load("1c2.png").convert_alpha():
card = pygame.image.load("1c.png").convert_alpha()
card = pygame.transform.scale(card, (100, 100))
What can I do to fix it?
Thanks!
Wrong result: The card stays at the selected image (card with orange borders).
| [
"Do not load the images in the application loop. Load the images before the application loop. Use a Boolean variable (card_selected) to indicate if the map is selected. Invert the state when clicking on the card (card_selected = not card_selected):\ncard_1 = pygame.image.load(\"1c.png\").convert_alpha()\ncard_1 = pygame.transform.scale(card_1, (100, 100))\ncard_2 = pygame.image.load(\"1c2.png\").convert_alpha()\ncard_2 = pygame.transform.scale(card_2, (100, 100))\ncard = card_1\ncard_selected = False\n\n# [...]\n\nrun = True\nwhile run:\n for event in pygame.event.get():\n if event.type==pygame.QUIT:\n run = False\n\n if event.type==pygame.MOUSEBUTTONDOWN:\n if event.button == 1 and card_rect.collidepoint(event.pos):\n card_selected = not card_selected\n card = card_2 if card_selected else card_1\n\n # [...]\n\npygame.quit()\nsys.exit()\n\nDo not forget to clear the display in every frame. The entire scene must be redrawn in each frame.\n"
] | [
0
] | [] | [] | [
"pygame",
"python"
] | stackoverflow_0074678805_pygame_python.txt |
Q:
Python interact with OS cmd
I would like to know if it is possible, via the os module, to run several commands in one persistent command prompt session
Here is an example of what I would have liked to do but which does not work (non-persistent session):
from os import popen, system, getlogin
system(f'cd C:/Users/{getlogin()}')
print(popen('pip freeze'))
A:
I tested this on Windows and it worked with check_output from subprocess, using cmd /C to execute both commands and exit
from os import getlogin
from subprocess import check_output
cmd_str = fr'cmd.exe /C "cd C:\Users\{getlogin()} && pip freeze"'
output = check_output(cmd_str, shell=True).decode()
for line in output.split('\r\n'):
print(line)
output:
absl-py==1.3.0
aiohttp==3.7.3
altgraph==0.17
astroid==2.4.2
astunparse==1.6.3
.....
A:
It is not possible to use the popen or system functions from the os module to iterate over multiple lines with the command prompt. These functions are designed to run a single command and return the output or result as a string. In order to iterate over multiple lines with the command prompt, you would need to use a different approach, such as spawning a new shell process using the subprocess module and then using the stdin and stdout streams to interact with the command prompt. Here is an example of how you might do this:
import subprocess
from os import getlogin
# Spawn a new shell process (text mode, so we can write/read str instead of bytes)
p = subprocess.Popen(['cmd'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
# Change the working directory to the user's home directory
p.stdin.write(f'cd C:/Users/{getlogin()}\n')
# Run the pip freeze command
p.stdin.write('pip freeze\n')
# Close stdin so cmd exits, then collect everything it printed
output, _ = p.communicate()
# Print the output to the console
print(output)
In this example, the subprocess module is used to spawn a new shell process and create stdin and stdout streams for interacting with the command prompt. The cd command is then used to change the working directory, and the pip freeze command is run. The output of the command is read from the stdout stream and printed to the console.
Hope this helps!
| Python interact with OS cmd | I would like to know if it was possible via the OS module to iterate on several lines with the command prompt
Here is an example of what I would have liked to do but which does not work (non-persistent session):
from os import popen, system, getlogin
system(f'cd C:/Users/{getlogin()}')
print(popen('pip freeze'))
| [
"I tested this on Windows and it worked with check_output from subprocess, using cmd /C to execute both commands and exit\nfrom os import getlogin\nfrom subprocess import check_output\n\ncmd_str = fr'cmd.exe /C \"cd C:\\Users\\{getlogin()} && pip freeze\"'\n\noutput = check_output(cmd_str, shell=True).decode()\nfor line in output.split('\\r\\n'):\n print(line)\n\noutput:\nabsl-py==1.3.0\naiohttp==3.7.3\naltgraph==0.17\nastroid==2.4.2\nastunparse==1.6.3\n.....\n\n",
"It is not possible to use the popen or system functions from the os module to iterate over multiple lines with the command prompt. These functions are designed to run a single command and return the output or result as a string. In order to iterate over multiple lines with the command prompt, you would need to use a different approach, such as spawning a new shell process using the subprocess module and then using the stdin and stdout streams to interact with the command prompt. Here is an example of how you might do this:\nimport subprocess\nfrom os import getlogin\n\n# Spawn a new shell process\np = subprocess.Popen(['cmd'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)\n\n# Change the working directory to the user's home directory\np.stdin.write(f'cd C:/Users/{getlogin()}\\n')\n\n# Run the pip freeze command\np.stdin.write('pip freeze\\n')\n\n# Read the output of the command\noutput = p.stdout.read()\n\n# Print the output to the console\nprint(output)\n\nIn this example, the subprocess module is used to spawn a new shell process and create stdin and stdout streams for interacting with the command prompt. The cd command is then used to change the working directory, and the pip freeze command is run. The output of the command is read from the stdout stream and printed to the console.\nHope this helps!\n"
] | [
1,
0
] | [] | [] | [
"cmd",
"python",
"python_os"
] | stackoverflow_0074677640_cmd_python_python_os.txt |
Q:
Does sonar cloud support decorative messages for Python in a GitHub PR with workflow?
I used the generic workflow https://github.com/SonarSource/sonarcloud-github-action-samples/blob/generic/.github/workflows/build.yml as suggested in the docs.
But I'm not receiving the message from the bot in the repository; the code is analyzed just fine and I can see it on the website.
workflow
name: CI
on:
push:
branches: [ "main", "develop" ]
pull_request:
branches: [ "main", "develop" ]
types: [opened, synchronize, reopened]
workflow_dispatch:
permissions:
pull-requests: read # allows SonarCloud to decorate PRs with analysis results
jobs:
sonar_cloud_report:
runs-on: ubuntu-latest
steps:
- uses : actions/checkout@v3
- uses: ./.github/actions/dependencies
- uses: SonarSource/sonarcloud-github-action@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Needed to get PR information
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }} # Generate a token on Sonarcloud.io, add it to the secrets of this repo with the name SONAR_TOKEN (Settings > Secrets > Actions > add new repository secret)
with:
# Additional arguments for the sonarcloud scanner
args:
# Unique keys of your project and organization. You can find them in SonarCloud > Information (bottom-left menu)
# mandatory
-Dsonar.projectKey=XXX
-Dsonar.organization=XXX
-Dsonar.scm.provider=git
# Comma-separated paths to directories containing main source files.
#-Dsonar.sources= # optional, default is project base directory
# When you need the analysis to take place in a directory other than the one from which it was launched
#-Dsonar.projectBaseDir= # optional, default is .
# Comma-separated paths to directories containing test source files.
#-Dsonar.tests= # optional. For more info about Code Coverage, please refer to https://docs.sonarcloud.io/enriching/test-coverage/overview/
# Adds more detail to both client and server-side analysis logs, activating DEBUG mode for the scanner, and adding client-side environment variables and system properties to the server-side log of analysis report processing.
#-Dsonar.verbose= # optional, default is false
- name: SonarCloud Scan
uses: SonarSource/sonarcloud-github-action@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Needed to get PR information, if any
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
sonar-project.properties
sonar.projectKey=XXX
sonar.organization=XXX
sonar.verbose=true
sonar.python.coverage.reportPaths=coverage.xml
# This is the name and version displayed in the SonarCloud UI.
#sonar.projectName=XXX
#sonar.projectVersion=1.0
# Path is relative to the sonar-project.properties file. Replace "\" by "/" on Windows.
#sonar.sources=.
# Encoding of the source code. Default is default system encoding
#sonar.sourceEncoding=UTF-8
A:
Yes, SonarCloud does support decorating pull requests with analysis results for Python projects. In order for this to work, your GitHub workflow needs to include the pull-requests permission and use the SonarSource/sonarcloud-github-action action to run the SonarCloud analysis.
Based on the information you provided, it looks like your workflow is correctly set up to run the analysis and decorate pull requests. However, there are a few things you can check to troubleshoot this issue:
Make sure that your SonarCloud organization and project keys are correct in the sonar-project.properties file and the args section of the workflow.
Check if your repository has the pull-requests permission enabled. You can do this by going to the repository settings in GitHub and looking for the "SonarCloud" section under "Permissions".
Verify that the SONAR_TOKEN secret is correctly set up in your repository. You can do this by going to the repository settings in GitHub, selecting the "Secrets" tab, and looking for the SONAR_TOKEN secret.
If you are running the analysis on a pull request, make sure that the pull request is targeting the correct branch (e.g. main or develop) and that the sonar.projectKey and sonar.organization properties are correct in the sonar-project.properties file.
If you are still having trouble with this, you may want to check the logs of the SonarCloud Scan step in your workflow to see if there are any error messages or other information that can help you troubleshoot the issue. You can also contact the SonarCloud support team for further assistance.
A:
To receive a message from the bot in your repository, you need to set up webhooks in your repository.
Go to your repository on GitHub and click on "Settings".
In the left menu, click on "Webhooks".
Click on the "Add webhook" button.
In the "Payload URL" field, enter the URL of the webhook. This is typically in the format https:/// (e.g. https://mycompany.com/webhooks).
In the "Content type" field, select the appropriate content type (e.g. "application/json").
In the "Secret" field, enter the secret token that you want to use to verify the authenticity of the webhook request.
In the "Which events would you like to trigger this webhook?" section, select the events that you want to trigger the webhook (e.g. "Push", "Pull request").
Click on the "Add webhook" button to save the webhook.
After you have set up the webhook, the bot should be able to send messages to your repository. You may need to configure your bot to use the webhook URL and secret that you provided in the previous steps.
| Does sonar cloud support decorative messages for Python in a GitHub PR with workflow? | I used the generic workflow https://github.com/SonarSource/sonarcloud-github-action-samples/blob/generic/.github/workflows/build.yml as suggested in the docs.
But I'm not receiving the message from the bot in the repository, code is analyzed just fine and I can see it in the website.
workflow
name: CI
on:
push:
branches: [ "main", "develop" ]
pull_request:
branches: [ "main", "develop" ]
types: [opened, synchronize, reopened]
workflow_dispatch:
permissions:
pull-requests: read # allows SonarCloud to decorate PRs with analysis results
jobs:
sonar_cloud_report:
runs-on: ubuntu-latest
steps:
- uses : actions/checkout@v3
- uses: ./.github/actions/dependencies
- uses: SonarSource/sonarcloud-github-action@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Needed to get PR information
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }} # Generate a token on Sonarcloud.io, add it to the secrets of this repo with the name SONAR_TOKEN (Settings > Secrets > Actions > add new repository secret)
with:
# Additional arguments for the sonarcloud scanner
args:
# Unique keys of your project and organization. You can find them in SonarCloud > Information (bottom-left menu)
# mandatory
-Dsonar.projectKey=XXX
-Dsonar.organization=XXX
-Dsonar.scm.provider=git
# Comma-separated paths to directories containing main source files.
#-Dsonar.sources= # optional, default is project base directory
# When you need the analysis to take place in a directory other than the one from which it was launched
#-Dsonar.projectBaseDir= # optional, default is .
# Comma-separated paths to directories containing test source files.
#-Dsonar.tests= # optional. For more info about Code Coverage, please refer to https://docs.sonarcloud.io/enriching/test-coverage/overview/
# Adds more detail to both client and server-side analysis logs, activating DEBUG mode for the scanner, and adding client-side environment variables and system properties to the server-side log of analysis report processing.
#-Dsonar.verbose= # optional, default is false
- name: SonarCloud Scan
uses: SonarSource/sonarcloud-github-action@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Needed to get PR information, if any
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
sonar-project.properties
sonar.projectKey=XXX
sonar.organization=XXX
sonar.verbose=true
sonar.python.coverage.reportPaths=coverage.xml
# This is the name and version displayed in the SonarCloud UI.
#sonar.projectName=XXX
#sonar.projectVersion=1.0
# Path is relative to the sonar-project.properties file. Replace "\" by "/" on Windows.
#sonar.sources=.
# Encoding of the source code. Default is default system encoding
#sonar.sourceEncoding=UTF-8
| [
"Yes, SonarCloud does support decorating pull requests with analysis results for Python projects. In order for this to work, your GitHub workflow needs to include the pull-requests permission and use the SonarSource/sonarcloud-github-action action to run the SonarCloud analysis.\nBased on the information you provided, it looks like your workflow is correctly set up to run the analysis and decorate pull requests. However, there are a few things you can check to troubleshoot this issue:\nMake sure that your SonarCloud organization and project keys are correct in the sonar-project.properties file and the args section of the workflow.\nCheck if your repository has the pull-requests permission enabled. You can do this by going to the repository settings in GitHub and looking for the \"SonarCloud\" section under \"Permissions\".\nVerify that the SONAR_TOKEN secret is correctly set up in your repository. You can do this by going to the repository settings in GitHub, selecting the \"Secrets\" tab, and looking for the SONAR_TOKEN secret.\nIf you are running the analysis on a pull request, make sure that the pull request is targeting the correct branch (e.g. main or develop) and that the sonar.projectKey and sonar.organization properties are correct in the sonar-project.properties file.\nIf you are still having trouble with this, you may want to check the logs of the SonarCloud Scan step in your workflow to see if there are any error messages or other information that can help you troubleshoot the issue. You can also contact the SonarCloud support team for further assistance.\n",
"To receive a message from the bot in your repository, you need to set up webhooks in your repository.\n\nGo to your repository on GitHub and click on \"Settings\".\nIn the left menu, click on \"Webhooks\".\nClick on the \"Add webhook\" button.\nIn the \"Payload URL\" field, enter the URL of the webhook. This is typically in the format https:/// (e.g. https://mycompany.com/webhooks).\nIn the \"Content type\" field, select the appropriate content type (e.g. \"application/json\").\nIn the \"Secret\" field, enter the secret token that you want to use to verify the authenticity of the webhook request.\nIn the \"Which events would you like to trigger this webhook?\" section, select the events that you want to trigger the webhook (e.g. \"Push\", \"Pull request\").\nClick on the \"Add webhook\" button to save the webhook.\n\nAfter you have set up the webhook, the bot should be able to send messages to your repository. You may need to configure your bot to use the webhook URL and secret that you provided in the previous steps.\n"
] | [
1,
0
] | [] | [] | [
"github",
"github_actions",
"python",
"sonarcloud",
"workflow"
] | stackoverflow_0074559957_github_github_actions_python_sonarcloud_workflow.txt |
Q:
Django waitress- How to run it in Daemon Mode
I have a Django application served with waitress (gunicorn doesn't work on Windows), because it is production code and it runs on Windows Server 2012. I want the Django application to run in daemon mode. Is that possible?
Daemon mode - the app runs without a visible command prompt; it would also be helpful to be able to open a shell without closing the server, and to auto-start if for some reason the system has to restart.
Note:
Restrictions: The project cannot be moved to a UNIX-based system.
Third-party applications, such as any .exe file, cannot be used.
A:
For production:
create a file server.py at the same level as manage.py and add the following:
from waitress import serve
from myapp.wsgi import application
if __name__ == '__main__':
serve(application, port='8000')
Start-Process python -NoNewWindow -ArgumentList "server.py"
You can close the terminal after that and it still runs.
If you later want to stop it, you have to find the process with Get-Process and then end it with TaskKill.
Running with CMD:
START "myapp" /B python server.py
Running in cmd
A:
Just add & at the end of command
A:
Another solution - to use Docker, and in your docker you can use gunicorn or any other linux feature
A:
It is possible to run a Django application in daemon mode on a Windows server using the built-in Windows Services feature. Here are the steps to do this:
Open the Command Prompt as an administrator.
Navigate to the directory where your Django application is located.
Use the following command to install the application as a Windows service:
python manage.py installservice
Use the following command to start the service:
net start <service name>
where <service name> is the name of the service that was installed in step 3.
Once the service is started, it will run in the background as a daemon and will continue to run even if the command prompt is closed or the user logs out.
To make the service start automatically whenever the system is restarted, you can use the following command:
sc config <service name> start=auto
where <service name> is the name of the service that was installed in step 3.
To open a shell for the Django application without stopping the service, you can use the following command:
python manage.py shell
This will open a Python shell with access to the Django application, allowing you to run commands and interact with the application.
| Django waitress- How to run it in Daemon Mode | I've a django application with waitress (gunicorn doesn't work on windows) to serve it. Because its production code and its based on windows 2012 server. But I want the django application to run in daemon mode is it possible?
Daemon mode - app running without command prompt visible also I'll be helpful to open shell without even closing the server. AutoStart if for some reason system has to restart.
Note:
Restrictions: The project cannot be moved to UNIX based system.
Third Party application like any .exe file cannot be used.
| [
"For production:\ncreate a file server.py at same level as manage.py and add following:\nfrom waitress import serve\n \nfrom myapp.wsgi import application\n \nif __name__ == '__main__':\n serve(application, port='8000')\n\nStart-Process python -NoNewWindow -ArgumentList \"server.py\"\nYou can close the terminal after that and it still runs.\nIf you later want to stop, you have to do with Get-Process and then\nTaskKill\nRunning with CMD:\nSTART \"myapp\" /B python server.py\nRunning in cmd\n",
"Just add & at the end of command\n",
"Another solution - to use Docker, and in your docker you can use gunicorn or any other linux feature\n",
"It is possible to run a Django application in daemon mode on a Windows server using the built-in Windows Services feature. Here are the steps to do this:\n\nOpen the Command Prompt as an administrator.\nNavigate to the directory where your Django application is located.\nUse the following command to install the application as a Windows service:\n\npython manage.py installservice\n\nUse the following command to start the service:\nnet start <service name>\n\nWhere is the name of the service that was installed in step 3.\nOnce the service is started, it will run in the background as a daemon and will continue to run even if the command prompt is closed or the user logs out.\nTo make the service start automatically whenever the system is restarted, you can use the following command:\nsc config <service name> start=auto\n\nWhere is the name of the service that was installed in step 3.\nTo open a shell for the Django application without stopping the service, you can use the following command:\npython manage.py shell\n\nThis will open a Python shell with access to the Django application, allowing you to run commands and interact with the application.\n"
] | [
2,
0,
0,
0
] | [] | [] | [
"daemon",
"django",
"python",
"waitress",
"windows"
] | stackoverflow_0074574627_daemon_django_python_waitress_windows.txt |
Q:
Apache Beam Python SDK on Flink Runner to use Snowflake IO
I have an existing Flink cluster in k8s.
I am using Flink's session mode.
I want to set up a periodic ETL job from Snowflake using Apache Beam. Thus, I have tried to use from apache_beam.io.snowflake import ReadFromSnowflake. I am aware that an expansion service is required, but here is where I am struggling.
I have set up a Python worker and a Beam Flink runner job server (version 2.43.0) as sidecars in the k8s pod.
I have passed these pipeline options.
[
"--runner=FlinkRunner",
"--flink_version=1.15",
"--flink_master=http://{host}:8081",
"--environment_type=EXTERNAL",
"--environment_config=localhost:50000",
"--flink_submit_uber_jar",
]
However, I am seeing the following logs.
2022/12/03 11:28:26 Failed to retrieve staged files: failed to retrieve /tmp/1-1/staged in 3 attempts: failed to retrieve chunk for /tmp/1-1/staged/beam-sdks-java-io-snowflake-expansion-service-2.43.0-4Bf-1AqvnTnIQ0PSXTz9S5MQ4RrIvMC3eNLcBf99voU.jar
caused by:
rpc error: code = Canceled desc = Server sendMessage() failed with Error; failed to retrieve chunk for /tmp/1-1/staged/beam-sdks-java-io-snowflake-expansion-service-2.43.0-4Bf-1AqvnTnIQ0PSXTz9S5MQ4RrIvMC3eNLcBf99voU.jar
caused by:
rpc error: code = Canceled desc = Server sendMessage() failed with Error; failed to retrieve chunk for /tmp/1-1/staged/beam-sdks-java-io-snowflake-expansion-service-2.43.0-4Bf-1AqvnTnIQ0PSXTz9S5MQ4RrIvMC3eNLcBf99voU.jar
caused by:
rpc error: code = Internal desc = unexpected EOF; failed to retrieve chunk for /tmp/1-1/staged/beam-sdks-java-io-snowflake-expansion-service-2.43.0-4Bf-1AqvnTnIQ0PSXTz9S5MQ4RrIvMC3eNLcBf99voU.jar
caused by:
rpc error: code = Internal desc = unexpected EOF
I have started the expansion service using beam-sdks-java-io-snowflake-expansion-service-2.43.0.jar. I am unsure why the service isn't retrieved.
Any help?
A:
It seems that you are using FlinkRunner, and it means that it will use non-portable Flink.
I have checked with a committer that works on portability (@chamikara) and he suggested using the PortableRunner if you need portability.
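A minimal sketch of what the pipeline options could look like with the portable runner (the job_endpoint value is an assumption and must point at wherever your Beam job server is reachable):
options = [
    "--runner=PortableRunner",
    "--job_endpoint=localhost:8099",          # Beam job server (sidecar) address - assumed
    "--environment_type=EXTERNAL",
    "--environment_config=localhost:50000",   # external Python SDK worker
]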
| Apache Beam Python SDK on Flink Runner to use Snowflake IO | I have an existing Flink cluster in k8s.
I am using Flink's session mode.
I want to set up a periodic ETL job from Snowflake using Apache Beam. Thus, I have tried to use from apache_beam.io.snowflake import ReadFromSnowflake. I am aware that an expansion service is required, but here is where I am struggling.
I have set up Python worker, and a Bean flink runner job server (version 2.43.0) as a sidecar in k8s pod.
I have passed these pipeline options.
[
"--runner=FlinkRunner",
"--flink_version=1.15",
"--flink_master=http://{host}:8081",
"--environment_type=EXTERNAL",
"--environment_config=localhost:50000",
"--flink_submit_uber_jar",
]
However, I am seeing the following logs.
2022/12/03 11:28:26 Failed to retrieve staged files: failed to retrieve /tmp/1-1/staged in 3 attempts: failed to retrieve chunk for /tmp/1-1/staged/beam-sdks-java-io-snowflake-expansion-service-2.43.0-4Bf-1AqvnTnIQ0PSXTz9S5MQ4RrIvMC3eNLcBf99voU.jar
caused by:
rpc error: code = Canceled desc = Server sendMessage() failed with Error; failed to retrieve chunk for /tmp/1-1/staged/beam-sdks-java-io-snowflake-expansion-service-2.43.0-4Bf-1AqvnTnIQ0PSXTz9S5MQ4RrIvMC3eNLcBf99voU.jar
caused by:
rpc error: code = Canceled desc = Server sendMessage() failed with Error; failed to retrieve chunk for /tmp/1-1/staged/beam-sdks-java-io-snowflake-expansion-service-2.43.0-4Bf-1AqvnTnIQ0PSXTz9S5MQ4RrIvMC3eNLcBf99voU.jar
caused by:
rpc error: code = Internal desc = unexpected EOF; failed to retrieve chunk for /tmp/1-1/staged/beam-sdks-java-io-snowflake-expansion-service-2.43.0-4Bf-1AqvnTnIQ0PSXTz9S5MQ4RrIvMC3eNLcBf99voU.jar
caused by:
rpc error: code = Internal desc = unexpected EOF
I have started the expansion service using beam-sdks-java-io-snowflake-expansion-service-2.43.0.jar. I am unsure why the service isn't retrieved.
Any help?
| [
"It seems that you are using FlinkRunner, and it means that it will use non-portable Flink.\nI have checked with a committer that works on portability (@chamikara) and he suggested using the PortableRunner if you need portability.\n"
] | [
0
] | [] | [] | [
"apache_beam",
"apache_flink",
"python",
"snowflake_cloud_data_platform"
] | stackoverflow_0074666364_apache_beam_apache_flink_python_snowflake_cloud_data_platform.txt |
Q:
Cloud Computing Heuristic (Greedy) and Genetic algorithms for task scheduling
Hello everyone, could somebody give me a hand with solving this coding (C++, Python) problem for cloud computing task scheduling using heuristic (greedy) and genetic algorithms? I have no clue how to write the code; I have searched Google for code to inspire me to tackle it.
The problem is:
Problem: Task Scheduling in Cloud Computing
Suppose you are given N tasks and M virtual machines (VMs). Each task has a specified size
(unit: million instructions) and deadline (unit: seconds). Each VM also has a main characteristic called
processing speed (unit: million instructions per second). The task scheduling problem can be defined
as follows:
Mapping tasks to VMs in such a way that the completion time of the last task, i.e., Makespan, is
optimized while meeting the deadline of the tasks. If the deadline of a task is not met, that task must
be rejected.
Solve and implement this problem using the following algorithms:
A) An efficient heuristic algorithm
b) Genetic Algorithm (GA)
Finally, put the results of these two algorithms into an Excel file so that we can compare them
with each other. In order to increase the reliability of the results, please run each algorithm 10 times
and report the average, maximum and minimum values.
Choose the size of the tasks randomly from the range [1000-1000].
The deadline of the tasks should be chosen randomly from the interval [10-60].
The processing speed of virtual machines should be randomly selected from the range [2000-8000].
First scenario: consider the number of tasks from 50 to 300 with steps of 50 and the number of VMs
equal to 15.
Second scenario: Consider the number of VMs from 5 to 30 with steps of 5 and the number of tasks
equal to 200.
For the genetic algorithm, consider the initial population size equal to 20 and the number of
iterations equal to 100.
I have no clue where I have to start!
A:
Here is a simple outline for implementing the task scheduling problem using heuristic and genetic algorithms in C++ and Python.
For the heuristic algorithm, you can start by creating a class or struct to represent each task and VM. This class or struct should have fields for the task size, deadline, and processing speed of the VM.
Next, you can implement the heuristic algorithm itself. This will involve mapping tasks to VMs in such a way that the completion time of the last task (makespan) is optimized while meeting the deadline of the tasks. One possible approach is to sort the tasks by deadline and then assign them to the fastest VMs until they are all mapped. If a task cannot be mapped before its deadline, it must be rejected.
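For instance, a minimal greedy sketch in Python (the tuple layout of tasks and the choice of picking the VM that finishes the task soonest, rather than strictly the fastest VM, are assumptions):
def greedy_schedule(tasks, vm_speeds):
    # tasks: list of (size_mi, deadline_s); vm_speeds: list of MIPS values
    vm_free = [0.0] * len(vm_speeds)               # time at which each VM becomes idle
    rejected = []
    for size, deadline in sorted(tasks, key=lambda t: t[1]):   # earliest deadline first
        # pick the VM on which this task would finish soonest
        best = min(range(len(vm_speeds)),
                   key=lambda v: vm_free[v] + size / vm_speeds[v])
        finish = vm_free[best] + size / vm_speeds[best]
        if finish <= deadline:
            vm_free[best] = finish                 # accept the task on that VM
        else:
            rejected.append((size, deadline))      # deadline cannot be met: reject
    return max(vm_free), rejected                  # makespan, rejected tasks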
To evaluate the performance of the heuristic algorithm, you can run it multiple times and keep track of the average, maximum, and minimum makespan values. You can also print these values to an Excel file for comparison with the results of the genetic algorithm.
For the genetic algorithm, you can start by implementing a function to generate a random initial population of task-to-VM mappings. This function should create a set of mappings that meet the deadlines of the tasks and have a reasonable makespan.
Next, you can implement the genetic algorithm itself. This will involve applying genetic operators (such as crossover and mutation) to the population of mappings to produce new, improved mappings. You can use a fitness function to evaluate the quality of each mapping and select the best ones for reproduction.
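A compact sketch of such a GA loop on the same representation (a chromosome maps task index to VM index; the fitness penalty and operator choices here are assumptions, not a prescribed design):
import random

def fitness(chromosome, tasks, vm_speeds):
    # lower is better: makespan plus a large penalty for every missed deadline
    vm_free = [0.0] * len(vm_speeds)
    penalty = 0
    for (size, deadline), vm in zip(tasks, chromosome):
        vm_free[vm] += size / vm_speeds[vm]
        if vm_free[vm] > deadline:
            penalty += 1000
    return max(vm_free) + penalty

def evolve(tasks, vm_speeds, pop_size=20, generations=100):
    n_tasks, n_vms = len(tasks), len(vm_speeds)
    pop = [[random.randrange(n_vms) for _ in range(n_tasks)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, tasks, vm_speeds))
        survivors = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_tasks)     # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:              # mutation: reassign one task
                child[random.randrange(n_tasks)] = random.randrange(n_vms)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda c: fitness(c, tasks, vm_speeds))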
As with the heuristic algorithm, you can evaluate the performance of the genetic algorithm by running it multiple times and keeping track of the average, maximum, and minimum makespan values. You can then print these values to an Excel file for comparison with the results of the heuristic algorithm.
Overall, this problem can be challenging, but it is a good opportunity to learn about algorithms for task scheduling and optimization. I hope this outline helps you get started!
| Cloud Computing Heuristic (Greedy) and Genetic algorithms for task scheduling | Hello Everyone if somebody give me a hand by solving this coding (C++, Python) problem for cloud computing task scheduling by Heuristic (Greedy) and Genetic algorithms I have no clue how to write the code I have surfed through Google to find a code inspire me tackle the problem:
the proble is:
Problem: Task Scheduling in Cloud Computing
Suppose you are given N tasks and M virtual machines (VMs). Each task has a specified size
(unit: million instructions) and deadline (unit: seconds). Each VM also has a main characteristic called
processing speed (unit: million instructions per second). The task scheduling problem can be defined
as follows:
Mapping tasks to VMs in such a way that the completion time of the last task, i.e., Makespan, is
optimized while meeting the deadline of the tasks. If the deadline of a task is not met, that task must
be rejected.
Solve and implement this problem using the following algorithms:
A) An efficient heuristic algorithm
b) Genetic Algorithm (GA)
Finally, put the results of these two algorithms into an Excel file so that we can compare them
with each other. In order to increase the reliability of the results, please run each algorithm 10 times
and report the average, maximum and minimum values.
Choose the size of the tasks randomly from the range [1000-1000].
The deadline of the tasks should be chosen randomly from the interval [10-60].
The processing speed of virtual machines should be randomly selected from the range [2000-8000].
First scenario: consider the number of tasks from 50 to 300 with steps of 50 and the number of VMs
equal to 15.
Second scenario: Consider the number of VMs from 5 to 30 with steps of 5 and the number of tasks
equal to 200.
For the genetic algorithm, consider the initial population size equal to 20 and the number of
iterations equal to 100.
I have no clue where I have to start!
| [
"Here is a simple outline for implementing the task scheduling problem using heuristic and genetic algorithms in C++ and Python.\nFor the heuristic algorithm, you can start by creating a class or struct to represent each task and VM. This class or struct should have fields for the task size, deadline, and processing speed of the VM.\nNext, you can implement the heuristic algorithm itself. This will involve mapping tasks to VMs in such a way that the completion time of the last task (makespan) is optimized while meeting the deadline of the tasks. One possible approach is to sort the tasks by deadline and then assign them to the fastest VMs until they are all mapped. If a task cannot be mapped before its deadline, it must be rejected.\nTo evaluate the performance of the heuristic algorithm, you can run it multiple times and keep track of the average, maximum, and minimum makespan values. You can also print these values to an Excel file for comparison with the results of the genetic algorithm.\nFor the genetic algorithm, you can start by implementing a function to generate a random initial population of task-to-VM mappings. This function should create a set of mappings that meet the deadlines of the tasks and have a reasonable makespan.\nNext, you can implement the genetic algorithm itself. This will involve applying genetic operators (such as crossover and mutation) to the population of mappings to produce new, improved mappings. You can use a fitness function to evaluate the quality of each mapping and select the best ones for reproduction.\nAs with the heuristic algorithm, you can evaluate the performance of the genetic algorithm by running it multiple times and keeping track of the average, maximum, and minimum makespan values. You can then print these values to an Excel file for comparison with the results of the heuristic algorithm.\nOverall, this problem can be challenging, but it is a good opportunity to learn about algorithms for task scheduling and optimization. I hope this outline helps you get started!\n"
] | [
0
] | [] | [] | [
"c++",
"cloud",
"genetic_algorithm",
"greedy",
"python"
] | stackoverflow_0074678875_c++_cloud_genetic_algorithm_greedy_python.txt |
Q:
How to generate arbitrary high dimensional connectivity structures for scipy.ndimage.label
I have some high dimensional boolean data, in this example an array with 4 dimensions, but this is arbitrary:
X.shape
(3, 2, 66, 241)
I want to group the dataset into connected regions of True values, which can be done with scipy.ndimage.label, with the aid of a connectivity structure which says which points in the array should be considered to touch. The default 2-D structure is a cross:
[[0,1,0],
[1,1,1],
[0,1,0]]
Which can be easily extended to high dimensions if all those dimensions are connected. However I want to programmatically generate such a structure where I have a list of which dims are connected to which:
#We want to find connections across dims 2 and 3 across each slice of dims 0 and 1:
dim_connections=[[0],[1],[2,3]]
#Now we want two separate connected subspaces in our data:
dim_connections=[[0,1],[2,3]]
For individual cases I can work out, with some hard thinking, how to generate the correct structuring element, but I am struggling to work out the general rule! For clarity I want something like:
my_structure = construct_arbitrary_structure(ndim, dim_connections)
the_correct_result=scipy.ndimage.label(X,structure=my_structure)
A:
The key to constructing an arbitrary structure for scipy.ndimage.label is to understand the concept of a neighborhood. A neighborhood is a set of points in the data that are considered to be connected. For example, in a 2D array, the neighborhood of a point (x,y) is the set of points {(x-1,y-1), (x-1,y), (x-1,y+1), (x,y-1), (x,y), (x,y+1), (x+1,y-1), (x+1,y), (x+1,y+1)}.
In order to construct an arbitrary structure for scipy.ndimage.label, we need to define a neighborhood for each point in the data. To do this, we need to define a set of connections between the dimensions of the data. For example, if we have a 4D array, and we want to connect dimensions 0 and 1, and dimensions 2 and 3, then our set of connections would be [[0,1],[2,3]].
Once we have defined our set of connections, we can construct our structure tensor. The structure tensor is a 3D array, where the first two dimensions correspond to the dimensions of the data, and the third dimension corresponds to the connections between the dimensions. For example, if we have a 4D array, and we want to connect dimensions 0 and 1, and dimensions 2 and 3, then our structure tensor would be of size (4,4,2).
The structure tensor is constructed by setting the elements of the third dimension to 1 if the corresponding dimensions are connected, and 0 otherwise. For example, if we have a 4D array, and we want to connect dimensions 0 and 1, and dimensions 2 and 3, then our structure tensor would be:
[[[1, 0],
[0, 0],
[0, 1],
[0, 0]],
[[0, 0],
[1, 0],
[0, 1],
[0, 0]],
[[0, 1],
[0, 0],
[1, 0],
[0, 0]],
[[0, 0],
[0, 0],
[0, 1],
[1, 0]]]
Once we have constructed our structure tensor, we can pass it to scipy.ndimage.label to generate the connected regions of our data.
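Note that scipy.ndimage.label itself expects a structuring element of shape (3,)*ndim, the same rank as the data. A minimal sketch of one possible reading of a dim_connections group, a cross that only connects points along a chosen subset of dimensions, could look like this:
import numpy as np
from scipy import ndimage

def cross_structure(ndim, connected_dims):
    # True at the centre plus the +/-1 offsets along each connected dimension
    structure = np.zeros((3,) * ndim, dtype=bool)
    centre = (1,) * ndim
    structure[centre] = True
    for d in connected_dims:
        for off in (0, 2):
            idx = list(centre)
            idx[d] = off
            structure[tuple(idx)] = True
    return structure

# e.g. connect only dims 2 and 3; dims 0 and 1 are then treated as independent slices
labels, n = ndimage.label(X, structure=cross_structure(4, [2, 3]))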
A:
This should work for you
def construct_arbitrary_structure(ndim, dim_connections):
#Create structure array
structure = np.zeros([3] * ndim, dtype=int)
#Fill structure array
for d in dim_connections:
if len(d) > 1:
# Set the connection between multiple dimensions
for i in range(ndim):
# Create a unit vector
u = np.zeros(ndim, dtype=int)
u[i] = 1
# Create a mask by adding the connection between multiple dimensions
M = np.zeros([3] * ndim, dtype=int)
for j in d:
M += np.roll(u, j)
structure += M
else:
# Set the connection for one dimension
u = np.zeros(ndim, dtype=int)
u[d[0]] = 1
structure += u
#Make sure it's symmetric
for i in range(ndim):
structure += np.roll(structure, 1, axis=i)
return structure
A:
To construct the arbitrary structure with the desired connectivity, you can use a loop to iterate over the dimensions and create a (3,)*ndim structure that uses the default 2-D structure in the dimensions that need to be connected, and a 1-D structure (i.e. a line) in the other dimensions. For example:
import numpy as np
from scipy import ndimage
# Create the default 2-D structure
default_structure = ndimage.generate_binary_structure(2, 1)
def construct_arbitrary_structure(ndim, dim_connections):
# Create an empty structure
structure = np.zeros((3,)*ndim, dtype=np.int8)
for dims in dim_connections:
# Loop over the dimensions that need to be connected
for d in dims:
# If the dimension is the first or last in the list, use a 1-D line
# Otherwise, use the default 2-D structure
if d == dims[0]:
structure[(slice(None),)*d + (1,)] = 1
elif d == dims[-1]:
structure[(slice(None),)*d + (1,)] = 1
else:
structure[(slice(None),)*d + (slice(None),)] = default_structure
return structure
Example usage
dim_connections = [[0],[1],[2,3]]
my_structure = construct_arbitrary_structure(4, dim_connections)
print(my_structure)
To use the structure with scipy.ndimage.label to find connected regions in your data, you can use the structure parameter to specify the structure you want to use for the labeling. For example:
import numpy as np
from scipy import ndimage
# Generate some random 4-dimensional boolean data
X = np.random.random((3, 2, 66, 241)) > 0.5
# Define the structure you want to use
dim_connections = [[0, 1], [2, 3]]
my_structure = construct_arbitrary_structure(X.ndim, dim_connections)
# Use scipy.ndimage.label to find connected regions in your data
labeled_regions, num_labels = ndimage.label(X, structure=my_structure)
| How to generate arbitrary high dimensional connectivity structures for scipy.ndimage.label | I have some high dimensional boolean data, in this example an array with 4 dimensions, but this is arbitrary:
X.shape
(3, 2, 66, 241)
I want to group the dataset into connected regions of True values, which can be done with scipy.ndimage.label, with the aid of a connectivity structure which says which points in the array should be considered to touch. The default 2-D structure is a cross:
[[0,1,0],
[1,1,1],
[0,1,0]]
Which can be easily extended to high dimensions if all those dimensions are connected. However I want to programmatically generate such a structure where I have a list of which dims are connected to which:
#We want to find connections across dims 2 and 3 across each slice of dims 0 and 1:
dim_connections=[[0],[1],[2,3]]
#Now we want two separate connected subspaces in our data:
dim_connections=[[0,1],[2,3]]
For individual cases I can work out with hard-thinking how to generate the correct structuring element, but I am struggling to work out the general rule! For clarity I want something like:
mystructure=construct_arbitrary_structure(ndim, dim_connections)
the_correct_result=scipy.ndimage.label(X,structure=my_structure)
| [
"The key to constructing an arbitrary structure for scipy.ndimage.label is to understand the concept of a neighborhood. A neighborhood is a set of points in the data that are considered to be connected. For example, in a 2D array, the neighborhood of a point (x,y) is the set of points {(x-1,y-1), (x-1,y), (x-1,y+1), (x,y-1), (x,y), (x,y+1), (x+1,y-1), (x+1,y), (x+1,y+1)}.\nIn order to construct an arbitrary structure for scipy.ndimage.label, we need to define a neighborhood for each point in the data. To do this, we need to define a set of connections between the dimensions of the data. For example, if we have a 4D array, and we want to connect dimensions 0 and 1, and dimensions 2 and 3, then our set of connections would be [[0,1],[2,3]].\nOnce we have defined our set of connections, we can construct our structure tensor. The structure tensor is a 3D array, where the first two dimensions correspond to the dimensions of the data, and the third dimension corresponds to the connections between the dimensions. For example, if we have a 4D array, and we want to connect dimensions 0 and 1, and dimensions 2 and 3, then our structure tensor would be of size (4,4,2).\nThe structure tensor is constructed by setting the elements of the third dimension to 1 if the corresponding dimensions are connected, and 0 otherwise. For example, if we have a 4D array, and we want to connect dimensions 0 and 1, and dimensions 2 and 3, then our structure tensor would be:\n[[[1, 0],\n [0, 0],\n [0, 1],\n [0, 0]],\n \n [[0, 0],\n [1, 0],\n [0, 1],\n [0, 0]],\n \n [[0, 1],\n [0, 0],\n [1, 0],\n [0, 0]],\n \n [[0, 0],\n [0, 0],\n [0, 1],\n [1, 0]]]\n\nOnce we have constructed our structure tensor, we can pass it to scipy.ndimage.label to generate the connected regions of our data.\n",
"This should work for you\n\ndef construct_arbitrary_structure(ndim, dim_connections):\n #Create structure array\n structure = np.zeros([3] * ndim, dtype=int)\n\n #Fill structure array\n for d in dim_connections:\n if len(d) > 1:\n # Set the connection between multiple dimensions\n for i in range(ndim):\n # Create a unit vector\n u = np.zeros(ndim, dtype=int)\n u[i] = 1\n\n # Create a mask by adding the connection between multiple dimensions\n M = np.zeros([3] * ndim, dtype=int)\n for j in d:\n M += np.roll(u, j)\n structure += M\n else:\n # Set the connection for one dimension\n u = np.zeros(ndim, dtype=int)\n u[d[0]] = 1\n structure += u\n\n #Make sure it's symmetric\n for i in range(ndim):\n structure += np.roll(structure, 1, axis=i)\n\n return structure\n\n",
"o construct the arbitrary structure with the desired connectivity, you can use a loop to iterate over the dimensions and create a 3-D structure that is the default 2-D structure in the dimensions that need to be connected, and a 1-D structure (i.e. a line) in the other dimensions. For example:\nimport numpy as np\nfrom scipy import ndimage\n\n# Create the default 2-D structure\ndefault_structure = ndimage.generate_binary_structure(2, 1)\n\ndef construct_arbitrary_structure(ndim, dim_connections):\n# Create an empty structure\n structure = np.zeros((3,)*ndim, dtype=np.int8)\n for dims in dim_connections:\n # Loop over the dimensions that need to be connected\n for d in dims:\n # If the dimension is the first or last in the list, use a 1-D line\n # Otherwise, use the default 2-D structure\n if d == dims[0]:\n structure[(slice(None),)*d + (1,)] = 1\n elif d == dims[-1]:\n structure[(slice(None),)*d + (1,)] = 1\n else:\n structure[(slice(None),)*d + (slice(None),)] = default_structure\n return structure\n\nExample usage\ndim_connections = [[0],[1],[2,3]]\nmy_structure = construct_arbitrary_structure(4, dim_connections)\nprint(my_structure)\n\nTo use the structure with scipy.ndimage.label to find connected regions in your data, you can use the structure parameter to specify the structure you want to use for the labeling. For example:\nimport numpy as np\nfrom scipy import ndimage\n\n# Generate some random 4-dimensional boolean data\nX = np.random.random((3, 2, 66, 241)) > 0.5\n\n# Define the structure you want to use\ndim_connections = [[0, 1], [2, 3]]\nmy_structure = construct_arbitrary_structure(X.ndim, dim_connections)\n\n# Use scipy.ndimage.label to find connected regions in your data\nlabeled_regions, num_labels = ndimage.label(X, structure=my_structure)\n\n"
] | [
0,
0,
0
] | [] | [] | [
"image_processing",
"ndimage",
"python",
"scipy"
] | stackoverflow_0074564292_image_processing_ndimage_python_scipy.txt |
Q:
PyQt5: How to install/run Qt Designer
Feeling really stupid, right now, but the title says it all:
How do you start the QtDesigner?
I've installed PyQt5 via pip and I believe to have identified the directory it's been installed in as
C:\Users\%username%\AppData\Local\Programs\Python\Python36\Lib\site-packages\PyQt5
Now what? There are a lot of .pyd files, some .dll's, too, but nothing executable (well, except a QtWebEngineProcess.exe in ...\site-packages\PyQt5\Qt\bin, but that doesn't sound like what I'm looking for).
A:
I struggled with this as well. The pyqt5-tools approach is cumbersome so I created a standalone installer for Qt Designer. It's only 40 MB. Maybe you will find it useful!
A:
If you are working in a Python virtual environment, then in the command window
>>qt5-tools designer
can open the designer window.
A:
The Qt designer is not installed with the pip installation.
You can either download the full download from sourceforge (probably won't be the last pyqt release, and might be buggy on presence of another installation, like yours) or install it with another (unofficial) pypi package - pyqt5-tools (pip install pyqt5-tools), then run the designer from the following subpath of your python directory -
...\Python36\Lib\site-packages\pyqt5-tools\designer\designer.exe
A:
The latest PyQt5 wheels (which can be installed via pip) only contain what's necessary for running applications, and don't include the dev tools. This applies to PyQt versions 5.7 and later. For PyQt versions 5.6 and earlier, there are binary packages for Windows that also include the dev tools, and these are still available at sourceforge. The maintainer of PyQt does not plan on making any further releases of such binary packages, though - only the runtime wheels will now be made available, and there will be no official wheels for the dev tools.
In light of this, someone has created an unofficial pyqt5-tools wheel (for Windows only). This appears to be in its early stages, though, and so may not keep up with recent PyQt5 releases. This means that it may not always be possible to install it via pip. If that is the case, as a work-around, the wheel files can be treated as zip files and the contents extracted to a suitable location. This should then allow you to run the designer.exe file that is in the pyqt5-tools/designer folder.
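As a rough illustration of that work-around (the wheel filename below is only a placeholder; use whichever pyqt5-tools wheel you actually downloaded), a .whl file can be unpacked with the standard zipfile module:
import zipfile

# A wheel is just a zip archive, so it can be extracted directly
wheel = "pyqt5_tools-5.15.4.3.2-py3-none-win_amd64.whl"  # placeholder filename
with zipfile.ZipFile(wheel) as wf:
    wf.extractall("pyqt5-tools-extracted")
The designer.exe mentioned above should then be somewhere under the extracted directory.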
Finally, note that you will also see some zip and tar.gz files at sourceforge for PyQt5. These only contain the source code, though, so will be no use to you unless you intend to compile PyQt5 yourself. And just to be clear: compiling from source still would not give you all the Qt dev tools. If you go down that route, you would need to install the whole Qt development kit separately as well (which would then get you the dev tools).
A:
pip install pyqt5-tools
Then restart the cmd, just type "designer" and press enter.
A:
If you cannot see the Designer , just look into this path "Lib\site-packages\qt5_applications\Qt\bin" for designer.exe and run it.
A:
pip install pyqt5-tools
working in python 3.7.4
won't work in python 3.8.0
A:
PyQt5 works after pip install PyQt5Designer
A:
You can also install Qt Designer the following way:
Install latest Qt (I'm using 5.8) from Qt main site
Make sure you include "Qt 5.8 MinGW" component
Qt Designer will be installed in C:\Qt\5.8\mingw53_32\bin\designer.exe
Note that the executable is named "designer.exe"
A:
Download the module using pip:
pip install PyQt5Designer
Then, for anaconda users, open:
C:\ProgramData\AnacondaX\Lib\site-packages\QtDesigner\designer.exe
For python users:
64-bit:
C:\Program Files\PythonXX\Lib\site-packages\QtDesigner\designer.exe
32-bit:
C:\Program Files (x86)\PythonXX\Lib\site-packages\QtDesigner\designer.exe
A:
For anyone stumbling across this post in 2021+ and finding the answers outdated: QT Designer is now in the qt5-applications package, under Qt\bin\. On Windows this means the default path, for CPython 3.9 using the Python.org installer, is %APPDATA%\Python\Python39\site-packages\qt5_applications\Qt\bin\designer.exe.
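If you would rather locate that executable programmatically than hard-code the path, a small sketch (this assumes the package is importable as qt5_applications, which matches its directory name under site-packages):
import os
import qt5_applications

# Designer ships inside the package's Qt/bin directory
designer = os.path.join(os.path.dirname(qt5_applications.__file__), "Qt", "bin", "designer.exe")
print(designer)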
A:
Try using:
pip install pyqt5-tools
Now you'd find the designer in site-packages/pyqt5-tools.
A:
If you are installing the pyqt5-tools then you can find the designer.exe file inside:
<python_installation>\Lib\site-packages\Qt
If you cannot locate the file or have any issues opening this directly, then open a command prompt and type:
<python_installation>\Scripts\pyqt5designer.exe
A:
For Qt Designer 6 this worked for me thanks for that protip from @Bhaskar
pip install pyqt6-tools
Then started:
qt6-tools designer
End up with nice working lightweight Qt Designer 6.0.1 version
@ pip install pyqt6-tools
Collecting pyqt6-tools
Using cached pyqt6_tools-6.1.0.3.2-py3-none-any.whl (29 kB)
Collecting pyqt6-plugins<6.1.0.3,>=6.1.0.2.2
Downloading pyqt6_plugins-6.1.0.2.2-cp39-cp39-manylinux2014_x86_64.whl (77 kB)
|████████████████████████████████| 77 kB 492 kB/s
Collecting python-dotenv
Using cached python_dotenv-0.19.2-py2.py3-none-any.whl (17 kB)
Collecting pyqt6==6.1.0
Downloading PyQt6-6.1.0-cp36.cp37.cp38.cp39-abi3-manylinux_2_28_x86_64.whl (6.8 MB)
|████████████████████████████████| 6.8 MB 1.0 MB/s
Requirement already satisfied: click in ./.pyenv/versions/3.9.6/lib/python3.9/site-packages (from pyqt6-tools) (8.0.1)
Collecting PyQt6-sip<14,>=13.1
Downloading PyQt6_sip-13.2.0-cp39-cp39-manylinux1_x86_64.whl (307 kB)
|████████████████████████████████| 307 kB 898 kB/s
Collecting PyQt6-Qt6>=6.1.0
Using cached PyQt6_Qt6-6.2.2-py3-none-manylinux_2_28_x86_64.whl (50.0 MB)
Collecting qt6-tools<6.1.0.2,>=6.1.0.1.2
Downloading qt6_tools-6.1.0.1.2-py3-none-any.whl (13 kB)
Collecting click
Downloading click-7.1.2-py2.py3-none-any.whl (82 kB)
|████████████████████████████████| 82 kB 381 kB/s
Collecting qt6-applications<6.1.0.3,>=6.1.0.2.2
Downloading qt6_applications-6.1.0.2.2-py3-none-manylinux2014_x86_64.whl (80.5 MB)
|████████████████████████████████| 80.5 MB 245 kB/s
Installing collected packages: qt6-applications, PyQt6-sip, PyQt6-Qt6, click, qt6-tools, pyqt6, python-dotenv, pyqt6-plugins, pyqt6-tools
Attempting uninstall: click
Found existing installation: click 8.0.1
Uninstalling click-8.0.1:
Successfully uninstalled click-8.0.1
Successfully installed PyQt6-Qt6-6.2.2 PyQt6-sip-13.2.0 click-7.1.2 pyqt6-6.1.0 pyqt6-plugins-6.1.0.2.2 pyqt6-tools-6.1.0.3.2 python-dotenv-0.19.2 qt6-applications-6.1.0.2.2 qt6-tools-6.1.0.1.2
A:
I was having the same problem; however, I was able to install it using the same command pattern as the Pygame module installation, changing some information:
pygame:
py -m pip install -U pygame --user
PyQt5:
py -m pip install -U pyqt5-tools --user
A:
you should find it here if you're using anaconda
C:\Users\%username%\anaconda3\envs\untitled\Lib\site-packages\qt5_applications\Qt\bin
A:
By far the easiest way to do this is to use this installer:
https://build-system.fman.io/qt-designer-download
It seems as though the other answers here are now out of date, not to mention confusing for someone who is just starting out with this. Sourceforge no longer has this package, I installed the tools as suggested but nothing appeared in the scripts folder, and none of the pip commands above worked either.
A:
In a Windows terminal, activate the virtual env where you have installed PyQt5, then just type designer.
You can create a shortcut by finding its path with where designer
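The same lookup can be done from Python with shutil.which, which searches the active environment's PATH:
import shutil

# Full path to the designer executable, or None if it is not on PATH
print(shutil.which("designer"))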
| PyQt5: How to install/run Qt Designer | Feeling really stupid, right now, but the title says it all:
How do you start the QtDesigner?
I've installed PyQt5 via pip and I believe to have identified the directory it's been installed in as
C:\Users\%username%\AppData\Local\Programs\Python\Python36\Lib\site-packages\PyQt5
Now what? There are a lot of .pyd files, some .dll's, too, but nothing executable (well, except a QtWebEngineProcess.exe in ...\site-packages\PyQt5\Qt\bin, but that doesn't sound like what I'm looking for).
| [
"I struggled with this as well. The pyqt5-tools approach is cumbersome so I created a standalone installer for Qt Designer. It's only 40 MB. Maybe you will find it useful!\n",
"If you are working in python virtual environment, in the command window\n>>qt5-tools designer\n\ncan open designer window.\n",
"The Qt designer is not installed with the pip installation.\nYou can either download the full download from sourceforge (probably won't be the last pyqt release, and might be buggy on presence of another installation, like yours) or install it with another (unofficial) pypi package - pyqt5-tools (pip install pyqt5-tools), then run the designer from the following subpath of your python directory - \n...\\Python36\\Lib\\site-packages\\pyqt5-tools\\designer\\designer.exe\n\n",
"The latest PyQt5 wheels (which can be installed via pip) only contain what's necessary for running applications, and don't include the dev tools. This applies to PyQt versions 5.7 and later. For PyQt versions 5.6 and earlier, there are binary packages for Windows that also include the dev tools, and these are still available at sourceforge. The maintainer of PyQt does not plan on making any further releases of such binary packages, though - only the runtime wheels will now be made available, and there will be no official wheels for the dev tools.\nIn light of this, someone has created an unofficial pyqt5-tools wheel (for Windows only). This appears to be in it's early stages, though, and so may not keep up with recent PyQt5 releases. This means that it may not always be possible to install it via pip. If that is the case, as a work-around, the wheel files can be treated as zip files and the contents extracted to a suitable location. This should then allow you to run the designer.exe file that is in the pyqt5-tools/designer folder.\nFinally, note that you will also see some zip and tar.gz files at sourceforge for PyQt5. These only contain the source code, though, so will be no use to you unless you intend to compile PyQt5 yourself. And just to be clear: compiling from source still would not give you all the Qt dev tools. If you go down that route, you would need to install the whole Qt development kit separately as well (which would then get you the dev tools).\n",
"pip install pyqt5-tools\n\nThen restart the cmd, just type \"designer\" and press enter.\n",
"If you cannot see the Designer , just look into this path \"Lib\\site-packages\\qt5_applications\\Qt\\bin\" for designer.exe and run it.\n",
"\npip install pyqt5-tools \n\nworking in python 3.7.4\nwont work in python 3.8.0 \n",
"PyQt5 works after pip install PyQt5Designer\n",
"You can also install Qt Designer the following way:\n\nInstall latest Qt (I'm using 5.8) from Qt main site\nMake sure you include \"Qt 5.8 MinGW\" component\nQt Designer will be installed in C:\\Qt\\5.8\\mingw53_32\\bin\\designer.exe\nNote that the executable is named \"designer.exe\"\n\n",
"Download the module using pip:\npip install PyQt5Designer\n\nThen, for anaconda users, open:\nC:\\ProgramData\\AnacondaX\\Lib\\site-packages\\QtDesigner\\designer.exe\n\nFor python users:\n\n64-bit:\nC:\\Program Files\\PythonXX\\Lib\\site-packages\\QtDesigner\\designer.exe\n\n\n32-bit:\nC:\\Program Files (x86)\\PythonXX\\Lib\\site-packages\\QtDesigner\\designer.exe\n\n\n\n",
"For anyone stumbling across this post in 2021+ and finding the answers outdated: QT Designer is now in the qt5-applications package, under Qt\\bin\\. On Windows this means the default path, for CPython 3.9 using the Python.org installer, is %APPDATA%\\Python\\Python39\\site-packages\\qt5_applications\\Qt\\bin\\designer.exe.\n",
"Try using:\npip install pyqt5-tools\n\nNow you'd find the designer in site-packages/pyqt5-tools.\n",
"If you are installing the pyqt5-tools then you can find the designer.exe file inside:\n<python_installation>\\Lib\\site-packages\\Qt\n\nIf you cannot locate the file or have any issues opening this directly, then open a command prompt and type:\n<python_installation>\\Scripts\\pyqt5designer.exe\n\n",
"For Qt Designer 6 this worked for me thanks for that protip from @Bhaskar\npip install pyqt6-tools\n\nThen started:\nqt6-tools designer\n\nEnd up with nice working lightweight Qt Designer 6.0.1 version\n\n@ pip install pyqt6-tools\nCollecting pyqt6-tools\n Using cached pyqt6_tools-6.1.0.3.2-py3-none-any.whl (29 kB)\nCollecting pyqt6-plugins<6.1.0.3,>=6.1.0.2.2\n Downloading pyqt6_plugins-6.1.0.2.2-cp39-cp39-manylinux2014_x86_64.whl (77 kB)\n |████████████████████████████████| 77 kB 492 kB/s \nCollecting python-dotenv\n Using cached python_dotenv-0.19.2-py2.py3-none-any.whl (17 kB)\nCollecting pyqt6==6.1.0\n Downloading PyQt6-6.1.0-cp36.cp37.cp38.cp39-abi3-manylinux_2_28_x86_64.whl (6.8 MB)\n |████████████████████████████████| 6.8 MB 1.0 MB/s \nRequirement already satisfied: click in ./.pyenv/versions/3.9.6/lib/python3.9/site-packages (from pyqt6-tools) (8.0.1)\nCollecting PyQt6-sip<14,>=13.1\n Downloading PyQt6_sip-13.2.0-cp39-cp39-manylinux1_x86_64.whl (307 kB)\n |████████████████████████████████| 307 kB 898 kB/s \nCollecting PyQt6-Qt6>=6.1.0\n Using cached PyQt6_Qt6-6.2.2-py3-none-manylinux_2_28_x86_64.whl (50.0 MB)\nCollecting qt6-tools<6.1.0.2,>=6.1.0.1.2\n Downloading qt6_tools-6.1.0.1.2-py3-none-any.whl (13 kB)\nCollecting click\n Downloading click-7.1.2-py2.py3-none-any.whl (82 kB)\n |████████████████████████████████| 82 kB 381 kB/s \nCollecting qt6-applications<6.1.0.3,>=6.1.0.2.2\n Downloading qt6_applications-6.1.0.2.2-py3-none-manylinux2014_x86_64.whl (80.5 MB)\n |████████████████████████████████| 80.5 MB 245 kB/s \nInstalling collected packages: qt6-applications, PyQt6-sip, PyQt6-Qt6, click, qt6-tools, pyqt6, python-dotenv, pyqt6-plugins, pyqt6-tools\n Attempting uninstall: click\n Found existing installation: click 8.0.1\n Uninstalling click-8.0.1:\n Successfully uninstalled click-8.0.1\nSuccessfully installed PyQt6-Qt6-6.2.2 PyQt6-sip-13.2.0 click-7.1.2 pyqt6-6.1.0 pyqt6-plugins-6.1.0.2.2 pyqt6-tools-6.1.0.3.2 python-dotenv-0.19.2 qt6-applications-6.1.0.2.2 qt6-tools-6.1.0.1.2\n\n\n",
"I was having the same problem, however I was able to install using the Pygame module installation code, changing some information:\npygame:\npy -m pip install -U pygame --user\n\nPyQt5:\npy -m pip install -U pyqt5-tools --user\n\n",
"you should find it here if your using anaconda\nC:\\Users\\%username%\\anaconda3\\envs\\untitled\\Lib\\site-packages\\qt5_applications\\Qt\\bin\n\n",
"By far the easiest way to do this is to use this installer:\nhttps://build-system.fman.io/qt-designer-download\nIt seems as though the other answers here are now out of date, not to mention confusing for someone who is just starting out with this. Sourceforge no longer has this package, I installed the tools as suggested but nothing appeared in the scripts folder, and none of the pip commands above worked either.\n",
"In a Windows' terminal, activate your virtual env where you have installed PyQt5 then just type designer.\nYou can create a shortcut by finding its path with where designer\n"
] | [
36,
32,
22,
20,
19,
7,
5,
5,
4,
4,
4,
2,
2,
2,
0,
0,
0,
0
] | [] | [] | [
"pyqt5",
"python",
"qt",
"qt_designer"
] | stackoverflow_0042090739_pyqt5_python_qt_qt_designer.txt |
Q:
Selenium (Python) (Firefox) is unable to write Firefox profile. Permission denied (os error 13)
When I run my Python code using Selenium to open (and scrape) a website, using a profile parameter, I get the following error message:
selenium.common.exceptions.SessionNotCreatedException: Message: Failed to set preferences: Unable to write Firefox profile: Permission denied (os error 13)
I use the following code:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
from selenium.webdriver.firefox.options import Options
from datetime import datetime
f = open("/home/username/Documents/AH/log2.txt", "a")
f.write('gestart')
f.close()
# I use the below code when I use crontab, read below..
#from pyvirtualdisplay import Display
#display = Display(visible=0, size=(2880, 1800)).start()
website = "https://www.ah.nl/"
options=Options()
options.add_argument('-profile')
options.add_argument(r'/home/username/.mozilla/firefox/n2n9yy4r.default-release')
driver = webdriver.Firefox(options=options, service_log_path="/home/username/geckodriver.log")
driver.implicitly_wait(20)
driver.get(website)
tijd = WebDriverWait(driver,40).until(EC.presence_of_element_located((By.XPATH,"//*[@id='navigation-header']/div[1]/div[1]/div/div/p/a")))
tijdbezorging = tijd.text
driver.close()
driver.quit()
f = open("/home/username/Documents/AH/testlog.txt", "a")
f.write(tijdbezorging)
f.close()
The full error message when I run this code using python3 filenamecode.py:
SendAHStatus.py:46: DeprecationWarning: service_log_path has been deprecated, please pass in a Service object
driver = webdriver.Firefox(options=options, service_log_path="/home/username/geckodriver.log")
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
Traceback (most recent call last):
File "SendAHStatus.py", line 46, in <module>
driver = webdriver.Firefox(options=options, service_log_path="/home/username/geckodriver.log")
File "/usr/local/lib/python3.8/dist-packages/selenium/webdriver/firefox/webdriver.py", line 197, in __init__
super().__init__(command_executor=executor, options=options, keep_alive=True)
File "/usr/local/lib/python3.8/dist-packages/selenium/webdriver/remote/webdriver.py", line 288, in __init__
self.start_session(capabilities, browser_profile)
File "/usr/local/lib/python3.8/dist-packages/selenium/webdriver/remote/webdriver.py", line 381, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/usr/local/lib/python3.8/dist-packages/selenium/webdriver/remote/webdriver.py", line 444, in execute
self.error_handler.check_response(response)
File "/usr/local/lib/python3.8/dist-packages/selenium/webdriver/remote/errorhandler.py", line 249, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.SessionNotCreatedException: Message: Failed to set preferences: Unable to write Firefox profile: Permission denied (os error 13)
The code DOES work when I schedule it using sudo crontab -e. Yes, using sudo. See code below:
SHELL=/bin/bash
PATH=/usr/local/bin/:usr/bin: and some paths
01 01 * * * python3 myscriptname.py
And, the code also works (via my command line) without the profile parameter. I also tried using a different profile path like, /home/username/profiledir.
This also works via sudo crontab, but not via my regular command line.
Btw, I uncomment the virtual display when using cron.
I could run my code using sudo python3 myscript.py. But then I get this message:
Running Firefox as root in a regular user's session is not supported. ($XAUTHORITY is /home/username/.Xauthority which is owned by username.)
When changing ownership of XAUTHORITY to root, I can run my script, but I don't want to go down that path.
What I can derive from all of this:
The webdriver is working,
It's a permission issue, since the code runs with elevated permissions when scheduled via cron.
Please help me with this, it's getting frustrating.
My environment:
Selenium 4.6.0
Python 3.8
Firefox 107.0
Ubuntu 20.04.5 LTS
Your help is much appreciated!
A:
This error message is telling you that your Python script is unable to write to the Firefox profile that you specified. This could be because the permissions on the profile directory are not set correctly, or because your script is running as a different user than the one who owns the Firefox profile.
One way to fix this would be to make sure that your script has the correct permissions to write to the Firefox profile directory. You can do this by changing the ownership of the profile directory to the user that your script is running as, or by changing the permissions on the directory to allow writing by other users.
Another option would be to specify the path to a different Firefox profile that your script has permission to write to, or to create a new Firefox profile and use that instead.
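As a rough sketch of that second option (the profile path is the one from the question, and whether Firefox behaves identically with a copied profile depends on your setup), you could check writability and, if needed, copy the profile somewhere the script can definitely write:
import os
import shutil
import tempfile
from selenium.webdriver.firefox.options import Options

src_profile = "/home/username/.mozilla/firefox/n2n9yy4r.default-release"

# Quick diagnostic: can the current user actually write to the original profile?
print("profile writable:", os.access(src_profile, os.W_OK))

# Copy the profile into a throw-away directory owned by the current user
tmp_profile = os.path.join(tempfile.mkdtemp(prefix="ff-profile-"), "profile")
shutil.copytree(src_profile, tmp_profile)

options = Options()
options.add_argument('-profile')
options.add_argument(tmp_profile)
Copying a large profile on every run adds overhead, so fixing the ownership/permissions of the original directory is usually the cleaner long-term fix.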
You may also want to update your code to handle this error more gracefully, by catching the SessionNotCreatedException and either retrying the operation or handling the error in some other way.
Here is an example of how you could modify your code to handle this error:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
from selenium.webdriver.firefox.options import Options
from datetime import datetime
from selenium.common.exceptions import SessionNotCreatedException
# Open the log file for writing
f = open("/home/username/Documents/AH/log2.txt", "a")
f.write('gestart')
f.close()
# Set the URL of the website to scrape
website = "https://www.ah.nl/"
# Create the Firefox options object
options = Options()
# Set the Firefox profile to use
options.add_argument('-profile')
options.add_argument(r'/home/username/.mozilla/firefox/n2n9yy4r.default-release')
# Initialise these so the cleanup and logging below are safe even if the session fails
driver = None
tijdbezorging = ""

# Try to create a new Firefox session using the specified options
try:
# Create a new Firefox webdriver
driver = webdriver.Firefox(options=options, service_log_path="/home/username/geckodriver.log")
# Set the implicit wait timeout to 20 seconds
driver.implicitly_wait(20)
# Navigate to the website
driver.get(website)
# Wait for the element with the specified XPath to be present on the page
tijd = WebDriverWait(driver,40).until(EC.presence_of_element_located((By.XPATH,"//*[@id='navigation-header']/div[1]/div[1]/div/div/p/a")))
# Get the text of the element
tijdbezorging = tijd.text
except SessionNotCreatedException:
# Handle the error by retrying the operation or logging the error
# You could also try using a different Firefox profile or creating a new one
print("Failed to create Firefox session. Retrying...")
# Retry the operation here
finally:
    # Close the Firefox session if one was actually created
    if driver is not None:
        driver.close()
        driver.quit()
# Open the log file for writing
f = open("/home/username/Documents/AH/testlog.txt", "a")
# Write the element text to the log file
f.write(tijdbezorging)
# Close the log file
f.close()
In this example, the code is enclosed in a try block, and the SessionNotCreatedException is caught in the except block. You can then handle the error by retrying the operation or logging the error, and then close the Firefox session and write to the log file in the finally block.
| Selenium (Python) (Firefox) is unable to write Firefox profile. Permission denied (os error 13) | When I run my Python code using Selenium to open (and scrape) a website, using a profile parameter, I get the following error message:
selenium.common.exceptions.SessionNotCreatedException: Message: Failed to set preferences: Unable to write Firefox profile: Permission denied (os error 13)
I use the following code:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
from selenium.webdriver.firefox.options import Options
from datetime import datetime
f = open("/home/username/Documents/AH/log2.txt", "a")
f.write('gestart')
f.close()
# I use the below code when I use crontab, read below..
#from pyvirtualdisplay import Display
#display = Display(visible=0, size=(2880, 1800)).start()
website = "https://www.ah.nl/"
options=Options()
options.add_argument('-profile')
options.add_argument(r'/home/username/.mozilla/firefox/n2n9yy4r.default-release')
driver = webdriver.Firefox(options=options, service_log_path="/home/username/geckodriver.log")
driver.implicitly_wait(20)
driver.get(website)
tijd = WebDriverWait(driver,40).until(EC.presence_of_element_located((By.XPATH,"//*[@id='navigation-header']/div[1]/div[1]/div/div/p/a")))
tijdbezorging = tijd.text
driver.close()
driver.quit()
f = open("/home/username/Documents/AH/testlog.txt", "a")
f.write(tijdbezorging)
f.close()
The full error message when I run this code using python3 filenamecode.py:
SendAHStatus.py:46: DeprecationWarning: service_log_path has been deprecated, please pass in a Service object
driver = webdriver.Firefox(options=options, service_log_path="/home/username/geckodriver.log")
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
Traceback (most recent call last):
File "SendAHStatus.py", line 46, in <module>
driver = webdriver.Firefox(options=options, service_log_path="/home/username/geckodriver.log")
File "/usr/local/lib/python3.8/dist-packages/selenium/webdriver/firefox/webdriver.py", line 197, in __init__
super().__init__(command_executor=executor, options=options, keep_alive=True)
File "/usr/local/lib/python3.8/dist-packages/selenium/webdriver/remote/webdriver.py", line 288, in __init__
self.start_session(capabilities, browser_profile)
File "/usr/local/lib/python3.8/dist-packages/selenium/webdriver/remote/webdriver.py", line 381, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/usr/local/lib/python3.8/dist-packages/selenium/webdriver/remote/webdriver.py", line 444, in execute
self.error_handler.check_response(response)
File "/usr/local/lib/python3.8/dist-packages/selenium/webdriver/remote/errorhandler.py", line 249, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.SessionNotCreatedException: Message: Failed to set preferences: Unable to write Firefox profile: Permission denied (os error 13)
The code DOES work when I schedule it using sudo crontab -e. Yes, using sudo. See code below:
SHELL=/bin/bash
PATH=/usr/local/bin/:usr/bin: and some paths
01 01 * * * python3 myscriptname.py
And, the code also works (via my command line) without the profile parameter. I also tried using a different profile path like, /home/username/profiledir.
This also works via sudo crontab, but not via my regular command line.
Btw, I uncomment the virtual display when using cron.
I could run my code using sudo python3 myscript.py. But then I get this message:
Running Firefox as root in a regular user's session is not supported. ($XAUTHORITY is /home/username/.Xauthority which is owned by username.)
When changing ownership of XAUTHORITY to root, I can run my script, but I don't want to go down that path.
What I can derive from all of this:
The webdriver is working,
It's a permission issue, since the code runs with elevated permissions when scheduled via cron.
Please help me with this, it's getting frustrating.
My environment:
Selenium 4.6.0
Python 3.8
Firefox 107.0
Ubuntu 20.04.5 LTS
Your help is much appreciated!
| [
"This error message is telling you that your Python script is unable to write to the Firefox profile that you specified. This could be because the permissions on the profile directory are not set correctly, or because your script is running as a different user than the one who owns the Firefox profile.\nOne way to fix this would be to make sure that your script has the correct permissions to write to the Firefox profile directory. You can do this by changing the ownership of the profile directory to the user that your script is running as, or by changing the permissions on the directory to allow writing by other users.\nAnother option would be to specify the path to a different Firefox profile that your script has permission to write to, or to create a new Firefox profile and use that instead.\nYou may also want to update your code to handle this error more gracefully, by catching the SessionNotCreatedException and either retrying the operation or handling the error in some other way.\nHere is an example of how you could modify your code to handle this error:\nfrom selenium import webdriver\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support.wait import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\nimport time\nfrom selenium.webdriver.firefox.options import Options\nfrom datetime import datetime\nfrom selenium.common.exceptions import SessionNotCreatedException\n\n# Open the log file for writing\nf = open(\"/home/username/Documents/AH/log2.txt\", \"a\")\nf.write('gestart')\nf.close()\n\n# Set the URL of the website to scrape\nwebsite = \"https://www.ah.nl/\"\n\n# Create the Firefox options object\noptions = Options()\n# Set the Firefox profile to use\noptions.add_argument('-profile')\noptions.add_argument(r'/home/username/.mozilla/firefox/n2n9yy4r.default-release') \n\n# Try to create a new Firefox session using the specified options\ntry:\n # Create a new Firefox webdriver\n driver = webdriver.Firefox(options=options, service_log_path=\"/home/username/geckodriver.log\")\n # Set the implicit wait timeout to 20 seconds\n driver.implicitly_wait(20)\n # Navigate to the website\n driver.get(website)\n\n # Wait for the element with the specified XPath to be present on the page\n tijd = WebDriverWait(driver,40).until(EC.presence_of_element_located((By.XPATH,\"//*[@id='navigation-header']/div[1]/div[1]/div/div/p/a\")))\n # Get the text of the element\n tijdbezorging = tijd.text\nexcept SessionNotCreatedException:\n # Handle the error by retrying the operation or logging the error\n # You could also try using a different Firefox profile or creating a new one\n print(\"Failed to create Firefox session. Retrying...\")\n # Retry the operation here\nfinally:\n # Close the Firefox session\n driver.close()\n driver.quit()\n\n # Open the log file for writing\n f = open(\"/home/username/Documents/AH/testlog.txt\", \"a\")\n # Write the element text to the log file\n f.write(tijdbezorging)\n # Close the log file\n f.close()\n\nIn this example, the code is enclosed in a try block, and the SessionNotCreatedException is caught in the except block. You can then handle the error by retrying the operation or logging the error, and then close the Firefox session and write to the log file in the finally block.\n"
] | [
0
] | [] | [] | [
"firefox",
"linux",
"python",
"selenium",
"selenium_webdriver"
] | stackoverflow_0074651572_firefox_linux_python_selenium_selenium_webdriver.txt |
Q:
AttributeErrors: undesired interaction between @property and __getattr__
I have a problem with AttributeErrors raised in a @property in combination with __getattr__() in python:
Example code:
>>> def deeply_nested_factory_fn():
... a = 2
... return a.invalid_attr
...
>>> class Test(object):
... def __getattr__(self, name):
... if name == 'abc':
... return 'abc'
... raise AttributeError("'Test' object has no attribute '%s'" % name)
... @property
... def my_prop(self):
... return deeply_nested_factory_fn()
...
>>> test = Test()
>>> test.my_prop
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 5, in __getattr__
AttributeError: 'Test' object has no attribute 'my_prop'
In my case, this is a highly misleading error message, because it hides the fact that deeply_nested_factory_fn() has a mistake.
Based on the idea in Tadhg McDonald-Jensen's answer, my currently best solution is the following. Any hints on how to get rid of the __main__. prefix to AttributeError and the reference to attributeErrorCatcher in the traceback would be much appreciated.
>>> def catchAttributeErrors(func):
... AttributeError_org = AttributeError
... def attributeErrorCatcher(*args, **kwargs):
... try:
... return func(*args, **kwargs)
... except AttributeError_org as e:
... import sys
... class AttributeError(Exception):
... pass
... etype, value, tb = sys.exc_info()
... raise AttributeError(e).with_traceback(tb.tb_next) from None
... return attributeErrorCatcher
...
>>> def deeply_nested_factory_fn():
... a = 2
... return a.invalid_attr
...
>>> class Test(object):
... def __getattr__(self, name):
... if name == 'abc':
... # computing come other attributes
... return 'abc'
... raise AttributeError("'Test' object has no attribute '%s'" % name)
... @property
... @catchAttributeErrors
... def my_prop(self):
... return deeply_nested_factory_fn()
...
>>> class Test1(object):
... def __init__(self):
... test = Test()
... test.my_prop
...
>>> test1 = Test1()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in __init__
File "<stdin>", line 11, in attributeErrorCatcher
File "<stdin>", line 10, in my_prop
File "<stdin>", line 3, in deeply_nested_factory_fn
__main__.AttributeError: 'int' object has no attribute 'invalid_attr'
A:
If you're willing to exclusively use new-style classes, you could overload __getattribute__ instead of __getattr__:
class Test(object):
def __getattribute__(self, name):
if name == 'abc':
return 'abc'
else:
return object.__getattribute__(self, name)
@property
def my_prop(self):
return deeply_nested_factory_fn()
Now your stack trace will properly mention deeply_nested_factory_fn.
Traceback (most recent call last):
File "C:\python\myprogram.py", line 16, in <module>
test.my_prop
File "C:\python\myprogram.py", line 10, in __getattribute__
return object.__getattribute__(self, name)
File "C:\python\myprogram.py", line 13, in my_prop
return deeply_nested_factory_fn()
File "C:\python\myprogram.py", line 3, in deeply_nested_factory_fn
return a.invalid_attr
AttributeError: 'int' object has no attribute 'invalid_attr'
A:
You can create a custom Exception that appears to be an AttributeError but will not trigger __getattr__ since it is not actually an AttributeError.
UPDATED: the traceback message is greatly improved by reassigning the .__traceback__ attribute before re-raising the error:
class AttributeError_alt(Exception):
@classmethod
def wrapper(err_type, f):
"""wraps a function to reraise an AttributeError as the alternate type"""
@functools.wraps(f)
def alt_AttrError_wrapper(*args,**kw):
try:
return f(*args,**kw)
except AttributeError as e:
new_err = err_type(e)
new_err.__traceback__ = e.__traceback__.tb_next
raise new_err from None
return alt_AttrError_wrapper
Then when you define your property as:
@property
@AttributeError_alt.wrapper
def my_prop(self):
return deeply_nested_factory_fn()
and the error message you will get will look like this:
Traceback (most recent call last):
File ".../test.py", line 34, in <module>
test.my_prop
File ".../test.py", line 14, in alt_AttrError_wrapper
raise new_err from None
File ".../test.py", line 30, in my_prop
return deeply_nested_factory_fn()
File ".../test.py", line 20, in deeply_nested_factory_fn
return a.invalid_attr
AttributeError_alt: 'int' object has no attribute 'invalid_attr'
notice there is a line for raise new_err from None but it is above the lines from within the property call. There would also be a line for return f(*args,**kw) but that is omitted with .tb_next.
I am fairly sure the best solution to your problem has already been suggested and you can see the previous revision of my answer for why I think it is the best option. Although honestly if there is an error that is incorrectly being suppressed then raise a bloody RuntimeError chained to the one that would be hidden otherwise:
def assert_no_AttributeError(f):
@functools.wraps(f)
def assert_no_AttrError_wrapper(*args,**kw):
try:
return f(*args,**kw)
except AttributeError as e:
e.__traceback__ = e.__traceback__.tb_next
raise RuntimeError("AttributeError was incorrectly raised") from e
return assert_no_AttrError_wrapper
then if you decorate your property with this you will get an error like this:
Traceback (most recent call last):
File ".../test.py", line 27, in my_prop
return deeply_nested_factory_fn()
File ".../test.py", line 17, in deeply_nested_factory_fn
return a.invalid_attr
AttributeError: 'int' object has no attribute 'invalid_attr'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File ".../test.py", line 32, in <module>
x.my_prop
File ".../test.py", line 11, in assert_no_AttrError_wrapper
raise RuntimeError("AttributeError was incorrectly raised") from e
RuntimeError: AttributeError was incorrectly raised
Although if you expect more than just one thing to raise an AttributeError then you might want to just overload __getattribute__ to check for any peculiar error for all lookups:
def __getattribute__(self,attr):
try:
return object.__getattribute__(self,attr)
except AttributeError as e:
if str(e) == "{0.__class__.__name__!r} object has no attribute {1!r}".format(self,attr):
raise #normal case of "attribute not found"
else: #if the error message was anything else then it *causes* a RuntimeError
raise RuntimeError("Unexpected AttributeError") from e
This way when something goes wrong that you are not expecting you will know it right away!
A:
Just in case others find this: the problem with the example on top is that an AttributeError is raised inside __getattr__. Instead, one should call self.__getattribute__(attr) to let that raise.
Example
def deeply_nested_factory_fn():
a = 2
return a.invalid_attr
class Test(object):
def __getattr__(self, name):
if name == 'abc':
return 'abc'
return self.__getattribute__(name)
@property
def my_prop(self):
return deeply_nested_factory_fn()
test = Test()
test.my_prop
This yields
AttributeError Traceback (most recent call last)
Cell In [1], line 15
12 return deeply_nested_factory_fn()
14 test = Test()
---> 15 test.my_prop
Cell In [1], line 9, in Test.__getattr__(self, name)
7 if name == 'abc':
8 return 'abc'
----> 9 return self.__getattribute__(name)
Cell In [1], line 12, in Test.my_prop(self)
10 @property
11 def my_prop(self):
---> 12 return deeply_nested_factory_fn()
Cell In [1], line 3, in deeply_nested_factory_fn()
1 def deeply_nested_factory_fn():
2 a = 2
----> 3 return a.invalid_attr
AttributeError: 'int' object has no attribute 'invalid_attr'
| AttributeErrors: undesired interaction between @property and __getattr__ | I have a problem with AttributeErrors raised in a @property in combination with __getattr__() in python:
Example code:
>>> def deeply_nested_factory_fn():
... a = 2
... return a.invalid_attr
...
>>> class Test(object):
... def __getattr__(self, name):
... if name == 'abc':
... return 'abc'
... raise AttributeError("'Test' object has no attribute '%s'" % name)
... @property
... def my_prop(self):
... return deeply_nested_factory_fn()
...
>>> test = Test()
>>> test.my_prop
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 5, in __getattr__
AttributeError: 'Test' object has no attribute 'my_prop'
In my case, this is a highly misleading error message, because it hides the fact that deeply_nested_factory_fn() has a mistake.
Based on the idea in Tadhg McDonald-Jensen's answer, my currently best solution is the following. Any hints on how to get rid of the __main__. prefix to AttributeError and the reference to attributeErrorCatcher in the traceback would be much appreciated.
>>> def catchAttributeErrors(func):
... AttributeError_org = AttributeError
... def attributeErrorCatcher(*args, **kwargs):
... try:
... return func(*args, **kwargs)
... except AttributeError_org as e:
... import sys
... class AttributeError(Exception):
... pass
... etype, value, tb = sys.exc_info()
... raise AttributeError(e).with_traceback(tb.tb_next) from None
... return attributeErrorCatcher
...
>>> def deeply_nested_factory_fn():
... a = 2
... return a.invalid_attr
...
>>> class Test(object):
... def __getattr__(self, name):
... if name == 'abc':
... # computing come other attributes
... return 'abc'
... raise AttributeError("'Test' object has no attribute '%s'" % name)
... @property
... @catchAttributeErrors
... def my_prop(self):
... return deeply_nested_factory_fn()
...
>>> class Test1(object):
... def __init__(self):
... test = Test()
... test.my_prop
...
>>> test1 = Test1()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in __init__
File "<stdin>", line 11, in attributeErrorCatcher
File "<stdin>", line 10, in my_prop
File "<stdin>", line 3, in deeply_nested_factory_fn
__main__.AttributeError: 'int' object has no attribute 'invalid_attr'
| [
"If you're willing to exclusively use new-style classes, you could overload __getattribute__ instead of __getattr__:\nclass Test(object):\n def __getattribute__(self, name):\n if name == 'abc':\n return 'abc'\n else:\n return object.__getattribute__(self, name)\n @property\n def my_prop(self):\n return deeply_nested_factory_fn()\n\nNow your stack trace will properly mention deeply_nested_factory_fn.\nTraceback (most recent call last):\n File \"C:\\python\\myprogram.py\", line 16, in <module>\n test.my_prop\n File \"C:\\python\\myprogram.py\", line 10, in __getattribute__\n return object.__getattribute__(self, name)\n File \"C:\\python\\myprogram.py\", line 13, in my_prop\n return deeply_nested_factory_fn()\n File \"C:\\python\\myprogram.py\", line 3, in deeply_nested_factory_fn\n return a.invalid_attr\nAttributeError: 'int' object has no attribute 'invalid_attr'\n\n",
"You can create a custom Exception that appears to be an AttributeError but will not trigger __getattr__ since it is not actually an AttributeError.\nUPDATED: the traceback message is greatly improved by reassigning the .__traceback__ attribute before re-raising the error:\nclass AttributeError_alt(Exception):\n @classmethod\n def wrapper(err_type, f):\n \"\"\"wraps a function to reraise an AttributeError as the alternate type\"\"\"\n @functools.wraps(f)\n def alt_AttrError_wrapper(*args,**kw):\n try:\n return f(*args,**kw)\n except AttributeError as e:\n new_err = err_type(e)\n new_err.__traceback__ = e.__traceback__.tb_next\n raise new_err from None\n return alt_AttrError_wrapper\n\nThen when you define your property as:\n@property\n@AttributeError_alt.wrapper\ndef my_prop(self):\n return deeply_nested_factory_fn()\n\nand the error message you will get will look like this:\nTraceback (most recent call last):\n File \".../test.py\", line 34, in <module>\n test.my_prop\n File \".../test.py\", line 14, in alt_AttrError_wrapper\n raise new_err from None\n File \".../test.py\", line 30, in my_prop\n return deeply_nested_factory_fn()\n File \".../test.py\", line 20, in deeply_nested_factory_fn\n return a.invalid_attr\nAttributeError_alt: 'int' object has no attribute 'invalid_attr'\n\nnotice there is a line for raise new_err from None but it is above the lines from within the property call. There would also be a line for return f(*args,**kw) but that is omitted with .tb_next.\n\nI am fairly sure the best solution to your problem has already been suggested and you can see the previous revision of my answer for why I think it is the best option. Although honestly if there is an error that is incorrectly being suppressed then raise a bloody RuntimeError chained to the one that would be hidden otherwise:\ndef assert_no_AttributeError(f):\n @functools.wraps(f)\n def assert_no_AttrError_wrapper(*args,**kw):\n try:\n return f(*args,**kw)\n except AttributeError as e:\n e.__traceback__ = e.__traceback__.tb_next\n raise RuntimeError(\"AttributeError was incorrectly raised\") from e\n return assert_no_AttrError_wrapper\n\nthen if you decorate your property with this you will get an error like this:\nTraceback (most recent call last):\n File \".../test.py\", line 27, in my_prop\n return deeply_nested_factory_fn()\n File \".../test.py\", line 17, in deeply_nested_factory_fn\n return a.invalid_attr\nAttributeError: 'int' object has no attribute 'invalid_attr'\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \".../test.py\", line 32, in <module>\n x.my_prop\n File \".../test.py\", line 11, in assert_no_AttrError_wrapper\n raise RuntimeError(\"AttributeError was incorrectly raised\") from e\nRuntimeError: AttributeError was incorrectly raised\n\nAlthough if you expect more then just one thing to raise an AttributeError then you might want to just overload __getattribute__ to check for any peculiar error for all lookups:\ndef __getattribute__(self,attr):\n try:\n return object.__getattribute__(self,attr)\n except AttributeError as e:\n if str(e) == \"{0.__class__.__name__!r} object has no attribute {1!r}\".format(self,attr):\n raise #normal case of \"attribute not found\"\n else: #if the error message was anything else then it *causes* a RuntimeError\n raise RuntimeError(\"Unexpected AttributeError\") from e\n\nThis way when something goes wrong that you are not expecting you will know it right away!\n",
"Just in case others find this: the problem with the example on top is that an AttributeError is raised inside __getattr__. Instead, one should call self.__getattribute__(attr) to let that raise.\nExample\ndef deeply_nested_factory_fn():\n a = 2\n return a.invalid_attr\n\nclass Test(object):\n def __getattr__(self, name):\n if name == 'abc':\n return 'abc'\n return self.__getattribute__(name)\n @property\n def my_prop(self):\n return deeply_nested_factory_fn()\n\ntest = Test()\ntest.my_prop\n\nThis yields\nAttributeError Traceback (most recent call last)\nCell In [1], line 15\n 12 return deeply_nested_factory_fn()\n 14 test = Test()\n---> 15 test.my_prop\n\nCell In [1], line 9, in Test.__getattr__(self, name)\n 7 if name == 'abc':\n 8 return 'abc'\n----> 9 return self.__getattribute__(name)\n\nCell In [1], line 12, in Test.my_prop(self)\n 10 @property\n 11 def my_prop(self):\n---> 12 return deeply_nested_factory_fn()\n\nCell In [1], line 3, in deeply_nested_factory_fn()\n 1 def deeply_nested_factory_fn():\n 2 a = 2\n----> 3 return a.invalid_attr\n\nAttributeError: 'int' object has no attribute 'invalid_attr'\n\n"
] | [
3,
1,
0
] | [] | [] | [
"attributeerror",
"getattr",
"properties",
"python"
] | stackoverflow_0036575068_attributeerror_getattr_properties_python.txt |
Q:
Doing operations inside millions of csv files with python script
csv format
screenshot of my csv
So, as shown in the attached file (one among the millions of csv files I have), the data is all on one line with 13890 columns. I have to convert each file into just two columns with several lines, instead of the actual form, in order to keep only the date and its parameter. The conversion of each file works, but when I have to convert the 3 to 4 million files, even though the script runs correctly, the execution takes forever and will only finish after approximately 2 years. I need to convert all the files ASAP; how can I do it?
If someone could propose a fast script to convert, or just a method to speed up the script, it would be so nice. Thanks in advance.
I even thought about using pyarrow and also splitting the csv files into different folders, but nothing has changed; the converting speed is still too slow (the machine I am using has a 2TB SSD and 16 GB of RAM).
I have also tried the script on another PC with lower performance; the speed of the script execution is still the same!!!
Here is my script for those who asked it.
import os, sys, csv
op = 1
df_list_old = []
path_old_files = "D:/Pycharm/"
for file in os.listdir(path_old_files):
df_list_old.append(file)
for file in df_list_old:
fileExist = os.path.exists(file)
if fileExist is True:
print("File number " already converted")
op = op + 1
else:
print("File number ",op," name ",file)
with open(path_old_files+file, 'r') as file_to_convert:
reader = csv.reader(file_to_convert)
for item in reader:
for i in item:
if item.index(i) == 5:
print("Start of conversion operation")
data = int((item[5])[53:61]), float((item[5])[64:])
elif (item.index(i) > 5) and (item.index(i) < len(item)-14):
data = int((item[item.index(i)])[2:10]), float((item[item.index(i)])[13:])
elif item.index(i) == len(item)-14:
data = int((item[len(item)-14])[2:10]), float((item[len(item)-14])[13:17])
print("End of conversion operation")
with open(file, 'a', newline='') as csv_file:
if item.index(i) >= 5:
writer = csv.writer(csv_file)
writer.writerow(data)
op = op + 1
A:
So, there are a couple of ways to parse the file better:
First up you can use enumerate when you want access to the index as well as the value in an array.
Second there's no need to check every index because you already know which ones contain the data you need.
And for optimisation the biggest mistake was opening and closing the write_file for every line you wanted to write. Imagine writing a book but after every line you closed it just to open it again.
    with open(path_old_files+file, 'r') as file_to_convert:
        # create an array, we will store the output rows in here
        data = []
        reader = csv.reader(file_to_convert)
        for item in reader:
            # rather than checking every index, slice out just the fields we are going to use
            # (index 5 up to and including index len(item)-14, as in the original script)
            fields = item[5:len(item) - 13]
            # use enumerate to get the index within the slice as well as the value
            for index, value in enumerate(fields):
                if index == 0:
                    print("Start of conversion operation")
                    # the first field carries a longer prefix before the date
                    data.append((int(value[53:61]), float(value[64:])))
                elif index == len(fields) - 1:
                    # the last field only keeps 4 characters after the date
                    data.append((int(value[2:10]), float(value[13:17])))
                    print("End of conversion operation")
                else:
                    data.append((int(value[2:10]), float(value[13:])))
    # because we have collected the rows in a list we only have to open the output file once;
    # before, the file was opened and closed for every single row, which was the biggest inefficiency
    with open(file, 'a', newline='') as csv_file:
        writer = csv.writer(csv_file)
        writer.writerows(data)
I'm not entirely sure how you want the data to be outputted so mess around with the data.append. I have put the values in as tuple.
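Beyond the per-file I/O fix, each file's conversion is independent of the others, so a process pool can spread the work over all CPU cores. A rough sketch (convert_one is a hypothetical wrapper around the per-file read/convert/write logic shown above):
import os
from multiprocessing import Pool

path_old_files = "D:/Pycharm/"

def convert_one(filename):
    # wrap the per-file conversion logic from the snippet above
    ...

if __name__ == "__main__":
    # skip files whose output already exists, as in the original script
    todo = [f for f in os.listdir(path_old_files) if not os.path.exists(f)]
    with Pool() as pool:
        pool.map(convert_one, todo, chunksize=100)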
| Doing operations inside millions of csv files with python script | csv format
screenshot of my csv
So, as shown in the attached file (one among the millions of csv files I have), the data is all on one line with 13890 columns. I have to convert each file into just two columns with several lines, instead of the actual form, in order to keep only the date and its parameter. The conversion of each file works, but when I have to convert the 3 to 4 million files, even though the script runs correctly, the execution takes forever and will only finish after approximately 2 years. I need to convert all the files ASAP; how can I do it?
If someone could propose a fast script to convert, or just a method to speed up the script, it would be so nice. Thanks in advance.
I even thought about using pyarrow and also splitting the csv files into different folders, but nothing has changed; the converting speed is still too slow (the machine I am using has a 2TB SSD and 16 GB of RAM).
I have also tried the script on another PC with lower performance; the speed of the script execution is still the same!!!
Here is my script for those who asked it.
import os, sys, csv
op = 1
df_list_old = []
path_old_files = "D:/Pycharm/"
for file in os.listdir(path_old_files):
df_list_old.append(file)
for file in df_list_old:
fileExist = os.path.exists(file)
if fileExist is True:
print("File number " already converted")
op = op + 1
else:
print("File number ",op," name ",file)
with open(path_old_files+file, 'r') as file_to_convert:
reader = csv.reader(file_to_convert)
for item in reader:
for i in item:
if item.index(i) == 5:
print("Start of conversion operation")
data = int((item[5])[53:61]), float((item[5])[64:])
elif (item.index(i) > 5) and (item.index(i) < len(item)-14):
data = int((item[item.index(i)])[2:10]), float((item[item.index(i)])[13:])
elif item.index(i) == len(item)-14:
data = int((item[len(item)-14])[2:10]), float((item[len(item)-14])[13:17])
print("End of conversion operation")
with open(file, 'a', newline='') as csv_file:
if item.index(i) >= 5:
writer = csv.writer(csv_file)
writer.writerow(data)
op = op + 1
| [
"So, there are a couple of ways to parse the file better:\nFirst up you can use enumerate when you want access to the index as well as the value in an array.\nSecond there's no need to check every index because you already know which ones contain the data you need.\nAnd for optimisation the biggest mistake was opening and closing the write_file for every line you wanted to write. Imagine writing a book but after every line you closed it just to open it again.\n with open(path_old_files+file, 'r') as file_to_convert:\n # create an array, we will store the lines in here\n data = []\n reader = csv.reader(file_to_convert)\n for item in reader:\n # rather than checking each line, only check the lines we are going to use [start_index, end_index]\n # use enumerate to get the index as well as the value\n for index, value in enumerate(item[5: -14]):\n if index == 0:\n print(\"Start of conversion operation\")\n # add the line to the data array\n data.append((int(value[53:61]), float(value[64:])))\n elif index == len(item)-14:\n # add the line to the data array\n data.append((int(item[index][2:10]), float(item[index][13:17])))\n print(\"End of conversion operation\")\n else:\n # add the line to the data array\n data.append((int(item[index][2:10]), float(item[index][13:])))\n # because we have made an array of the lines we only have to open the file to write once, before you were\n # opening and closing the file for each line, this was probably the biggest inefficiency\n with open(file, 'a', newline='') as csv_file:\n for line in data:\n writer = csv.writer(csv_file)\n writer.writerow(line)\n\nI'm not entirely sure how you want the data to be outputted so mess around with the data.append. I have put the values in as tuple.\n"
] | [
1
] | [] | [] | [
"csv",
"database",
"pyarrow",
"pycharm",
"python"
] | stackoverflow_0074678238_csv_database_pyarrow_pycharm_python.txt |
Q:
Outer minimum vectorization in numpy
Given an NxM matrix A, I want to efficiently obtain the NxMxN tensor whose ith layer is the application of np.minimum between A and the ith row of A. Using a for loop:
> A = np.array([[1, 2], [3, 4], [5,6]])
> output = np.zeros(shape=(A.shape[0], A.shape[1], A.shape[0]))
> for i in range(A.shape[0]):
output[:, :, i] = np.minimum(A, A[i])
> output
array([[[1., 1., 1.],
[2., 2., 2.]],
[[1., 3., 3.],
[2., 4., 4.]],
[[1., 3., 5.],
[2., 4., 6.]]])
This is very slow so I would like to get rid of the for loop and vectorize it. I feel like there should be a general method that works with any function of a matrix and a vector not just, minimum. Using np.minimum.outer does not work as it outputs an order 4 tensor.
A:
With broadcasting we can make a (3,3,2) result:
In [153]: np.minimum(A[:,None,:],A[None,:,:])
Out[153]:
array([[[1, 2],
[1, 2],
[1, 2]],
[[1, 2],
[3, 4],
[3, 4]],
[[1, 2],
[3, 4],
[5, 6]]])
and then switch the last 2 dimensions to get the (3,2,3) you want:
In [154]: np.minimum(A[:,None,:],A[None,:,:]).transpose(0,2,1)
Out[154]:
array([[[1, 1, 1],
[2, 2, 2]],
[[1, 3, 3],
[2, 4, 4]],
[[1, 3, 5],
[2, 4, 6]]])
Or do the transpose first
In [155]: np.minimum(A[:,:,None],A.T[None,:,:]) # (3,2,1) (1,2,3)=>(3,2,3)
Out[155]:
array([[[1, 1, 1],
[2, 2, 2]],
[[1, 3, 3],
[2, 4, 4]],
[[1, 3, 5],
[2, 4, 6]]])
edit
Adding the sum is trivial:
In [157]: np.minimum(A[:,:,None],A.T[None,:,:]).sum(axis=1)
Out[157]:
array([[ 3, 3, 3],
[ 3, 7, 7],
[ 3, 7, 11]])
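The same broadcasting pattern is not tied to minimum; any binary NumPy ufunc can be dropped in, which covers the wish for a general method. A small sketch (np.maximum is just an example of another ufunc):
import numpy as np

A = np.array([[1, 2], [3, 4], [5, 6]])

def outer_apply(func, A):
    # (N, M, 1) broadcast against (1, M, N) applies func between every row pair,
    # giving the (N, M, N) tensor from the question
    return func(A[:, :, None], A.T[None, :, :])

out_min = outer_apply(np.minimum, A)   # same result as the loop in the question
out_max = outer_apply(np.maximum, A)   # any other binary ufunc works the same way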
A:
To efficiently obtain the tensor you described without using a for loop, you can use the np.einsum function. This function allows you to specify a summation operation using a simple string notation, which can be much faster than using a for loop.
import numpy as np
A = np.array([[1, 2], [3, 4], [5,6]])
output = np.einsum('ij,ik->ijk', A, A)
print(output)
This code will produce the same output as your original code, but without using a for loop.
The key to using np.einsum in this case is the string 'ij,ik->ijk'. This string specifies the summation operation that np.einsum will perform. The first two letters 'ij' specify the indices of the first operand (in this case, A), and the second two letters 'ik' specify the indices of the second operand (also A). Finally, the last three letters 'ijk' specify the indices of the output tensor.
In this case, the np.einsum function will compute the tensor by applying the minimum operator element-wise to the two operands. This is equivalent to your original code, but it is much faster because it uses a vectorized operation instead of a for loop.
| Outer minimum vectorization in numpy | Given an NxM matrix A, I want to efficiently obtain the NxMxN tensor whose ith layer is the application of np.minimum between A and the ith row of A. Using a for loop:
> A = np.array([[1, 2], [3, 4], [5,6]])
> output = np.zeros(shape=(A.shape[0], A.shape[1], A.shape[0]))
> for i in range(A.shape[0]):
output[:, :, i] = np.minimum(A, A[i])
> output
array([[[1., 1., 1.],
[2., 2., 2.]],
[[1., 3., 3.],
[2., 4., 4.]],
[[1., 3., 5.],
[2., 4., 6.]]])
This is very slow so I would like to get rid of the for loop and vectorize it. I feel like there should be a general method that works with any function of a matrix and a vector not just, minimum. Using np.minimum.outer does not work as it outputs an order 4 tensor.
| [
"With broadcasting we can make a (3,3,2) result:\nIn [153]: np.minimum(A[:,None,:],A[None,:,:])\nOut[153]: \narray([[[1, 2],\n [1, 2],\n [1, 2]],\n\n [[1, 2],\n [3, 4],\n [3, 4]],\n\n [[1, 2],\n [3, 4],\n [5, 6]]])\n\nand then switch the last 2 dimensions to get the (3,2,3) you want:\nIn [154]: np.minimum(A[:,None,:],A[None,:,:]).transpose(0,2,1)\nOut[154]: \narray([[[1, 1, 1],\n [2, 2, 2]],\n\n [[1, 3, 3],\n [2, 4, 4]],\n\n [[1, 3, 5],\n [2, 4, 6]]])\n\nOr do the transpose first\nIn [155]: np.minimum(A[:,:,None],A.T[None,:,:]) # (3,2,1) (1,2,3)=>(3,2,3)\nOut[155]: \narray([[[1, 1, 1],\n [2, 2, 2]],\n\n [[1, 3, 3],\n [2, 4, 4]],\n\n [[1, 3, 5],\n [2, 4, 6]]])\n\nedit\nAdding the sum is trivial:\nIn [157]: np.minimum(A[:,:,None],A.T[None,:,:]).sum(axis=1)\nOut[157]: \narray([[ 3, 3, 3],\n [ 3, 7, 7],\n [ 3, 7, 11]])\n\n",
"To efficiently obtain the tensor you described without using a for loop, you can use the np.einsum function. This function allows you to specify a summation operation using a simple string notation, which can be much faster than using a for loop.\nimport numpy as np\n\n\nA = np.array([[1, 2], [3, 4], [5,6]])\n\n\noutput = np.einsum('ij,ik->ijk', A, A)\n\n\nprint(output)\n\nThis code will produce the same output as your original code, but without using a for loop.\nThe key to using np.einsum in this case is the string 'ij,ik->ijk'. This string specifies the summation operation that np.einsum will perform. The first two letters 'ij' specify the indices of the first operand (in this case, A), and the second two letters 'ik' specify the indices of the second operand (also A). Finally, the last three letters 'ijk' specify the indices of the output tensor.\nIn this case, the np.einsum function will compute the tensor by applying the minimum operator element-wise to the two operands. This is equivalent to your original code, but it is much faster because it uses a vectorized operation instead of a for loop.\n"
] | [
1,
0
] | [] | [] | [
"numpy",
"python",
"vectorization"
] | stackoverflow_0074678481_numpy_python_vectorization.txt |
Q:
Python Threading vs Multiprocessing to improve REST API responsiveness "fire and forget" tasks
I am somewhat new to both threading and multiprocessing in Python, as well as dealing with the concept of the GIL. I have a situation where I have time consuming fire and forget tasks that I need the server to run, but the server should immediately reply to the client and basically be like "okay, your thing was submitted" so that the client does not hang waiting for the thing to complete. An example of what one of the "things" might do is pull down some data from a database or two, compare that data, and then write the result to another database. The databases are remote, not locally on the same host as the server itself. Another example, is crunching some data and then sending a text as a result of that. The client does not care about the data, but someone will receive a text later with some information that is the result of the data crunching from the various dictionaries and database entries. However, there could be many such requests pouring in from many clients. The goal here is to spawn a thread, or process that essentially kills itself because we don't care at all about returning any data from it.
At a glance, my understanding is that both multiprocessing and threading can achieve similar results for this use case. My main concerns are that I can immediately launch the function to go do its own thing and return to the client quickly so it does not hang. There are many, many requests coming in simultaneously from many, many clients in this scenario. As a result, my understanding is that multiprocessing may be better, so that these tasks would not need to be executed as sequential threads because of the GIL. However, I am unsure of how to make the processes end themselves when they are done with their task rather than needing to wait for them.
An example of the problem
@route('/api/example', methods=["POST"])
def example_request(self, request):
request_data = request.get_json()
crunch_data_and_send_text(request_data) # Takes maybe 5-10 seconds, doesn't return data
return # Return to client. Would like to return to client immediately rather than waiting
Would threading or multiprocessing be better here? And how can I make the process (or thread) .join() itself effectively when it is done rather than needing to join it before I can return to the client.
I have also considered asyncio which I think would allow something that would also improve this, but the existing codebase I have inherited is so large that it is infeasible to rewrite in async for the time being, and library replacements may need to be found in that case, so it is not an option.
#Threading
from threading import Thread
@route('/api/example', methods=["POST"])
def example_request(self, request):
request_data = request.get_json()
fire_and_forget = Thread(target = crunch_data_and_send_text, args=(request_data,))
fire_and_forget.start()
return # Return to client. Would like to return to client immediately rather than waiting
# Multiprocessing
from multiprocessing import Process
@route('/api/example', methods=["POST"])
def example_request(self, request):
request_data = request.get_json()
fire_and_forget = Process(target = crunch_data_and_send_text, args=(request_data,))
fire_and_forget.start()
return # Return to client. Would like to return to client immediately rather than waiting
Which of these is better for this use case? Is there a way I can have them .join() themselves automatically when they finish rather than needing to actually sit here in the function and wait for them to complete before returning to the client?
To be clear, asyncio is unfortunately NOT an option for me.
A:
I suggest using Advance Python Scheduler.
Instead of running your function in a thread, schedule it to run and immediately return to client.
After setting up your flask app, setup Flask-APScheduler and then schedule your function to run in the background.
from apscheduler.schedulers.background import BackgroundScheduler
scheduler = BackgroundScheduler({
--- setup the scheduler ---
    })
scheduler.start()  # the scheduler has to be started before any scheduled job will run
@route('/api/example', methods=["POST"])
def example_request(self, request):
request_data = request.get_json()
job = scheduler.add_job(crunch_data_and_send_text, 'date', run_date=datetime.utcnow())
return "The request is being processed ..."
to pass arguments to crunch_data_and_send_text you can do:
lambda: crunch_data_and_send_text(request_data)
here is the User Guide
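Passing the request data through the scheduler itself is another option; APScheduler's add_job accepts args and kwargs, so (assuming the same crunch_data_and_send_text and request_data from the question) a minimal sketch would be:
from datetime import datetime
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()
scheduler.start()

@route('/api/example', methods=["POST"])
def example_request(self, request):
    request_data = request.get_json()
    # args is forwarded to crunch_data_and_send_text when the job fires,
    # so no lambda is needed to capture request_data
    scheduler.add_job(crunch_data_and_send_text, 'date',
                      run_date=datetime.utcnow(), args=[request_data])
    return "The request is being processed ..."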
| Python Threading vs Multiprocessing to improve REST API responsiveness "fire and forget" tasks | I am somewhat new to both threading and multiprocessing in Python, as well as dealing with the concept of the GIL. I have a situation where I have time consuming fire and forget tasks that I need the server to run, but the server should immediately reply to the client and basically be like "okay, your thing was submitted" so that the client does not hang waiting for the thing to complete. An example of what one of the "things" might do is pull down some data from a database or two, compare that data, and then write the result to another database. The databases are remote, not locally on the same host as the server itself. Another example, is crunching some data and then sending a text as a result of that. The client does not care about the data, but someone will receive a text later with some information that is the result of the data crunching from the various dictionaries and database entries. However, there could be many such requests pouring in from many clients. The goal here is to spawn a thread, or process that essentially kills itself because we don't care at all about returning any data from it.
At a glance, my understanding is that both multiprocessing and threading can achieve similar results for this use case. My main concerns are that I can immediately launch the function to go do its own thing and return to the client quickly so it does not hang. There are many, many requests coming in simultaneously from many, many clients in this scenario. As a result, my understanding is that multiprocessing may be better, so that these tasks would not need to be executed as sequential threads because of the GIL. However, I am unsure of how to make the processes end themselves when they are done with their task rather than needing to wait for them.
An example of the problem
@route('/api/example', methods=["POST"])
def example_request(self, request):
request_data = request.get_json()
crunch_data_and_send_text(request_data) # Takes maybe 5-10 seconds, doesn't return data
return # Return to client. Would like to return to client immediately rather than waiting
Would threading or multiprocessing be better here? And how can I make the process (or thread) .join() itself effectively when it is done rather than needing to join it before I can return to the client.
I have also considered asyncio which I think would allow something that would also improve this, but the existing codebase I have inherited is so large that it is infeasible to rewrite in async for the time being, and library replacements may need to be found in that case, so it is not an option.
#Threading
from threading import Thread
@route('/api/example', methods=["POST"])
def example_request(self, request):
request_data = request.get_json()
fire_and_forget = Thread(target = crunch_data_and_send_text, args=(request_data,))
fire_and_forget.start()
return # Return to client. Would like to return to client immediately rather than waiting
# Multiprocessing
from multiprocessing import Process
@route('/api/example', methods=["POST"])
def example_request(self, request):
request_data = request.get_json()
fire_and_forget = Process(target = crunch_data_and_send_text, args=(request_data,))
fire_and_forget.start()
return # Return to client. Would like to return to client immediately rather than waiting
Which of these is better for this use case? Is there a way I can have them .join() themselves automatically when they finish rather than needing to actually sit here in the function and wait for them to complete before returning to the client?
To be clear, asyncio is unfortunately NOT an option for me.
| [
"I suggest using Advance Python Scheduler.\nInstead of running your function in a thread, schedule it to run and immediately return to client.\nAfter setting up your flask app, setup Flask-APScheduler and then schedule your function to run in the background.\nfrom apscheduler.schedulers.background import BackgroundScheduler\nscheduler = BackgroundScheduler({\n --- setup the scheduler ---\n })\n\n@route('/api/example', methods=[\"POST\"])\ndef example_request(self, request):\n request_data = request.get_json()\n job = scheduler.add_job(crunch_data_and_send_text, 'date', run_date=datetime.utcnow())\n return \"The request is being processed ...\"\n\nto pass arguments to crunch_data_and_send_text you can do:\nlambda: crunch_data_and_send_text(request_data)\n\nhere is the User Guide\n"
] | [
1
] | [] | [] | [
"flask",
"multiprocessing",
"multithreading",
"optimization",
"python"
] | stackoverflow_0074663169_flask_multiprocessing_multithreading_optimization_python.txt |
Q:
how to tell if a button hasnt been pressed
button = Button(style = discord.ButtonStyle.green, emoji = ":arrow_backward:", custom_id = "button1")
view = View()
view.add_item(button)
async def button_callback(interaction):
await message.edit(content="**response 1**")
button.callback = button_callback
await message.edit(content="⠀⠀:watermelon:⠀⠀⠀⠀⠀:watermelon:⠀⠀⠀⠀⠀:watermelon:", view=view)
i want to be able to check if the user has pressed the button, and then do something if the button hasnt been pressed after a certain amount of time. how can i check if the button hasnt been pressed?
A:
Try this solution
# Set a timer to check if the button has not been pressed
import asyncio

time_limit = 15  # Set the amount of time to wait before checking
button_pressed = False  # set this to True inside button_callback when the button is clicked

async def check_button():
    await asyncio.sleep(time_limit)  # Wait the designated amount of time
    if not button_pressed:  # The callback never ran, so the button was not pressed
        await message.edit(content="**response 2**")  # Do something if the button has not been pressed

# Start the timer
asyncio.create_task(check_button())
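Another option, if the View class from the question is discord.ui.View (as it is in pycord), is to lean on the view's own timeout and on_timeout hook instead of a manual timer. A rough sketch under that assumption, reusing the message and button objects from the question:
import discord

class TimedView(discord.ui.View):
    def __init__(self):
        # stop listening for interactions after 15 seconds
        super().__init__(timeout=15)

    async def on_timeout(self):
        # runs when the timeout elapses without the view being stopped,
        # i.e. when nobody pressed the button in time
        await message.edit(content="**response 2**")

view = TimedView()
view.add_item(button)  # the same Button defined in the question
Calling view.stop() inside button_callback prevents on_timeout from firing once the button has been pressed.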
| how to tell if a button hasnt been pressed | button = Button(style = discord.ButtonStyle.green, emoji = ":arrow_backward:", custom_id = "button1")
view = View()
view.add_item(button)
async def button_callback(interaction):
await message.edit(content="**response 1**")
button.callback = button_callback
await message.edit(content="⠀⠀:watermelon:⠀⠀⠀⠀⠀:watermelon:⠀⠀⠀⠀⠀:watermelon:", view=view)
i want to be able to check if the user has pressed the button, and then do something if the button hasnt been pressed after a certain amount of time. how can i check if the button hasnt been pressed?
| [
"Try this solution\n\n# Set a timer to check if the button has not been pressed\nimport asyncio\n\ntime_limit = 15 # Set the amount of time to wait before checking\n\nasync def check_button():\n await asyncio.sleep(time_limit) # Wait the designated amount of time\n if not button.pressed: # Check if the button has not been pressed\n await message.edit(content=\"**response 2**\") # Do something if the button has not been pressed\n\n# Start the timer\nasyncio.create_task(check_button())\n\n"
] | [
0
] | [] | [] | [
"discord",
"pycord",
"python"
] | stackoverflow_0074678952_discord_pycord_python.txt |
Q:
Turn values from string to integers in JSON file python
I'm trying to change values in a JSON file from strings to integers, my issue is that the keys are row numbers so I can't call by key name (as they will change consistently). The values that need changing are within the "sharesTraded" object. Below is my JSON file:
{
"lastDate": {
"30": "04/04/2022",
"31": "04/04/2022",
"40": "02/03/2022",
"45": "02/01/2022"
},
"transactionType": {
"30": "Automatic Sell",
"31": "Automatic Sell",
"40": "Automatic Sell",
"45": "Sell"
},
"sharesTraded": {
"30": "29,198",
"31": "105,901",
"40": "25,000",
"45": "1,986"
}
}
And here is my current code:
import json
data = json.load(open("AAPL22_trades.json"))
dataa = data['sharesTraded']
dataa1 = dataa.values()
data1 = [s.replace(',', '') for s in dataa1]
data1 = [int(i) for i in data1]
open("AAPL22_trades.json", "w").write(
json.dumps(data1, indent=4))
However, I need the integer values to replace the string values. Instead, my code just replaces the entire JSON with the integers. I imagine there is something extra at the end that I'm missing because the strings have been changed to integers but applying it to the JSON is missing.
A:
You have a couple of problems. First, since you only convert the values to a list, you lose the information about which key is associated with the values. Second, you write that list back to the file, losing all of the other data too.
You could create a new dictionary with the modified values and assign that back to the original.
import json
data = json.load(open("AAPL22_trades.json"))
data['sharesTraded'] = {k:int(v.replace(',', ''))
for k,v in data['sharesTraded'].items()}
json.dump(data, open("AAPL22_trades.json", "w"), indent=4)
A:
You could do the following:
shares_traded = data['sharesTraded']
for key, value in shares_traded.items():
shares_traded[key] = int(value.replace(',', ''))
In this way, only the values will be changed, and the key will stay as-is.
| Turn values from string to integers in JSON file python | I'm trying to change values in a JSON file from strings to integers, my issue is that the keys are row numbers so I can't call by key name (as they will change consistently). The values that need changing are within the "sharesTraded" object. Below is my JSON file:
{
"lastDate": {
"30": "04/04/2022",
"31": "04/04/2022",
"40": "02/03/2022",
"45": "02/01/2022"
},
"transactionType": {
"30": "Automatic Sell",
"31": "Automatic Sell",
"40": "Automatic Sell",
"45": "Sell"
},
"sharesTraded": {
"30": "29,198",
"31": "105,901",
"40": "25,000",
"45": "1,986"
}
}
And here is my current code:
import json
data = json.load(open("AAPL22_trades.json"))
dataa = data['sharesTraded']
dataa1 = dataa.values()
data1 = [s.replace(',', '') for s in dataa1]
data1 = [int(i) for i in data1]
open("AAPL22_trades.json", "w").write(
json.dumps(data1, indent=4))
However, I need the integer values to replace the string values. Instead, my code just replaces the entire JSON with the integers. I imagine there is something extra at the end that I'm missing because the strings have been changed to integers but applying it to the JSON is missing.
| [
"You have a couple of problems. First, since you only convert the values to a list, you loose the information about which key is associated with the values. Second, you write that list back to the file, loosing all of the other data too.\nYou could create a new dictionary with the modified values and assign that back to the original.\nimport json\n\ndata = json.load(open(\"AAPL22_trades.json\"))\ndata['sharesTraded'] = {k:int(v.replace(',', '')) \n for k,v in data['sharesTraded'].items()}\njson.dump(data, open(\"AAPL22_trades.json\", \"w\"), indent=4)\n\n",
"You could do the following:\nshares_traded = data['sharesTraded']\n\nfor key, value in shares_traded.items():\n shares_traded[key] = int(value.replace(',', ''))\n\nIn this way, only the values will be changed, and the key will stay as-is.\n"
] | [
2,
1
] | [] | [] | [
"integer",
"json",
"python",
"string"
] | stackoverflow_0074678918_integer_json_python_string.txt |
Q:
Converting TIFF images to NumPy format
I would need help to create a code that would convert my tiff images to .npy format so I can save it as .npy file.
I haven't found a good solution anywhere on this platform. Thank you in advance!
A:
Here is a code snippet that you can use to convert your TIFF images to .npy format in Python:
import numpy as np
from PIL import Image
# Load the TIFF image
im = Image.open('my_image.tiff')
# Convert the image to a numpy array
im_array = np.array(im)
# Save the array to a .npy file
np.save('my_image.npy', im_array)
You can then load the .npy file using the np.load function and use the array for your further operations.
# Load the .npy file
im_array = np.load('my_image.npy')
#Use the array for your operations
...
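Since the question mentions images in the plural, here is a small sketch for converting every TIFF in a folder; the folder path and the .tiff extension are just assumptions:
import glob
import os

import numpy as np
from PIL import Image

# convert every .tiff file in the folder into a matching .npy file
for tiff_path in glob.glob(os.path.join("my_tiff_folder", "*.tiff")):
    array = np.array(Image.open(tiff_path))
    np.save(os.path.splitext(tiff_path)[0] + ".npy", array)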
I hope this helps! Let me know if you have any further questions.
| Converting TIFF images to NumPy format | I would need help to create a code that would convert my tiff images to .npy format so I can save it as .npy file.
I haven't found a good solution anywhere on this platform. Thank you in advance!
| [
"Here is a code snippet that you can use to convert your TIFF images to .npy format in Python:\nimport numpy as np\nfrom PIL import Image\n\n# Load the TIFF image\nim = Image.open('my_image.tiff')\n\n# Convert the image to a numpy array\nim_array = np.array(im)\n\n# Save the array to a .npy file\nnp.save('my_image.npy', im_array)\n\nYou can then load the .npy file using the np.load function and use the array for your further operations.\n# Load the .npy file\nim_array = np.load('my_image.npy')\n#Use the array for your operations\n\n...\nI hope this helps! Let me know if you have any further questions.\n"
] | [
0
] | [] | [] | [
"numpy",
"python",
"tiff"
] | stackoverflow_0074678997_numpy_python_tiff.txt |
Q:
Is there a function that makes python read only some parts of a string and execute an operation
I'm trying to make my Python read only the first 3 digits of a string and print an answer based on the first three digits of the string.
I tried:
if str[1,2,3] = 080:
print(...)
elif str[123] =090:
print(,,,)
A:
Here is how you can achieve this in Python:
# Store the string in a variable
string = "Hello world"
# Get the first three characters of the string
first_three_chars = string[:3]
# Check if the first three characters are "080"
if first_three_chars == "080":
print("The first three characters are 080")
elif first_three_chars == "090":
print("The first three characters are 090")
else:
print("The first three characters are not 080 or 090")
In this code, we use the string[:3] syntax to get the first three characters of the string. Then, we use if and elif statements to check if the first three characters are "080" or "090".
A:
use a slice, like:
str[0:3] or str[:3]
This takes the characters from index 0 up to, but not including, index 3, i.e. the first three characters.
| Is there a function that makes python read only some parts of a string and execute an operation | I'm trying to make my Python read only the first 3 digits of a string and print an answer based on the first three digits of the string.
I tried:
if str[1,2,3] = 080:
print(...)
elif str[123] =090:
print(,,,)
| [
"Here is how you can achieve this in Python:\n# Store the string in a variable\nstring = \"Hello world\"\n\n# Get the first three characters of the string\nfirst_three_chars = string[:3]\n\n# Check if the first three characters are \"080\"\nif first_three_chars == \"080\":\n print(\"The first three characters are 080\")\nelif first_three_chars == \"090\":\n print(\"The first three characters are 090\")\nelse:\n print(\"The first three characters are not 080 or 090\")\n\nIn this code, we use the string[:3] syntax to get the first three characters of the string. Then, we use if and elif statements to check if the first three characters are \"080\" or \"090\".\n",
"use the index between the subscript like:\nstr[0:3] or str [:3]\n\nthis will take the cursor from 0th index to 2nd index\n"
] | [
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0074679053_python.txt |
Q:
Unable to complete operation on element with key none
I'm practicing with an algorithm that generates a random number that the user needs, then keeps trying until it hits. But PySimpleGUI produces an error saying: Unable to complete operation on element with key None.
import random
import PySimpleGUI as sg

class ChuteONumero:
    def __init__(self):
        self.valor_aleatorio = 0
        self.valor_minimo = 1
        self.valor_maximo = 100
        self.tentar_novamente = True

    def Iniciar(self):
        # Layout
        layout = [
            [sg.Text('Seu chute', size=(39, 0))],
            [sg.Input(size=(18, 0), key='ValorChute')],
            [sg.Button('Chutar!')],
            [sg.Output(size=(39, 10))]
        ]
        # Create a window
        self.janela = sg.Window('Chute o numero!', Layout=layout)
        self.GerarNumeroAleatorio()
        try:
            while True:
                # Receive the values
                self.evento, self.valores = self.janela.Read()
                # Do something with the values
                if self.evento == 'Chutar!':
                    self.valor_do_chute = self.valores['ValorChute']
                    while self.tentar_novamente == True:
                        if int(self.valor_do_chute) > self.valor_aleatorio:
                            print('Chute um valor mais baixo')
                            break
                        elif int(self.valor_do_chute) < self.valor_aleatorio:
                            print('Chute um valor mais alto!')
                            break
                        if int(self.valor_do_chute) == self.valor_aleatorio:
                            self.tentar_novamente = False
                            print('Parabéns, você acertou!')
                            break
        except:
            print('Não foi compreendido, apenas digite numeros de 1 a 100')
            self.Iniciar()

    def GerarNumeroAleatorio(self):
        self.valor_aleatorio = random.randint(
            self.valor_minimo, self.valor_maximo)

chute = ChuteONumero()
chute.Iniciar()
I expected a layout to open, but it does not open.
A:
Revised your code ...
import random
import PySimpleGUI as sg
class ChuteONumero:
def __init__(self):
self.valor_aleatorio = 0
self.valor_minimo = 1
self.valor_maximo = 100
self.tentar_novamente = True
def Iniciar(self):
# Layout
layout = [
[sg.Text('Your kick', size=(39, 0))],
[sg.Input(size=(18, 0), key='ValorChute')],
[sg.Button('Kick!')],
[sg.Output(size=(39, 10))]
]
# Create a window
self.janela = sg.Window('Guess The Number!', layout)
self.GerarNumeroAleatorio()
while True:
# Receive amounts
evento, valores = self.janela.read()
if evento == sg.WIN_CLOSED:
break
# Do something with the values
elif evento == 'Kick!':
try:
valor_do_chute = int(valores['ValorChute'])
except ValueError:
print('Not understood, just type numbers from 1 to 100')
continue
if valor_do_chute > self.valor_aleatorio:
print('Guess a lower value')
elif valor_do_chute < self.valor_aleatorio:
print('Kick a higher value!')
if valor_do_chute == self.valor_aleatorio:
sg.popup_ok('Congratulations, you got it right!')
break
self.janela.close()
def GerarNumeroAleatorio(self):
self.valor_aleatorio = random.randint(self.valor_minimo, self.valor_maximo)
chute = ChuteONumero()
chute.Iniciar()
| Unable to complete operation on element with key none | I'm practicing with an algorithm that generates a random number that the user needs, then keeps trying until it hits. But PySimpleGUI produces an error saying: Unable to complete operation on element with key None.
import random
import PySimpleGUI as sg

class ChuteONumero:
    def __init__(self):
        self.valor_aleatorio = 0
        self.valor_minimo = 1
        self.valor_maximo = 100
        self.tentar_novamente = True

    def Iniciar(self):
        # Layout
        layout = [
            [sg.Text('Seu chute', size=(39, 0))],
            [sg.Input(size=(18, 0), key='ValorChute')],
            [sg.Button('Chutar!')],
            [sg.Output(size=(39, 10))]
        ]
        # Create a window
        self.janela = sg.Window('Chute o numero!', Layout=layout)
        self.GerarNumeroAleatorio()
        try:
            while True:
                # Receive the values
                self.evento, self.valores = self.janela.Read()
                # Do something with the values
                if self.evento == 'Chutar!':
                    self.valor_do_chute = self.valores['ValorChute']
                    while self.tentar_novamente == True:
                        if int(self.valor_do_chute) > self.valor_aleatorio:
                            print('Chute um valor mais baixo')
                            break
                        elif int(self.valor_do_chute) < self.valor_aleatorio:
                            print('Chute um valor mais alto!')
                            break
                        if int(self.valor_do_chute) == self.valor_aleatorio:
                            self.tentar_novamente = False
                            print('Parabéns, você acertou!')
                            break
        except:
            print('Não foi compreendido, apenas digite numeros de 1 a 100')
            self.Iniciar()

    def GerarNumeroAleatorio(self):
        self.valor_aleatorio = random.randint(
            self.valor_minimo, self.valor_maximo)

chute = ChuteONumero()
chute.Iniciar()
I expected a layout to open, but it does not open.
| [
"Revised your code ...\nimport random\nimport PySimpleGUI as sg\n\nclass ChuteONumero:\n\n def __init__(self):\n self.valor_aleatorio = 0\n self.valor_minimo = 1\n self.valor_maximo = 100\n self.tentar_novamente = True\n\n def Iniciar(self):\n # Layout\n layout = [\n [sg.Text('Your kick', size=(39, 0))],\n [sg.Input(size=(18, 0), key='ValorChute')],\n [sg.Button('Kick!')],\n [sg.Output(size=(39, 10))]\n ]\n # Create a window\n self.janela = sg.Window('Guess The Number!', layout)\n self.GerarNumeroAleatorio()\n while True:\n # Receive amounts\n evento, valores = self.janela.read()\n if evento == sg.WIN_CLOSED:\n break\n # Do something with the values\n elif evento == 'Kick!':\n try:\n valor_do_chute = int(valores['ValorChute'])\n except ValueError:\n print('Not understood, just type numbers from 1 to 100')\n continue\n if valor_do_chute > self.valor_aleatorio:\n print('Guess a lower value')\n elif valor_do_chute < self.valor_aleatorio:\n print('Kick a higher value!')\n if valor_do_chute == self.valor_aleatorio:\n sg.popup_ok('Congratulations, you got it right!')\n break\n self.janela.close()\n\n def GerarNumeroAleatorio(self):\n self.valor_aleatorio = random.randint(self.valor_minimo, self.valor_maximo)\n\nchute = ChuteONumero()\nchute.Iniciar()\n\n"
] | [
0
] | [] | [] | [
"pysimplegui",
"python"
] | stackoverflow_0074678831_pysimplegui_python.txt |
Q:
How do I copy all folder contents from one location to another - Python?
I have been trying to make a python file that will copy contents from one folder to another. I would like it to work on any Windows system that I run it on. It must copy ALL contents, images, videos, etc.
I have tried using this shutil code I found online, however it has not worked and shows the message:* Error occurred while copying file.*
import shutil
# Source path
source = "%USERPROFILE%/Downloads/Pictures"
# Destination path
destination = "%USERPROFILE%/Downloads/Copied_pictures"
# Copy the content of
# source to destination
try:
shutil.copy(source, destination)
print("File copied successfully.")
# If source and destination are same
except shutil.SameFileError:
print("Source and destination represents the same file.")
# If there is any permission issue
except PermissionError:
print("Permission denied.")
# For other errors
except:
print("Error occurred while copying file.")
Please help me resolve this issue, any support is highly appreciated.
A:
To copy all the contents of a folder, you can use the shutil.copytree method instead of shutil.copy. This method will copy all the contents of the source folder, including any sub-folders and files, to the destination folder.
Here is an example of how you can use shutil.copytree to copy the contents of a folder:
import shutil
# Source path
source = "%USERPROFILE%/Downloads/Pictures"
# Destination path
destination = "%USERPROFILE%/Downloads/Copied_pictures"
# Copy the content of
# source to destination
try:
shutil.copytree(source, destination)
print("Files copied successfully.")
# If source and destination are same
except shutil.Error as e:
print("Error: %s" % e)
# If there is any permission issue
except PermissionError:
print("Permission denied.")
# For other errors
except:
print("Error occurred while copying files.")
Note that you need to catch the Error exception instead of the SameFileError exception when using shutil.copytree, as it can raise different types of errors. You can also specify additional options such as whether to ignore certain types of files or to preserve the file permissions when copying the files. Check the documentation of shutil.copytree for more details.
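One more thing that may be behind the error in both snippets: Python does not expand %USERPROFILE% in a plain string, so the source path may simply not exist. A small sketch of one way to handle that on Windows, where os.path.expandvars understands the %VAR% form:
import os
import shutil

# expand %USERPROFILE% into the real user directory before copying
source = os.path.expandvars(r"%USERPROFILE%/Downloads/Pictures")
destination = os.path.expandvars(r"%USERPROFILE%/Downloads/Copied_pictures")

shutil.copytree(source, destination)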
| How do I copy all folder contents from one location to another - Python? | I have been trying to make a python file that will copy contents from one folder to another. I would like it to work on any Windows system that I run it on. It must copy ALL contents, images, videos, etc.
I have tried using this shutil code I found online, however it has not worked and shows the message:* Error occurred while copying file.*
import shutil
# Source path
source = "%USERPROFILE%/Downloads/Pictures"
# Destination path
destination = "%USERPROFILE%/Downloads/Copied_pictures"
# Copy the content of
# source to destination
try:
shutil.copy(source, destination)
print("File copied successfully.")
# If source and destination are same
except shutil.SameFileError:
print("Source and destination represents the same file.")
# If there is any permission issue
except PermissionError:
print("Permission denied.")
# For other errors
except:
print("Error occurred while copying file.")
Please help me resolve this issue, any support is highly appreciated.
| [
"To copy all the contents of a folder, you can use the shutil.copytree method instead of shutil.copy. This method will copy all the contents of the source folder, including any sub-folders and files, to the destination folder.\nHere is an example of how you can use shutil.copytree to copy the contents of a folder:\nimport shutil\n\n# Source path\nsource = \"%USERPROFILE%/Downloads/Pictures\"\n\n# Destination path\ndestination = \"%USERPROFILE%/Downloads/Copied_pictures\"\n\n# Copy the content of\n# source to destination\n\ntry:\n shutil.copytree(source, destination)\n print(\"Files copied successfully.\")\n\n# If source and destination are same\nexcept shutil.Error as e:\n print(\"Error: %s\" % e)\n\n# If there is any permission issue\nexcept PermissionError:\n print(\"Permission denied.\")\n\n# For other errors\nexcept:\n print(\"Error occurred while copying files.\")\n\nNote that you need to catch the Error exception instead of the SameFileError exception when using shutil.copytree, as it can raise different types of errors. You can also specify additional options such as whether to ignore certain types of files or to preserve the file permissions when copying the files. Check the documentation of shutil.copytree for more details.\n"
] | [
0
] | [] | [] | [
"copy",
"copy_paste",
"python",
"shutil",
"windows"
] | stackoverflow_0074679048_copy_copy_paste_python_shutil_windows.txt |
Q:
Last iteration of loop not completely executed
I am currently writing a short script to scrape all outlets from a retailer in my home country. I first scrape all possible postal codes from the website of the postal service, after which I enter these one by one automatically with Selenium in their location finder. After this, I check whether the found locations are already in my result DataFrame and I add the ones that I did not find yet. Here is the code I used:
# Define options and webdriver
options = webdriver.ChromeOptions()
options.add_argument('--ignore-certificate-errors')
options.add_argument('--incognito')
prefs = {"profile.default_content_setting_values.geolocation" :2}
options.add_experimental_option("prefs",prefs)
driver = webdriver.Chrome("path", options=options)
# Get postal codes
driver.get("website post office")
soup = BeautifulSoup(driver.page_source)
postal_codes = [code.string for code in soup.find_all("tag", class_="class")]
# Get retail location
driver.get("retail website url")
option1_button = driver.find_element(By.XPATH,"xpath")
driver.execute_script("arguments[0].click();", option1_button)
option2_button = driver.find_element(By.XPATH,"xpath")
driver.execute_script("arguments[0].click();", option2_button)
outlets = pd.DataFrame(columns = ["Name","Address"])
for i in range(len(postal_codes)):
searchbar = driver.find_element(By.XPATH,"xpath")
searchbar.clear()
searchbar.send_keys(postal_codes[i])
searchbar.send_keys(Keys.RETURN)
soup = BeautifulSoup(driver.page_source)
names = [name.strong.string for name in soup.find_all("div", class_="class")]
addresses = [address.div.string for address in soup.find_all("div", class_="class")]
for j in range(len(addresses)):
if addresses[j] in outlets["Address"].values:
print(addresses[j] + " added already")
else:
outlets = outlets.append({"Name": names[j],"Address": addresses[j]}, ignore_index=True)
I am managing to scrape all of the locations except for the last postal code. The script perfectly manipulates the location finder to open up the retail locations for all postal codes except for the last one in the postal_code list.
For this last postal code in the postal_code list, it opens up the webpage correctly and enters the correct postal code, but does not seem to register the addresses and the names for the outlets. When I open up the addresses list and the names list, they still contain the elements of the postal code before the last one. It seems like the loop is not entirely completing. Can someone tell me what the problem is and how to fix this? Thank you!
A:
import requests
import pandas as pd
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0'
}
# call the site's search API directly instead of driving a browser with Selenium
def main(url):
with requests.Session() as req:
req.headers.update(headers)
params = {
'q': '9000',
'filter': [
'KBC_PALO',
'CBC_PALO',
],
'language': 'nl',
}
r = req.get(url, params=params)
        # the endpoint returns JSON, so the branch list can go straight into a DataFrame
        df = pd.DataFrame(r.json()['branches'])
print(df)
if __name__ == "__main__":
main('https://www.kbc.be/X9Y-P/elasticsearch-service/api/v3/branches/search')
Output:
branchId branchName ... saturdayOH cashCd
0 ORG7441 KBC BANK GENT KOUTER ... N;09.00.00;12.00.00 2
1 ORG7426 KBC BANK GENT DE STERRE ... N;09.00.00;12.00.00 2
2 ORG6304 KBC BANK GENT GRAVENSTEEN ... NaN 3
3 ORG3225 KBC BANK GENT WATERSPORTBAAN ... N;09.00.00;12.00.00 3
4 ORG7446 KBC BANK GENTBRUGGE ... NaN 3
5 ORG7447 KBC BANK WONDELGEM ... NaN 3
6 ORG7434 KBC BANK ZWIJNAARDE ... NaN 3
7 ORG7439 KBC BANK MARIAKERKE ... NaN 1
8 ORG3407 KBC BANK OOSTAKKER ... NaN 3
9 ORG3395 KBC BANK ST.-AMANDSBERG ... N;09.00.00;12.00.00 2
[10 rows x 20 columns]
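If the Selenium approach is kept instead, the symptom described (the last postal code still showing the previous results) is consistent with reading page_source before the new results have loaded, and an explicit wait is the usual remedy. A rough sketch; the result selector is a placeholder because the real class names were redacted in the question, and driver and postal_codes are the objects defined there:
from bs4 import BeautifulSoup
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)

for code in postal_codes:
    searchbar = driver.find_element(By.XPATH, "xpath")
    searchbar.clear()
    searchbar.send_keys(code)
    searchbar.send_keys(Keys.RETURN)
    # wait until at least one result element is present before parsing the page
    wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "div.result")))  # placeholder selector
    soup = BeautifulSoup(driver.page_source)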
| Last iteration of loop not completely executed | I am currently writing a short script to scrape all outlets from a retailer in my home country. I first scrape all possible postal codes from the website of the postal service, after which I enter these one by one automatically with Selenium in their location finder. After this, I check whether the found locations are already in my result DataFrame and I add the ones that I did not find yet. Here is the code I used:
# Define options and webdriver
options = webdriver.ChromeOptions()
options.add_argument('--ignore-certificate-errors')
options.add_argument('--incognito')
prefs = {"profile.default_content_setting_values.geolocation" :2}
options.add_experimental_option("prefs",prefs)
driver = webdriver.Chrome("path", options=options)
# Get postal codes
driver.get("website post office")
soup = BeautifulSoup(driver.page_source)
postal_codes = [code.string for code in soup.find_all("tag", class_="class")]
# Get retail location
driver.get("retail website url")
option1_button = driver.find_element(By.XPATH,"xpath")
driver.execute_script("arguments[0].click();", option1_button)
option2_button = driver.find_element(By.XPATH,"xpath")
driver.execute_script("arguments[0].click();", option2_button)
outlets = pd.DataFrame(columns = ["Name","Address"])
for i in range(len(postal_codes)):
searchbar = driver.find_element(By.XPATH,"xpath")
searchbar.clear()
searchbar.send_keys(postal_codes[i])
searchbar.send_keys(Keys.RETURN)
soup = BeautifulSoup(driver.page_source)
names = [name.strong.string for name in soup.find_all("div", class_="class")]
addresses = [address.div.string for address in soup.find_all("div", class_="class")]
for j in range(len(addresses)):
if addresses[j] in outlets["Address"].values:
print(addresses[j] + " added already")
else:
outlets = outlets.append({"Name": names[j],"Address": addresses[j]}, ignore_index=True)
I am managing to scrape all of the locations except for the last postal code. The script perfectly manipulates the location finder to open up the retail locations for all postal codes except for the last one in the postal_code list.
For this last postal code in the postal_code list, it opens up the webpage correctly and enters the correct postal code, but does not seem to register the addresses and the names for the outlets. When I open up the addresses list and the names list, they still contain the elements of the postal code before the last one. It seems like the loop is not entirely completing. Can someone tell me what the problem is and how to fix this? Thank you!
| [
"import requests\nimport pandas as pd\n\n\nheaders = {\n 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0'\n}\n\n\ndef main(url):\n with requests.Session() as req:\n req.headers.update(headers)\n params = {\n 'q': '9000',\n 'filter': [\n 'KBC_PALO',\n 'CBC_PALO',\n ],\n 'language': 'nl',\n }\n r = req.get(url, params=params)\n df = pd.DataFrame(r.json()['branches'])\n print(df)\n\n\nif __name__ == \"__main__\":\n main('https://www.kbc.be/X9Y-P/elasticsearch-service/api/v3/branches/search')\n\nOutput:\n branchId branchName ... saturdayOH cashCd\n0 ORG7441 KBC BANK GENT KOUTER ... N;09.00.00;12.00.00 2\n1 ORG7426 KBC BANK GENT DE STERRE ... N;09.00.00;12.00.00 2\n2 ORG6304 KBC BANK GENT GRAVENSTEEN ... NaN 3\n3 ORG3225 KBC BANK GENT WATERSPORTBAAN ... N;09.00.00;12.00.00 3\n4 ORG7446 KBC BANK GENTBRUGGE ... NaN 3\n5 ORG7447 KBC BANK WONDELGEM ... NaN 3\n6 ORG7434 KBC BANK ZWIJNAARDE ... NaN 3\n7 ORG7439 KBC BANK MARIAKERKE ... NaN 1\n8 ORG3407 KBC BANK OOSTAKKER ... NaN 3\n9 ORG3395 KBC BANK ST.-AMANDSBERG ... N;09.00.00;12.00.00 2\n\n[10 rows x 20 columns]\n\n"
] | [
0
] | [] | [] | [
"beautifulsoup",
"python",
"selenium",
"selenium_chromedriver",
"web_scraping"
] | stackoverflow_0074678059_beautifulsoup_python_selenium_selenium_chromedriver_web_scraping.txt |
Q:
I'm getting an "Execution Timed Out" error?
I'm trying to improve my algorithm skills. When I run my code, I get an "Execution Timed Out" error.
Pseudocode
[This is written in pseudocode]
if(number is even) number = number / 2
if(number is odd) number = 3*number + 1
My Code
def hotpo(n):
calculator = 0
while n >= 1:
if n % 2 == 0:
n = n / 2
else:
n = 3 * n + 1
calculator = calculator + 1
return calculator
A:
You are dividing the number by 2 if it is even, and multiplying it by 3 and adding 1 if it is odd.
So once the sequence reaches 1 it will keep cycling like this:
2, 1, 4, 2, 1, 4, 2, 1, 4, ...
You just have to change the condition in the while loop to n > 1,
because eventually 2 comes up and gets divided by 2, so n becomes 1; your condition then treats 1 as odd, so it becomes 1*3+1 = 4, then 4/2 = 2, and so on forever.
I hope you understood.
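Putting that change into the function from the question, a minimal corrected sketch (integer division is also used so n stays an int):
def hotpo(n):
    calculator = 0
    while n > 1:           # stop once n reaches 1 to avoid the 2 -> 1 -> 4 -> 2 cycle
        if n % 2 == 0:
            n = n // 2     # integer division keeps n an int
        else:
            n = 3 * n + 1
        calculator = calculator + 1
    return calculator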
| I'm getting an "Execution Timed Out" error? | I'm trying to improve my algorithm skills. When I run my code, I get an "Execution Timed Out" error.
Pseudocode
[This is writen in pseudocode]
if(number is even) number = number / 2
if(number is odd) number = 3*number + 1
My Code
def hotpo(n):
calculator = 0
while n >= 1:
if n % 2 == 0:
n = n / 2
else:
n = 3 * n + 1
calculator = calculator + 1
return calculator
| [
"you are dividing number by 2 if number is even but multiplying it by 3 and adding 1 into it.\nso for any number it will keep doing this\n2,1,4,2,1,4,2,1,4,2,1,4,2,1,4,.....\nyou just have to change condition to n>1 in while loop\nbecause at last 2 will come and it got divided by 2, then n becomes 1 then again it will consider 1 as odd as per your condition then again 1*3+1=4 and again 4/2=2 and so on...\nI hope you understood..\n"
] | [
0
] | [] | [] | [
"algorithm",
"python"
] | stackoverflow_0074678975_algorithm_python.txt |
Q:
Selenium with python : failed to click on the button to switch the language of a website
I am trying to go to the french version of this website : https://ciqual.anses.fr/. I tried to click on the button 'FR' but nothing happens, I am still on the english page.
Here is my code :
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument("--headless")
driver = webdriver.Chrome("chromedriver", options=chrome_options)
driver.get('https://ciqual.anses.fr/')
switch_to_french = driver.find_element("xpath", "//a[@id='fr-switch']")
ActionChains(driver).move_to_element(switch_to_french).click()
#to see what happened :
from IPython.display import Image
png = driver.get_screenshot_as_png()
Image(png, width='500')
#I am still on the english website
Please help !
A:
Try this:
from selenium.webdriver.common.by import By

switch_to_french = driver.find_element(By.XPATH, "//a[@id='fr-switch']")
driver.execute_script("arguments[0].click();", switch_to_french)
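It may also help to know why the original ActionChains attempt did nothing: the chain is built but never executed because .perform() is missing. A minimal sketch of that variant, reusing the driver and element lookup from the question:
from selenium.webdriver.common.action_chains import ActionChains

switch_to_french = driver.find_element("xpath", "//a[@id='fr-switch']")
ActionChains(driver).move_to_element(switch_to_french).click().perform()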
| Selenium with python : failed to click on the button to switch the language of a website | I am trying to go to the french version of this website : https://ciqual.anses.fr/. I tried to click on the button 'FR' but nothing happens, I am still on the english page.
Here is my code :
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument("--headless")
driver = webdriver.Chrome("chromedriver", options=chrome_options)
driver.get('https://ciqual.anses.fr/')
switch_to_french = driver.find_element("xpath", "//a[@id='fr-switch']")
ActionChains(driver).move_to_element(switch_to_french).click()
#to see what happened :
from IPython.display import Image
png = driver.get_screenshot_as_png()
Image(png, width='500')
#I am still on the english website
Please help !
| [
"Try this:\nswitch_to_french = driver.find_element(By.XPATH, \"//a[@id='fr-switch']\")\ndriver.execute_script(\"arguments[0].click();\", switch_to_french)\n\n"
] | [
0
] | [] | [] | [
"click",
"python",
"selenium"
] | stackoverflow_0074678963_click_python_selenium.txt |
Q:
Convert all images in a pandas dataframe column to grayscale
I have a column of a pandas dataframe with 25 thousand images, and I want to convert the color of all of them to grayscale.
What would be the simplest way to do this?
I know how to convert the color, which I must use a loop and do the conversion with numpy or opencv, but I don't know how to do this loop with a column of the dataframe.
A:
One way to convert the color of images in a pandas dataframe is to use the apply method on the column containing the image data. This method allows you to apply a custom function to each element of the column.
For example, if your dataframe has a column called 'images' containing the image data, you could convert the color of the images to grayscale using the following code:
def grayscale(image):
return cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
df['images'] = df['images'].apply(grayscale)
A:
One way to loop through the images in the column and convert them to grayscale would be to use the apply method of the pandas dataframe. Here is an example:
import numpy as np
import cv2
# Convert an image to grayscale
def to_grayscale(image):
return cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Loop through the images in the column and convert them to grayscale
df['grayscale_images'] = df['images'].apply(to_grayscale)
This code will apply the to_grayscale function to each image in the images column of the dataframe, and store the resulting grayscale images in a new column called grayscale_images.
Alternatively, you could also use a for loop to iterate through the rows of the dataframe and convert the images in the images column to grayscale. Here is an example:
import numpy as np
import cv2
# Create a new column for the grayscale images
df['grayscale_images'] = np.nan
# Loop through the rows of the dataframe
for i, row in df.iterrows():
# Convert the image to grayscale
grayscale_image = cv2.cvtColor(row['images'], cv2.COLOR_BGR2GRAY)
# Store the grayscale image in the new column
df.at[i, 'grayscale_images'] = grayscale_image
Both of these approaches will loop through the images in the images column and convert them to grayscale.
| Convert all images in a pandas dataframe column to grayscale | I have a column of a pandas dataframe with 25 thousand images, and I want to convert the color of all of them to grayscale.
What would be the simplest way to do this?
I know how to convert the color, which I must use a loop and do the conversion with numpy or opencv, but I don't know how to do this loop with a column of the dataframe.
| [
"One way to convert the color of images in a pandas dataframe is to use the apply method on the column containing the image data. This method allows you to apply a custom function to each element of the column.\nFor example, if your dataframe has a column called 'images' containing the image data, you could convert the color of the images to grayscale using the following code:\ndef grayscale(image):\n return cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n\ndf['images'] = df['images'].apply(grayscale)\n\n",
"One way to loop through the images in the column and convert them to grayscale would be to use the apply method of the pandas dataframe. Here is an example:\nimport numpy as np\nimport cv2\n\n# Convert an image to grayscale\ndef to_grayscale(image):\n return cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n\n# Loop through the images in the column and convert them to grayscale\ndf['grayscale_images'] = df['images'].apply(to_grayscale)\n\nThis code will apply the to_grayscale function to each image in the images column of the dataframe, and store the resulting grayscale images in a new column called grayscale_images.\nAlternatively, you could also use a for loop to iterate through the rows of the dataframe and convert the images in the images column to grayscale. Here is an example:\nimport numpy as np\nimport cv2\n\n# Create a new column for the grayscale images\ndf['grayscale_images'] = np.nan\n\n# Loop through the rows of the dataframe\nfor i, row in df.iterrows():\n # Convert the image to grayscale\n grayscale_image = cv2.cvtColor(row['images'], cv2.COLOR_BGR2GRAY)\n # Store the grayscale image in the new column\n df.at[i, 'grayscale_images'] = grayscale_image\n\nBoth of these approaches will loop through the images in the images column and convert them to grayscale.\n"
] | [
1,
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074679121_pandas_python.txt |
Q:
parsing email threads in python
tl;dr questions:
how to parse MIME content into threads (thus lists of individual replies & forwards)
any libraries that do that?
Does Mime-Version: 1.0 standardize the way threads are represented?
I'm analyzing enron dataset (https://www.cs.cmu.edu/~./enron/, you can also browse the documents here: http://www.enron-mail.com/email/) This dataset is a collection of ~500K emails. Emails are represented as Mime-Version: 1.0 files, there are no attachments.
This is a typical file:
Message-ID: <4250772.1075857358369.JavaMail.evans@thyme>^M
Date: Tue, 12 Dec 2000 09:19:00 -0800 (PST)^M
From: david.portz@enron.com^M
To: clint.dean@enron.com^M
Subject: City of Bryan Dec parking transactions^M
Cc: doug.gilbert-smith@enron.com, elizabeth.sager@enron.com, ^M
melissa.murphy@enron.com^M
Mime-Version: 1.0^M
Content-Type: text/plain; charset=us-ascii^M
Content-Transfer-Encoding: 7bit^M
Bcc: doug.gilbert-smith@enron.com, elizabeth.sager@enron.com, ^M
melissa.murphy@enron.com^M
X-From: David Portz^M
X-To: Clint Dean^M
X-cc: Doug Gilbert-Smith, Elizabeth Sager, Melissa Ann Murphy^M
X-bcc: ^M
X-Folder: \Clint_Dean_Dec2000\Notes Folders\Notes inbox^M
X-Origin: Dean-C^M
X-FileName: cdean.nsf^M
^M
Following discussions with you and Doug, attached is a draft parking
transaction agreement for your review and, if acceptable, for circualtion to
the counterparty. Please call me with any questions. --David
There is a handy, widely adopted python library that makes life easier in parsing those kind of files:
import email
import email.policy
parsed_email = email.message_from_string(open(filename, 'r').read(), policy=email.policy.default)
body = parsed_email.get_payload()
from_field = parsed_email['From']
...
However, I didn't find a reliable way to further parse email content to threads: sub_email_1 -> sub_email_2 -> ... > sub_email_n, etc. get_payload returns everything, all together.
Here is an example of MIME with threads: https://justpaste.it/bf5zr (the file is 233 lines, so pasted separately).
There is clearly a thread:
Christi L Nicolay sent email on 04/30/2001 02:20 PM
later Christi L Nicolay replied to its own email on 05/03/2001 09:23 PM
Lloyd Will replied to that thread on 05/03/2001 09:26 PM
Christi L Nicolay replied on 05/07/2001 11:47 AM
Tom May forwarded the whole thread on Mon, 7 May 2001 06:58:00 -0700
Any library / existing solution that could do that?
Looking at glance into the data, I got impression that there are numerous tiny variants how those threads are organized. Sometimes there are nested > > fields accompanying sub-emails, sometimes there is ---Original Message--- message, etc. It seems way less defined than MIME header fields.
I can write some regex-backed python script that parses one email or another, but it will not work universally for the whole Enron dataset.
Some more examples of threads from the Enron dataset:
http://www.enron-mail.com/email/mann-k/discussion_threads/FW_Salmon_Energy_Turbine_Agreement_5.html
http://www.enron-mail.com/email/brawner-s/discussion_threads/Fw_Fw_TIGHT_SKIRTS_AND_TEXANS_2.html
http://www.enron-mail.com/email/brawner-s/_sent_mail/Fw_Time_Friends_3.html
That led me to question #3: whether the mime format standardizes threads at all.
A:
Here is a code sample, hope it will be useful:
import email

email_message: email.message.Message = email.message_from_bytes(raw_email_body)
# or, as in your example:
# email_message = email.message_from_string(open(filename, 'r').read(), policy=email.policy.default)

# walk() iterates over the message and every MIME part it contains
message_parts = list(email_message.walk())
for part in message_parts:
    ...  # write some logic here
A:
The MIME (Multipurpose Internet Mail Extensions) standard does not specify how threads should be represented in emails. MIME is a format for encoding various types of data in email messages, such as text, images, and attachments, but does not define the structure or organization of the message itself.
Therefore, parsing threads from a MIME-formatted email would require custom logic and may not be straightforward due to the various ways in which threads can be represented in email messages. Some common approaches to parsing threads from emails include using regular expressions to identify common patterns in the email content, or using natural language processing techniques to analyze the content and identify relationships between messages.
It's worth noting that some email clients and services, such as Gmail, may add their own custom headers to emails to indicate threading information. In these cases, it may be possible to parse thread information from the email headers rather than the content itself. However, this would depend on the specific headers used by the email client or service in question.
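As a rough illustration of the regex-based approach mentioned above: the delimiters vary a lot across the Enron corpus, so a pattern like the one below only catches the common '-----Original Message-----' and 'Forwarded by' styles and would need extending for the quoted '>' variants.
import email
import email.policy
import re

# deliberately incomplete pattern list; tune it against the corpus
SPLIT_PATTERN = re.compile(
    r"-{2,}\s*Original Message\s*-{2,}"
    r"|-{2,}\s*Forwarded by .*?-{2,}",
    re.IGNORECASE,
)

def split_thread(filename):
    parsed = email.message_from_string(open(filename, 'r').read(),
                                       policy=email.policy.default)
    body = parsed.get_payload()  # plain-text body; the dataset has no attachments
    # keep the non-empty chunks between delimiters, newest message first
    return [chunk.strip() for chunk in SPLIT_PATTERN.split(body) if chunk.strip()]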
| parsing email threads in python | tl;dr questions:
how to parse MIME content into threads (thus lists of individual replies & forwards)
any libraries that do that?
Does Mime-Version: 1.0 standardize the way threads are represented?
I'm analyzing enron dataset (https://www.cs.cmu.edu/~./enron/, you can also browse the documents here: http://www.enron-mail.com/email/) This dataset is a collection of ~500K emails. Emails are represented as Mime-Version: 1.0 files, there are no attachments.
This is a typical file:
Message-ID: <4250772.1075857358369.JavaMail.evans@thyme>^M
Date: Tue, 12 Dec 2000 09:19:00 -0800 (PST)^M
From: david.portz@enron.com^M
To: clint.dean@enron.com^M
Subject: City of Bryan Dec parking transactions^M
Cc: doug.gilbert-smith@enron.com, elizabeth.sager@enron.com, ^M
melissa.murphy@enron.com^M
Mime-Version: 1.0^M
Content-Type: text/plain; charset=us-ascii^M
Content-Transfer-Encoding: 7bit^M
Bcc: doug.gilbert-smith@enron.com, elizabeth.sager@enron.com, ^M
melissa.murphy@enron.com^M
X-From: David Portz^M
X-To: Clint Dean^M
X-cc: Doug Gilbert-Smith, Elizabeth Sager, Melissa Ann Murphy^M
X-bcc: ^M
X-Folder: \Clint_Dean_Dec2000\Notes Folders\Notes inbox^M
X-Origin: Dean-C^M
X-FileName: cdean.nsf^M
^M
Following discussions with you and Doug, attached is a draft parking
transaction agreement for your review and, if acceptable, for circualtion to
the counterparty. Please call me with any questions. --David
There is a handy, widely adopted python library that makes life easier in parsing those kind of files:
import email
import email.policy
parsed_email = email.message_from_string(open(filename, 'r').read(), policy=email.policy.default)
body = parsed_email.get_payload()
from_field = parsed_email['From']
...
However, I didn't find a reliable way to further parse email content into threads: sub_email_1 -> sub_email_2 -> ... -> sub_email_n, etc. get_payload returns everything, all together.
Here is an example of MIME with threads: https://justpaste.it/bf5zr (the file is 233 lines, so pasted separately).
There is clearly a thread:
Christi L Nicolay sent email on 04/30/2001 02:20 PM
later Christi L Nicolay replied to its own email on 05/03/2001 09:23 PM
Lloyd Will replied to that thread on 05/03/2001 09:26 PM
Christi L Nicolay replied on 05/07/2001 11:47 AM
Tom May forwarded the whole thread on Mon, 7 May 2001 06:58:00 -0700
Any library / existing solution that could do that?
Looking at the data, I got the impression that there are numerous tiny variations in how those threads are organized. Sometimes there are nested > > quote markers accompanying sub-emails, sometimes there is an ---Original Message--- marker, etc. It seems far less well-defined than the MIME header fields.
I can write some regex-backed python script that parses one email or another, but it will not work universally for the whole Enron dataset.
Some more examples of threads from the Enron dataset:
http://www.enron-mail.com/email/mann-k/discussion_threads/FW_Salmon_Energy_Turbine_Agreement_5.html
http://www.enron-mail.com/email/brawner-s/discussion_threads/Fw_Fw_TIGHT_SKIRTS_AND_TEXANS_2.html
http://www.enron-mail.com/email/brawner-s/_sent_mail/Fw_Time_Friends_3.html
That led me to question #3: whether the MIME format standardizes threads at all.
| [
"Here are a code sample hope it will be usefull\nimport email\n\nemail_message: email.message.Message = email.message_from_bytes(raw_email_body)\n# or as in your example\n# email.message_from_string(open(filename, 'r').read(), policy=email.policy.default)\n\nmessage_parts = list(message.walk())\nfor part in message_parts:\n ... # write some logic here\n\n",
"The MIME (Multipurpose Internet Mail Extensions) standard does not specify how threads should be represented in emails. MIME is a format for encoding various types of data in email messages, such as text, images, and attachments, but does not define the structure or organization of the message itself.\nTherefore, parsing threads from a MIME-formatted email would require custom logic and may not be straightforward due to the various ways in which threads can be represented in email messages. Some common approaches to parsing threads from emails include using regular expressions to identify common patterns in the email content, or using natural language processing techniques to analyze the content and identify relationships between messages.\nIt's worth noting that some email clients and services, such as Gmail, may add their own custom headers to emails to indicate threading information. In these cases, it may be possible to parse thread information from the email headers rather than the content itself. However, this would depend on the specific headers used by the email client or service in question.\n"
] | [
0,
0
] | [] | [] | [
"email",
"email_parsing",
"mime",
"parsing",
"python"
] | stackoverflow_0074568352_email_email_parsing_mime_parsing_python.txt |
Q:
I can't install matplotlib using pip
I am totally new to Python and I wanted to use matplotlib for my school project. I tried to install it using pip (pip install matplotlib), but I got a really long and bad-looking error and I don't know what to do... I tried upgrading pip and setuptools, but it didn't help. I don't understand this issue, because I installed numpy, for example, without any problem. Can anybody help me?
ERROR: Command errored out with exit status 1:
command: 'c:\program files\python38\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\marci_000\\AppData\\Local\\Temp\\pip-install-6ze8b_ec\\matplotlib\\setup.py'"'"'; __file__='"'"'C:\\Users\\marci_000\\AppData\\Local\\Temp\\pip-install-6ze8b_ec\\matplotlib\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\marci_000\AppData\Local\Temp\pip-install-6ze8b_ec\matplotlib\pip-egg-info'
cwd: C:\Users\marci_000\AppData\Local\Temp\pip-install-6ze8b_ec\matplotlib\
Complete output (228 lines):
================================================================================
Edit setup.cfg to change the build options
BUILDING MATPLOTLIB
matplotlib: yes [3.1.1]
python: yes [3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:37:50) [MSC
v.1916 64 bit (AMD64)]]
platform: yes [win32]
OPTIONAL SUBPACKAGES
sample_data: yes [installing]
tests: no [skipping due to configuration]
OPTIONAL BACKEND EXTENSIONS
agg: yes [installing]
tkagg: yes [installing; run-time loading from Python Tcl/Tk]
macosx: no [Mac OS-X only]
OPTIONAL PACKAGE DATA
dlls: no [skipping due to configuration]
Could not locate executable g77
Could not locate executable f77
Could not locate executable ifort
Could not locate executable ifl
Could not locate executable f90
Could not locate executable DF
Could not locate executable efl
Could not locate executable gfortran
Could not locate executable f95
Could not locate executable g95
Could not locate executable efort
Could not locate executable efc
Could not locate executable flang
don't know how to compile Fortran code on platform 'nt'
'svnversion' is not recognized as an internal or external command,
operable program or batch file.
non-existing path in 'numpy\\distutils': 'site.cfg'
Running from numpy source directory.
C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\setup.py:418: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates
run_build = parse_setuppy_commands()
C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\system_info.py:690: UserWarning:
Optimized (vendor) Blas libraries are not found.
Falls back to netlib Blas library which has worse performance.
A better performance should be easily gained by switching
Blas library.
self.calc_info()
C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\system_info.py:690: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
self.calc_info()
C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\system_info.py:690: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
self.calc_info()
C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\system_info.py:1712: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
if getattr(self, '_calc_info_{}'.format(lapack))():
C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\system_info.py:1712: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
if getattr(self, '_calc_info_{}'.format(lapack))():
c:\program files\python38\lib\distutils\dist.py:274: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
Traceback (most recent call last):
File "c:\program files\python38\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "c:\program files\python38\lib\distutils\dist.py", line 966, in run_commands
self.run_command(cmd)
File "c:\program files\python38\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "c:\program files\python38\lib\site-packages\setuptools\command\bdist_egg.py", line 163, in run
self.run_command("egg_info")
File "c:\program files\python38\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "c:\program files\python38\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\command\egg_info.py", line 26, in run
File "c:\program files\python38\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "c:\program files\python38\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\command\build_src.py", line 142, in run
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\command\build_src.py", line 150, in build_sources
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\command\build_src.py", line 267, in build_py_modules_sources
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\misc_util.py", line 2270, in generate_config_py
File "c:\program files\python38\lib\distutils\dir_util.py", line 70, in mkpath
os.mkdir(head, mode)
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 310, in wrap
path = self._remap_input(name, path, *args, **kw)
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 452, in _remap_input
self._violation(operation, os.path.realpath(path), *args, **kw)
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 407, in _violation
raise SandboxViolation(operation, args, kw)
setuptools.sandbox.SandboxViolation: SandboxViolation: mkdir('C:\\Users\\MARCI_~1\\AppData\\Local\\Temp\\easy_install-fqlea6jp\\numpy-1.17.3\\build', 511) {}
The package setup script has attempted to modify files on your system
that are not within the EasyInstall build area, and has been aborted.
This package cannot be safely installed by EasyInstall, and may not
support alternate installation locations even if you run its setup
script by hand. Please inform the package's author and the EasyInstall
maintainers to find out if a fix or workaround is available.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 154, in save_modules
yield saved
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 195, in setup_context
yield
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 250, in run_setup
_execfile(setup_script, ns)
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 45, in _execfile
exec(code, globals, locals)
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\setup.py", line 443, in <module>
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\setup.py", line 435, in setup_package
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\core.py", line 171, in setup
File "c:\program files\python38\lib\site-packages\setuptools\__init__.py", line 145, in setup
return distutils.core.setup(**attrs)
File "c:\program files\python38\lib\distutils\core.py", line 163, in setup
raise SystemExit("error: " + str(msg))
SystemExit: error: SandboxViolation: mkdir('C:\\Users\\MARCI_~1\\AppData\\Local\\Temp\\easy_install-fqlea6jp\\numpy-1.17.3\\build', 511) {}
The package setup script has attempted to modify files on your system
that are not within the EasyInstall build area, and has been aborted.
This package cannot be safely installed by EasyInstall, and may not
support alternate installation locations even if you run its setup
script by hand. Please inform the package's author and the EasyInstall
maintainers to find out if a fix or workaround is available.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 1144, in run_setup
run_setup(setup_script, args)
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 253, in run_setup
raise
File "c:\program files\python38\lib\contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 195, in setup_context
yield
File "c:\program files\python38\lib\contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 166, in save_modules
saved_exc.resume()
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 141, in resume
six.reraise(type, exc, self._tb)
File "c:\program files\python38\lib\site-packages\setuptools\_vendor\six.py", line 685, in reraise
raise value.with_traceback(tb)
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 154, in save_modules
yield saved
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 195, in setup_context
yield
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 250, in run_setup
_execfile(setup_script, ns)
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 45, in _execfile
exec(code, globals, locals)
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\setup.py", line 443, in <module>
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\setup.py", line 435, in setup_package
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\core.py", line 171, in setup
File "c:\program files\python38\lib\site-packages\setuptools\__init__.py", line 145, in setup
return distutils.core.setup(**attrs)
File "c:\program files\python38\lib\distutils\core.py", line 163, in setup
raise SystemExit("error: " + str(msg))
SystemExit: error: SandboxViolation: mkdir('C:\\Users\\MARCI_~1\\AppData\\Local\\Temp\\easy_install-fqlea6jp\\numpy-1.17.3\\build', 511) {}
The package setup script has attempted to modify files on your system
that are not within the EasyInstall build area, and has been aborted.
This package cannot be safely installed by EasyInstall, and may not
support alternate installation locations even if you run its setup
script by hand. Please inform the package's author and the EasyInstall
maintainers to find out if a fix or workaround is available.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\marci_000\AppData\Local\Temp\pip-install-6ze8b_ec\matplotlib\setup.py", line 262, in <module>
setup(
File "c:\program files\python38\lib\site-packages\setuptools\__init__.py", line 144, in setup
_install_setup_requires(attrs)
File "c:\program files\python38\lib\site-packages\setuptools\__init__.py", line 139, in _install_setup_requires
dist.fetch_build_eggs(dist.setup_requires)
File "c:\program files\python38\lib\site-packages\setuptools\dist.py", line 717, in fetch_build_eggs
resolved_dists = pkg_resources.working_set.resolve(
File "c:\program files\python38\lib\site-packages\pkg_resources\__init__.py", line 780, in resolve
dist = best[req.key] = env.best_match(
File "c:\program files\python38\lib\site-packages\pkg_resources\__init__.py", line 1065, in best_match
return self.obtain(req, installer)
File "c:\program files\python38\lib\site-packages\pkg_resources\__init__.py", line 1077, in obtain
return installer(requirement)
File "c:\program files\python38\lib\site-packages\setuptools\dist.py", line 787, in fetch_build_egg
return cmd.easy_install(req)
File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 679, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 705, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 890, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 1158, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 1146, in run_setup
raise DistutilsError("Setup script exited with %s" % (v.args[0],))
distutils.errors.DistutilsError: Setup script exited with error: SandboxViolation: mkdir('C:\\Users\\MARCI_~1\\AppData\\Local\\Temp\\easy_install-fqlea6jp\\numpy-1.17.3\\build', 511) {}
The package setup script has attempted to modify files on your system
that are not within the EasyInstall build area, and has been aborted.
This package cannot be safely installed by EasyInstall, and may not
support alternate installation locations even if you run its setup
script by hand. Please inform the package's author and the EasyInstall
maintainers to find out if a fix or workaround is available.
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
A:
Add your Python installation to the PATH and try running the command in the command prompt.
A:
try to run
python -m pip install -U pip
python -m pip install -U matplotlib
while installing Matplotlib !
| I can't install matplotlib using pip | I am totally new to Python and I wanted to use matplotlib for my school project. I tried to install it using pip (pip install matplotlib), but I got a really long and bad-looking error and I don't know what to do... I tried upgrading pip and setuptools, but it didn't help. I don't understand this issue, because I installed numpy, for example, without any problem. Can anybody help me?
ERROR: Command errored out with exit status 1:
command: 'c:\program files\python38\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\marci_000\\AppData\\Local\\Temp\\pip-install-6ze8b_ec\\matplotlib\\setup.py'"'"'; __file__='"'"'C:\\Users\\marci_000\\AppData\\Local\\Temp\\pip-install-6ze8b_ec\\matplotlib\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\marci_000\AppData\Local\Temp\pip-install-6ze8b_ec\matplotlib\pip-egg-info'
cwd: C:\Users\marci_000\AppData\Local\Temp\pip-install-6ze8b_ec\matplotlib\
Complete output (228 lines):
================================================================================
Edit setup.cfg to change the build options
BUILDING MATPLOTLIB
matplotlib: yes [3.1.1]
python: yes [3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:37:50) [MSC
v.1916 64 bit (AMD64)]]
platform: yes [win32]
OPTIONAL SUBPACKAGES
sample_data: yes [installing]
tests: no [skipping due to configuration]
OPTIONAL BACKEND EXTENSIONS
agg: yes [installing]
tkagg: yes [installing; run-time loading from Python Tcl/Tk]
macosx: no [Mac OS-X only]
OPTIONAL PACKAGE DATA
dlls: no [skipping due to configuration]
Could not locate executable g77
Could not locate executable f77
Could not locate executable ifort
Could not locate executable ifl
Could not locate executable f90
Could not locate executable DF
Could not locate executable efl
Could not locate executable gfortran
Could not locate executable f95
Could not locate executable g95
Could not locate executable efort
Could not locate executable efc
Could not locate executable flang
don't know how to compile Fortran code on platform 'nt'
'svnversion' is not recognized as an internal or external command,
operable program or batch file.
non-existing path in 'numpy\\distutils': 'site.cfg'
Running from numpy source directory.
C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\setup.py:418: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates
run_build = parse_setuppy_commands()
C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\system_info.py:690: UserWarning:
Optimized (vendor) Blas libraries are not found.
Falls back to netlib Blas library which has worse performance.
A better performance should be easily gained by switching
Blas library.
self.calc_info()
C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\system_info.py:690: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
self.calc_info()
C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\system_info.py:690: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
self.calc_info()
C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\system_info.py:1712: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
if getattr(self, '_calc_info_{}'.format(lapack))():
C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\system_info.py:1712: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
if getattr(self, '_calc_info_{}'.format(lapack))():
c:\program files\python38\lib\distutils\dist.py:274: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
Traceback (most recent call last):
File "c:\program files\python38\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "c:\program files\python38\lib\distutils\dist.py", line 966, in run_commands
self.run_command(cmd)
File "c:\program files\python38\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "c:\program files\python38\lib\site-packages\setuptools\command\bdist_egg.py", line 163, in run
self.run_command("egg_info")
File "c:\program files\python38\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "c:\program files\python38\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\command\egg_info.py", line 26, in run
File "c:\program files\python38\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "c:\program files\python38\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\command\build_src.py", line 142, in run
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\command\build_src.py", line 150, in build_sources
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\command\build_src.py", line 267, in build_py_modules_sources
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\misc_util.py", line 2270, in generate_config_py
File "c:\program files\python38\lib\distutils\dir_util.py", line 70, in mkpath
os.mkdir(head, mode)
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 310, in wrap
path = self._remap_input(name, path, *args, **kw)
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 452, in _remap_input
self._violation(operation, os.path.realpath(path), *args, **kw)
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 407, in _violation
raise SandboxViolation(operation, args, kw)
setuptools.sandbox.SandboxViolation: SandboxViolation: mkdir('C:\\Users\\MARCI_~1\\AppData\\Local\\Temp\\easy_install-fqlea6jp\\numpy-1.17.3\\build', 511) {}
The package setup script has attempted to modify files on your system
that are not within the EasyInstall build area, and has been aborted.
This package cannot be safely installed by EasyInstall, and may not
support alternate installation locations even if you run its setup
script by hand. Please inform the package's author and the EasyInstall
maintainers to find out if a fix or workaround is available.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 154, in save_modules
yield saved
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 195, in setup_context
yield
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 250, in run_setup
_execfile(setup_script, ns)
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 45, in _execfile
exec(code, globals, locals)
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\setup.py", line 443, in <module>
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\setup.py", line 435, in setup_package
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\core.py", line 171, in setup
File "c:\program files\python38\lib\site-packages\setuptools\__init__.py", line 145, in setup
return distutils.core.setup(**attrs)
File "c:\program files\python38\lib\distutils\core.py", line 163, in setup
raise SystemExit("error: " + str(msg))
SystemExit: error: SandboxViolation: mkdir('C:\\Users\\MARCI_~1\\AppData\\Local\\Temp\\easy_install-fqlea6jp\\numpy-1.17.3\\build', 511) {}
The package setup script has attempted to modify files on your system
that are not within the EasyInstall build area, and has been aborted.
This package cannot be safely installed by EasyInstall, and may not
support alternate installation locations even if you run its setup
script by hand. Please inform the package's author and the EasyInstall
maintainers to find out if a fix or workaround is available.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 1144, in run_setup
run_setup(setup_script, args)
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 253, in run_setup
raise
File "c:\program files\python38\lib\contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 195, in setup_context
yield
File "c:\program files\python38\lib\contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 166, in save_modules
saved_exc.resume()
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 141, in resume
six.reraise(type, exc, self._tb)
File "c:\program files\python38\lib\site-packages\setuptools\_vendor\six.py", line 685, in reraise
raise value.with_traceback(tb)
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 154, in save_modules
yield saved
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 195, in setup_context
yield
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 250, in run_setup
_execfile(setup_script, ns)
File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 45, in _execfile
exec(code, globals, locals)
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\setup.py", line 443, in <module>
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\setup.py", line 435, in setup_package
File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\core.py", line 171, in setup
File "c:\program files\python38\lib\site-packages\setuptools\__init__.py", line 145, in setup
return distutils.core.setup(**attrs)
File "c:\program files\python38\lib\distutils\core.py", line 163, in setup
raise SystemExit("error: " + str(msg))
SystemExit: error: SandboxViolation: mkdir('C:\\Users\\MARCI_~1\\AppData\\Local\\Temp\\easy_install-fqlea6jp\\numpy-1.17.3\\build', 511) {}
The package setup script has attempted to modify files on your system
that are not within the EasyInstall build area, and has been aborted.
This package cannot be safely installed by EasyInstall, and may not
support alternate installation locations even if you run its setup
script by hand. Please inform the package's author and the EasyInstall
maintainers to find out if a fix or workaround is available.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\marci_000\AppData\Local\Temp\pip-install-6ze8b_ec\matplotlib\setup.py", line 262, in <module>
setup(
File "c:\program files\python38\lib\site-packages\setuptools\__init__.py", line 144, in setup
_install_setup_requires(attrs)
File "c:\program files\python38\lib\site-packages\setuptools\__init__.py", line 139, in _install_setup_requires
dist.fetch_build_eggs(dist.setup_requires)
File "c:\program files\python38\lib\site-packages\setuptools\dist.py", line 717, in fetch_build_eggs
resolved_dists = pkg_resources.working_set.resolve(
File "c:\program files\python38\lib\site-packages\pkg_resources\__init__.py", line 780, in resolve
dist = best[req.key] = env.best_match(
File "c:\program files\python38\lib\site-packages\pkg_resources\__init__.py", line 1065, in best_match
return self.obtain(req, installer)
File "c:\program files\python38\lib\site-packages\pkg_resources\__init__.py", line 1077, in obtain
return installer(requirement)
File "c:\program files\python38\lib\site-packages\setuptools\dist.py", line 787, in fetch_build_egg
return cmd.easy_install(req)
File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 679, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 705, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 890, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 1158, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 1146, in run_setup
raise DistutilsError("Setup script exited with %s" % (v.args[0],))
distutils.errors.DistutilsError: Setup script exited with error: SandboxViolation: mkdir('C:\\Users\\MARCI_~1\\AppData\\Local\\Temp\\easy_install-fqlea6jp\\numpy-1.17.3\\build', 511) {}
The package setup script has attempted to modify files on your system
that are not within the EasyInstall build area, and has been aborted.
This package cannot be safely installed by EasyInstall, and may not
support alternate installation locations even if you run its setup
script by hand. Please inform the package's author and the EasyInstall
maintainers to find out if a fix or workaround is available.
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
| [
"Add your python to path and try running command in command prompt.\n",
"try to run\npython -m pip install -U pip\npython -m pip install -U matplotlib\nwhile installing Matplotlib !\n"
] | [
0,
0
] | [] | [] | [
"matplotlib",
"pip",
"python",
"python_3.x"
] | stackoverflow_0058582126_matplotlib_pip_python_python_3.x.txt |
Q:
How to emit data from python socket.io server to angular socket.io client
I need to get real time data in my angular project from a python server (no chat).
I have the angular side setup, but I don't know how to get the python backend working.
angular:
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs';
import {io} from 'socket.io-client';
@Injectable({
providedIn: 'root'
})
export class WebsocketService {
constructor() {
this.socket = io()
}
socket: any;
readonly uri: string = "ws://localhost:3000"
listen(eventName: string) {
return new Observable((subscriber) => {
this.socket.on(eventName, ((data: unknown) => {
subscriber.next(data)
}))
})
}
emit(eventName: string, data:any) {
this.socket.emit(eventName,data)
}
}
How to setup python to send some random numbers to the angular client? Best would be with fastapi
from fastapi import FastAPI
from fastapi_socketio import SocketManager
import uvicorn
import random
app = FastAPI()
sio = SocketManager(app=app)
....no clue here
async def rndm():
while True:
print(random.randint(1, 100)) <- emit some integers to client instead of printing
await asyncio.sleep(1)
uvicorn.run("fastapitest:app", host='0.0.0.0', port=8000,reload=True)
A:
You can use the send method provided by SocketManager to send messages to the client.
Here is an example of how you can modify your code to send random numbers to the client:
from fastapi import FastAPI
from fastapi_socketio import SocketManager
import uvicorn
import random
import asyncio  # needed for the asyncio.sleep() call below
app = FastAPI()
sio = SocketManager(app=app)
@sio.on('connect')
async def connected(sid):
while True:
random_number = random.randint(1, 100)
print(f"Sending number: {random_number}")
await sio.send(sid, random_number)
await asyncio.sleep(1)
uvicorn.run("fastapitest:app", host='0.0.0.0', port=8000,reload=True)
| How to emit data from python socket.io server to angular socket.io client | I need to get real time data in my angular project from a python server (no chat).
I have the angular side setup, but I don't know how to get the python backend working.
angular:
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs';
import {io} from 'socket.io-client';
@Injectable({
providedIn: 'root'
})
export class WebsocketService {
constructor() {
this.socket = io()
}
socket: any;
readonly uri: string = "ws://localhost:3000"
listen(eventName: string) {
return new Observable((subscriber) => {
this.socket.on(eventName, ((data: unknown) => {
subscriber.next(data)
}))
})
}
emit(eventName: string, data:any) {
this.socket.emit(eventName,data)
}
}
How to setup python to send some random numbers to the angular client? Best would be with fastapi
from fastapi import FastAPI
from fastapi_socketio import SocketManager
import uvicorn
import random
app = FastAPI()
sio = SocketManager(app=app)
....no clue here
async def rndm():
while True:
print(random.randint(1, 100)) <- emit some integers to client instead of printing
await asyncio.sleep(1)
uvicorn.run("fastapitest:app", host='0.0.0.0', port=8000,reload=True)
| [
"You can use the send method provided by SocketManager to send messages to the client.\nHere is an example of how you can modify your code to send random numbers to the client:\nfrom fastapi import FastAPI\nfrom fastapi_socketio import SocketManager\nimport uvicorn\nimport random\n\napp = FastAPI()\nsio = SocketManager(app=app)\n\n@sio.on('connect')\nasync def connected(sid):\n while True:\n random_number = random.randint(1, 100)\n print(f\"Sending number: {random_number}\")\n await sio.send(sid, random_number)\n await asyncio.sleep(1)\n\nuvicorn.run(\"fastapitest:app\", host='0.0.0.0', port=8000,reload=True)\n\n"
] | [
0
] | [] | [] | [
"angular",
"python",
"socket.io"
] | stackoverflow_0074679118_angular_python_socket.io.txt |
Q:
Filter QuerySet from a given list of indexs
I have a list of indexes I want to extract from another queryset.
>>> allLocation = loc.objects.all()
>>> allLocation
<QuerySet [<loc: loc object (1)>, <loc: loc object (2)>, <loc: loc object (3)>, <loc: loc object (4)>, <loc: loc object (5)>]>
>>> UserIndex = [0,3,4]
>>>
>>> allLocation[UserIndex[1]]
<loc: loc object (4)>
>>> filteredUser = ?
I can query a single item, but I want to filter all of the items at those indexes from allLocation, given the index numbers in a list (UserIndex), and store them in filteredUser.
A:
So, using ChatGPT, I found the answer I was looking for:
filteredUser = allLocation.filter(id__in=[allLocation[i].id for i in UserIndex])
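If a QuerySet isn't strictly required, a plain list works too. A minimal sketch (my own illustration, assuming loc and UserIndex are as in the question and every index is a valid position):
allLocation = list(loc.objects.all())               # evaluate the queryset once
filteredUser = [allLocation[i] for i in UserIndex]  # plain list of loc objects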
| Filter QuerySet from a given list of indexs | I have a list of indexes I want to extract from another queryset.
>>> allLocation = loc.objects.all()
>>> allLocation
<QuerySet [<loc: loc object (1)>, <loc: loc object (2)>, <loc: loc object (3)>, <loc: loc object (4)>, <loc: loc object (5)>]>
>>> UserIndex = [0,3,4]
>>>
>>> allLocation[UserIndex[1]]
<loc: loc object (4)>
>>> filteredUser = ?
I can query a single item, but I want to filter all of the items at those indexes from allLocation, given the index numbers in a list (UserIndex), and store them in filteredUser.
| [
"So using chatGPT I found the ans I was looking for\nfilteredUser = allLocation.filter(id__in=[allLocation[i].id for i in UserIndex]) \n"
] | [
0
] | [] | [] | [
"django",
"django_models",
"django_queryset",
"python"
] | stackoverflow_0074348329_django_django_models_django_queryset_python.txt |
Q:
python - Run indented code through keyboard shortcuts in Spyder as in RStudio
I would like to be able to run an indented block of code in python in the same way I do in R. In particular, if in RStudio I have the following indented block of code:
print(seq(from = 1,
to = 10,
by = 1))
I can place the cursor everywhere (at the beginning of the code, in the middle, at the end) except in a new line below and simply press Cmd+Enter (or Ctrl+Enter) and I can run such code.
However, in Spyder 4.2, a similar code like this one:
import pandas as pd
cars = {'Brand': ['Honda', 'Ford','Audi'],
'Price': [20000, 30000, 40000]}
will not run wherever I place the cursor, and I have to select the two lines to create the dataframe and launch the whole selection with Cmd+Enter (I modified the keyboard shortcuts in the preferences of Spyder to run a selection).
Any advice on how to run such code without selecting it first? Thanks!
A:
(Spyder maintainer here) You said
Any advice on how to run such code without selecting it first?
Yes, you need to use cells for that. You can create a cell by inserting a comment that starts with # %%, like this
import pandas as pd
# %%
cars = {'Brand': ['Honda', 'Ford','Audi'],
'Price': [20000, 30000, 40000]}
That will allow you to run the piece of code enclosed by those comments with the keyboard shortcuts Shift + Enter (run current cell and advance to the next one); or Control + Enter (run current cell and stay on it).
If that explanation was not clear enough, you can learn more about cells in our docs.
A:
This would be absolutely fantastic if it worked, but it doesn't. I'm guessing there must be a setting that needs to be changed first that everyone fails to mention
| python - Run indented code through keyboard shortcuts in Spyder as in RStudio | I would like to be able to run an indented block of code in python in the same way I do in R. In particular, if in RStudio I have the following indented block of code:
print(seq(from = 1,
to = 10,
by = 1))
I can place the cursor everywhere (at the beginning of the code, in the middle, at the end) except in a new line below and simply press Cmd+Enter (or Ctrl+Enter) and I can run such code.
However, in Spyder 4.2, a similar code like this one:
import pandas as pd
cars = {'Brand': ['Honda', 'Ford','Audi'],
'Price': [20000, 30000, 40000]}
will not run wherever I place the cursor, and I have to select the two lines to create the dataframe and launch the whole selection with Cmd+Enter (I modified the keyboard shortcuts in the preferences of Spyder to run a selection).
Any advice on how to run such code without selecting it first? Thanks!
| [
"(Spyder maintainer here) You said\n\nAny advice on how to run such code without selecting it first?\n\nYes, you need to use cells for that. You can create a cell by inserting a comment that starts with # %%, like this\nimport pandas as pd\n\n# %%\ncars = {'Brand': ['Honda', 'Ford','Audi'],\n 'Price': [20000, 30000, 40000]}\n\nThat will allow you to run the piece of code enclosed by those comments with the keyboard shortcuts Shift + Enter (run current cell and advance to the next one); or Control + Enter (run current cell and stay on it).\nIf that explanation was not clear enough, you can learn more about cells in our docs.\n",
"This would be absolutely fantastic if it worked, but it doesn't. I'm guessing there must be a setting that needs to be changed first that everyone fails to mention\n"
] | [
1,
0
] | [] | [] | [
"keyboard_shortcuts",
"python",
"r",
"spyder"
] | stackoverflow_0067314850_keyboard_shortcuts_python_r_spyder.txt |
Q:
How to extract exactly the same word with regexp_extract_all in pyspark
I am having some issues in finding the correct regular expression
lets say I have this list of keywords:
keywords = [' b.o.o', ' a.b.a', ' titi']
(please keep in mind that there is a blank space before every keyword, and this list can contain up to 100 keywords, so I can't do it without a function)
and my dataframe df:
enter image description here
I use the following code to extract the matching words; it only partially works because it extracts even words that are not an exact match:
keywords = [' b.o.o', ' a.b.a', ' titi']
pattern = '(' + '|'.join([fr'\\b({k})\\b' for k in keywords]) + ')'
df.withColumn('words', F.expr(f"regexp_extract_all(colB, '{pattern}', 1)"))
the output :
enter image description here
But here is the expected output :
enter image description here
As we can see, it extracts words that are not an exact match; it does not take the dot into account. For example, this code considers awbwa a match, because if we replace each w with a dot it becomes a match. I also tried
pattern = '(' + '|'.join([fr'\\b({k})\\b' for k in [re.escape(x) for x in keywords]]) + ')'
to add a backslash before every dot and before the blank space, but it doesn't work.
Thank you so much for your help (by the way, I looked everywhere on Stack Overflow and didn't find an answer to this).
A:
I think you need to add a backslash before the dot in your regular expression pattern to escape it, so it's treated as a literal dot and not a special character that matches any character.
In your code, you can try using the re.escape() method from the re module to escape all special characters in the keywords list before joining them in the pattern. Here's an example:
import re
keywords = [' b.o.o', ' a.b.a', ' titi']
# Escape special characters in the keywords using re.escape()
escaped_keywords = [re.escape(keyword) for keyword in keywords]
# Join the escaped keywords with '|' as the separator
pattern = '(' + '|'.join(escaped_keywords) + ')'
# Use the pattern in your regexp_extract_all() call
df.withColumn('words', F.expr(f"regexp_extract_all(colB, '{pattern}' ,1)"))
This should give you the expected output where only exact matches are extracted.
A:
You can use the \b word boundary metacharacter to match whole words only, and escape the dots with a backslash \. in your regular expression.
Here is an example:
import pyspark.sql.functions as F
keywords = [' b.o.o', ' a.b.a', ' titi']
# Escape dots and add word boundaries
pattern = '(' + '|'.join([fr'\b({k.replace(".", "\\.")})\b' for k in keywords]) + ')'
df.withColumn('words', F.expr(f"regexp_extract_all(colB, '{pattern}', 1)"))
This will match b.o.o, a.b.a, and titi as whole words, and will not match substrings like awbwa.
A:
I finally figured it out: for some reason re.escape doesn't work here; the solution was to wrap the dots in [] character classes. Thanks for answering!
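A minimal sketch of that workaround (my own illustration of the approach described above, assuming df and colB are as in the question):
import pyspark.sql.functions as F

keywords = [' b.o.o', ' a.b.a', ' titi']

# wrap each dot in a character class so it matches only a literal dot
pattern = '(' + '|'.join([fr'\\b({k.replace(".", "[.]")})\\b' for k in keywords]) + ')'

df.withColumn('words', F.expr(f"regexp_extract_all(colB, '{pattern}', 1)"))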
| How to extract exactly the same word with regexp_extract_all in pyspark | I am having some issues in finding the correct regular expression
lets say I have this list of keywords:
keywords = [' b.o.o', ' a.b.a', ' titi']
(please keep in mind that there is a blank space before every keyword, and this list can contain up to 100 keywords, so I can't do it without a function)
and my dataframe df:
enter image description here
I use the following code to extract the matching words; it only partially works because it extracts even words that are not an exact match:
keywords = [' b.o.o', ' a.b.a', ' titi']
pattern = '(' + '|'.join([fr'\\b({k})\\b' for k in keywords]) + ')'
df.withColumn('words', F.expr(f"regexp_extract_all(colB, '{pattern}', 1)"))
the output :
enter image description here
But here is the expected output :
enter image description here
As we can see, it extracts words that are not an exact match; it does not take the dot into account. For example, this code considers awbwa a match, because if we replace each w with a dot it becomes a match. I also tried
pattern = '(' + '|'.join([fr'\\b({k})\\b' for k in [re.escape(x) for x in keywords]]) + ')'
to add a backslash before every dot and before the blank space, but it doesn't work.
Thank you so much for your help (by the way, I looked everywhere on Stack Overflow and didn't find an answer to this).
| [
"I think you need to add a backslash before the dot in your regular expression pattern to escape it, so it's treated as a literal dot and not a special character that matches any character.\nIn your code, you can try using the re.escape() method from the re module to escape all special characters in the keywords list before joining them in the pattern. Here's an example:\nimport re\n\nkeywords = [' b.o.o', ' a.b.a', ' titi']\n\n# Escape special characters in the keywords using re.escape()\nescaped_keywords = [re.escape(keyword) for keyword in keywords]\n\n# Join the escaped keywords with '|' as the separator\npattern = '(' + '|'.join(escaped_keywords) + ')'\n\n# Use the pattern in your regexp_extract_all() call\ndf.withColumn('words', F.expr(f\"regexp_extract_all(colB, '{pattern}' ,1)\"))\n\n\nThis should give you the expected output where only exact matches are extracted.\n",
"You can use the \\b word boundary metacharacter to match whole words only, and escape the dots with a backslash \\. in your regular expression.\nHere is an example:\nimport pyspark.sql.functions as F\n\nkeywords = [' b.o.o', ' a.b.a', ' titi']\n\n# Escape dots and add word boundaries\npattern = '(' + '|'.join([fr'\\b({k.replace(\".\", \"\\\\.\")})\\b' for k in keywords]) + ')'\n\ndf.withColumn('words', F.expr(f\"regexp_extract_all(colB, '{pattern}' ,1)))\n\nThis will match b.o.o, a.b.a, and titi as whole words, and will not match substrings like awbwa.\n",
"I finally figure it out, for some reason re.escape doesnt work, the solution was to add [] in between dots. thanks for answering !\n"
] | [
0,
0,
0
] | [] | [] | [
"apache_spark",
"extract",
"pyspark",
"python",
"regex"
] | stackoverflow_0074671615_apache_spark_extract_pyspark_python_regex.txt |
Q:
My discord bot is not responding to my commands
import discord
import os
client = discord.Client(intents=discord.Intents.default())
@client.event
async def on_ready():
print("We have logged in as {0.user}".format(client))
@client.event
async def on_message(message):
if message.author == client.user:
return
if message.content.startswith('$hello'):
channel = message.channel
await channel.send('Hello!')
client.run(os.getenv('TOKEN'))
I tried to make a discord bot using discord.py. The bot comes online and everything but does not respond to my messages. Can you tell me what is wrong?
A:
You seem to have an indentation error:
async def on_message(message):
if message.author == client.user:
return
if message.content.startswith('$hello'):
channel = message.channel
await channel.send('Hello!')
The last if-statement is never going to be executed. Instead, move it one indentation back such that:
async def on_message(message):
if message.author == client.user:
return
if message.content.startswith('$hello'):
channel = message.channel
await channel.send('Hello!')
A:
There are a few potential issues with the code that could be causing the Discord bot not to respond to commands.
First, the on_message event handler is indented incorrectly. The if statement that checks whether the message starts with '$hello' needs to sit inside the function body, at the same indentation level as the first if statement, not nested under the return. Otherwise the handler either returns early or never reaches the check, so no response is sent.
Here is an example of how the on_message function should be indented:
@client.event
async def on_message(message):
    if message.author == client.user:
        return

    if message.content.startswith('$hello'):
        channel = message.channel
        await channel.send('Hello!')
Another potential issue is that the TOKEN environment variable is not being set. The os.getenv function is used to get the value of the TOKEN environment variable, but if this variable is not set, the os.getenv function will return None, and the client.run function will not be able to authenticate the bot.
To fix this issue, you can set the TOKEN environment variable to the bot token that you obtained from the Discord Developer Portal. This can be done in a variety of ways, depending on your operating system and the method that you prefer to use.
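One common option (an illustration of my own, not something from the original post) is to keep the token in a .env file and load it with the python-dotenv package before calling client.run. A minimal sketch, assuming python-dotenv is installed and a .env file next to the script contains a line like TOKEN=your-bot-token:
import os

from dotenv import load_dotenv

load_dotenv()               # reads .env and copies its entries into os.environ
token = os.getenv('TOKEN')  # the same lookup the original script uses
if token is None:
    raise RuntimeError('TOKEN is not set; check your .env file or environment')
# client.run(token)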
| My discord bot is not responding to my commands | import discord
import os
client = discord.Client(intents=discord.Intents.default())
@client.event
async def on_ready():
print("We have logged in as {0.user}".format(client))
@client.event
async def on_message(message):
if message.author == client.user:
return
if message.content.startswith('$hello'):
channel = message.channel
await channel.send('Hello!')
client.run(os.getenv('TOKEN'))
I tried to make a discord bot using discord.py. The bot comes online and everything but does not respond to my messages. Can you tell me what is wrong?
| [
"You seem to have an indentation error:\nasync def on_message(message):\n if message.author == client.user:\n return\n\n if message.content.startswith('$hello'):\n channel = message.channel\n await channel.send('Hello!')\n\nThe last if-statement is never going to be executed. Instead, move it one indentation back such that:\nasync def on_message(message):\n if message.author == client.user:\n return\n\n if message.content.startswith('$hello'):\n channel = message.channel\n await channel.send('Hello!')\n\n",
"There are a few potential issues with the code that could be causing the Discord bot not to respond to commands.\nFirst, the on_message event handler function is indented incorrectly. The if statement that checks if the message starts with '$hello' should be at the same indentation level as the on_message function, not indented further. This is because the return statement that is indented further will cause the function to return immediately, without checking the if statement or sending a response.\nHere is an example of how the on_message function should be indented:\n@client.event\nasync def on_message(message): \nif message.author == client.user:\n return\n\nif message.content.startswith('$hello'):\n channel = message.channel\n await channel.send('Hello!')\n\nAnother potential issue is that the TOKEN environment variable is not being set. The os.getenv function is used to get the value of the TOKEN environment variable, but if this variable is not set, the os.getenv function will return None, and the client.run function will not be able to authenticate the bot.\nTo fix this issue, you can set the TOKEN environment variable to the bot token that you obtained from the Discord Developer Portal. This can be done in a variety of ways, depending on your operating system and the method that you prefer to use.\n"
] | [
0,
0
] | [] | [] | [
"discord.py",
"python"
] | stackoverflow_0074679033_discord.py_python.txt |
Q:
mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu
When I run a Python script directly through the terminal it gives me this error. I've already tried reinstalling numpy and it didn't work!
And when I tried to install the mkl-service package, it returned the same error. Can someone help me?
UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu
OpenMP had already been loaded by Python process is not assured. Please install mkl-service package, see http://github.com/IntelPython/mkl-service
from . import _distributor_init
Traceback (most recent call last):
File "c:\Users\teste.user\Desktop\Project-python\teste.py", line 4, in <module>
import pandas as pd
File "C:\Users\teste.user\Anaconda3\lib\site-packages\pandas\__init__.py", line 16, in <module>
raise ImportError(
ImportError: Unable to import required dependencies:
numpy:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.
We have compiled some common reasons and troubleshooting tips at:
https://numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.9 from "C:\Users\teste.user\Anaconda3\python.exe"
* The NumPy version is: "1.21.5"
and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.
Original error was: DLL load failed while importing _multiarray_umath: The specified module could not be found.
A:
This can be solved by resetting the package configuration with a forced reinstall of numpy.
conda install numpy --force-reinstall
A:
I was able to fix it by running the following commands to uninstall and reinstall the packages
pip uninstall matplotlib
pip uninstall pillow
pip uninstall numpy
pip install matplotlib
pip install pillow
pip install numpy
A:
Thanks for the help! I used this:
I was able to fix it by running the following commands to uninstall and reinstall the packages
pip uninstall matplotlib
pip uninstall pillow
pip uninstall numpy
pip install matplotlib
pip install pillow
pip install numpy
| mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu | When I run a Python script directly through the terminal it gives me this error. I've already tried reinstalling numpy and it didn't work!
And when I tried to install the mkl-service package, it returned the same error. Can someone help me?
UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu
OpenMP had already been loaded by Python process is not assured. Please install mkl-service package, see http://github.com/IntelPython/mkl-service
from . import _distributor_init
Traceback (most recent call last):
File "c:\Users\teste.user\Desktop\Project-python\teste.py", line 4, in <module>
import pandas as pd
File "C:\Users\teste.user\Anaconda3\lib\site-packages\pandas\__init__.py", line 16, in <module>
raise ImportError(
ImportError: Unable to import required dependencies:
numpy:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.
We have compiled some common reasons and troubleshooting tips at:
https://numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.9 from "C:\Users\teste.user\Anaconda3\python.exe"
* The NumPy version is: "1.21.5"
and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.
Original error was: DLL load failed while importing _multiarray_umath: The specified module could not be found.
| [
"Can be solved by resetting package configuration by force reinstall of numpy.\nconda install numpy --force-reinstall\n\n",
"I was able to fix it by running the following commands to uninstall and reinstall the packages\npip uninstall matplotlib\npip uninstall pillow\npip uninstall numpy\npip install matplotlib\npip install pillow\npip install numpy\n\n",
"Thanks for help! I used this:\nI was able to fix it by running the following commands to uninstall and reinstall the packages\npip uninstall matplotlib\npip uninstall pillow\npip uninstall numpy\npip install matplotlib\npip install pillow\npip install numpy\n"
] | [
4,
3,
0
] | [] | [] | [
"jupyter_notebook",
"python"
] | stackoverflow_0072858984_jupyter_notebook_python.txt |
Q:
Gtts library error. I don't know why this error are happening or how to fix them
I am trying to convert a PDF to an audio file, but whenever I run my code I get a bunch of errors from the gtts library. If there is a better library to use that does not sound like a robot, please let me know. The errors are https://pastebin.com/Uwnq1MgS and my code is
#Importing Libraries
#Importing Google Text to Speech library
from gtts import gTTS
#Importing PDF reader PyPDF2
import PyPDF2
#Open file Path
pdf_File = open('simple.pdf', 'rb')
#Create PDF Reader Object
pdf_Reader = PyPDF2.PdfFileReader(pdf_File)
count = pdf_Reader.numPages # counts number of pages in pdf
textList = []
#Extracting text data from each page of the pdf file
for i in range(count):
try:
page = pdf_Reader.getPage(i)
textList.append(page.extractText())
except:
pass
#Converting multiline text to single line text
textString = " ".join(textList)
print(textString)
#Set language to english (en)
language = 'en'
#Call GTTS
myAudio = gTTS(text=textString, lang=language, slow=False)
#Save as mp3 file
myAudio.save("Audio.mp3")
Can anyone help me?
I have tried nothing because I could not find anything on these errors.
A:
It looks like the issue is with the PyPDF2 library. The getPage() method is not able to extract the text from some pages in the PDF file, resulting in an error.
One solution could be to use the PyMuPDF library instead, which is a more powerful PDF manipulation library. You can install it using the following command:
pip install PyMuPDF
You can then use the page get_text() method from the PyMuPDF library to extract the text from each page in the PDF file. Here is an example of how your code could look using the PyMuPDF library:
# Importing Libraries
# Importing Google Text to Speech library
from gtts import gTTS
# Importing PDF reader PyMuPDF
import fitz
# Create PDF Reader Object (fitz.open takes the file path directly, so a separate open() call is not needed)
pdf_Reader = fitz.open('simple.pdf')
count = pdf_Reader.page_count # counts number of pages in pdf
textList = []
# Extracting text data from each page of the pdf file
for i in range(count):
page = pdf_Reader[i]
textList.append(page.get_text('text'))
# Converting multiline text to single line text
textString = " ".join(textList)
print(textString)
# Set language to english (en)
language = 'en'
# Call GTTS
myAudio = gTTS(text=textString, lang=language, slow=False)
# Save as mp3 file
myAudio.save("Audio.mp3")
This should fix the errors you were getting and allow you to extract the text from all pages in the PDF file.
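One extra safeguard worth adding, since gTTS typically raises an error when given empty input: check that the extraction actually produced text before calling gTTS (a small sketch reusing the variable names above):
# Guard against an empty extraction result before sending anything to gTTS
if not textString.strip():
    raise ValueError("No text could be extracted from the PDF, so there is nothing to convert to speech")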
| Gtts library error. I don't know why this error are happening or how to fix them | I am tried to convert pdf to an audio file but when ever I run my code I get a bunch error from the gtts liberary. If there is a better liberary to use that does not sound like a robot please let me know the errors are https://pastebin.com/Uwnq1MgS and my code is
#Importing Libraries
#Importing Google Text to Speech library
from gtts import gTTS
#Importing PDF reader PyPDF2
import PyPDF2
#Open file Path
pdf_File = open('simple.pdf', 'rb')
#Create PDF Reader Object
pdf_Reader = PyPDF2.PdfFileReader(pdf_File)
count = pdf_Reader.numPages # counts number of pages in pdf
textList = []
#Extracting text data from each page of the pdf file
for i in range(count):
try:
page = pdf_Reader.getPage(i)
textList.append(page.extractText())
except:
pass
#Converting multiline text to single line text
textString = " ".join(textList)
print(textString)
#Set language to english (en)
language = 'en'
#Call GTTS
myAudio = gTTS(text=textString, lang=language, slow=False)
#Save as mp3 file
myAudio.save("Audio.mp3")
Can anyone help me?
I have tried nothing because I could not find anything on this errors.
| [
"It looks like the issue is with the PyPDF2 library. The getPage() method is not able to extract the text from some pages in the PDF file, resulting in an error.\nOne solution could be to use the PyMuPDF library instead, which is a more powerful PDF manipulation library. You can install it using the following command:\npip install PyMuPDF\n\nYou can then use the text() method from the PyMuPDF library to extract the text from each page in the PDF file. Here is an example of how your code could look like using the PyMuPDF library:\n# Importing Libraries\n# Importing Google Text to Speech library\nfrom gtts import gTTS\n\n# Importing PDF reader PyMuPDF\nimport fitz\n\n# Open file Path\npdf_File = open('simple.pdf', 'rb')\n\n# Create PDF Reader Object\npdf_Reader = fitz.open(pdf_File)\ncount = pdf_Reader.page_count # counts number of pages in pdf\ntextList = []\n\n# Extracting text data from each page of the pdf file\nfor i in range(count):\n page = pdf_Reader[i]\n textList.append(page.get_text('text'))\n\n# Converting multiline text to single line text\ntextString = \" \".join(textList)\n\nprint(textString)\n\n# Set language to english (en)\nlanguage = 'en'\n\n# Call GTTS\nmyAudio = gTTS(text=textString, lang=language, slow=False)\n\n# Save as mp3 file\nmyAudio.save(\"Audio.mp3\")\n\nThis should fix the errors you were getting and allow you to extract the text from all pages in the PDF file.\n"
] | [
0
] | [] | [] | [
"gtts",
"python"
] | stackoverflow_0074679139_gtts_python.txt |
Q:
NotImplementedError: Conversion 'rpy2py' not defined for objects of type '' only after I run the code twice
If I run the following code once it works.
import numpy as np
import rpy2.robjects as robjects
x = np.linspace(0, 1, num = 11, endpoint=True)
y = np.array([-1,1,1, -1,1,0, .5,.5,.4, .5, -1])
r_x = robjects.FloatVector(x)
r_y = robjects.FloatVector(y)
r_smooth_spline = robjects.r['smooth.spline'] #extract R function
spline_xy = r_smooth_spline(x=r_x, y=r_y)
print('x =', x)
print('ysplined =',np.array(robjects.r['predict'](spline_xy,robjects.FloatVector(x)).rx2('y')))
If I run this cell twice in a Jupyter notebook, I obtain the following error message:
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-2-5efeb940cd16> in <module>
6 r_x = robjects.FloatVector(x)
7 r_y = robjects.FloatVector(y)
----> 8 r_smooth_spline = robjects.r['smooth.spline'] #extract R function
9 spline_xy = r_smooth_spline(x=r_x, y=r_y)
10 print('x =', x)
2 frames
/usr/local/lib/python3.8/dist-packages/rpy2/robjects/conversion.py in _rpy2py(obj)
250 non-rpy2) objects.
251 """
--> 252 raise NotImplementedError(
253 "Conversion 'rpy2py' not defined for objects of type '%s'" %
254 str(type(obj))
NotImplementedError: Conversion 'rpy2py' not defined for objects of type '<class 'rpy2.rinterface.SexpClosure'>'
This code always used to run without problems multiple times. Is a new version of Python or rpy2 perhaps the problem? How can I fix this so that I am able to run the code multiple times within one Jupyter notebook?
A:
The easiest fix is to run once:
!pip install -Iv rpy2==3.4.2
at the start of the Jupyter notebook in order to roll back to version 3.4.2, where this problem did not occur (see Rpy2 Error depends on execution method: NotImplementedError: Conversion "rpy2py" not defined). For more information on how to change the version of a Python package, see Rollback to specific version of a python package in Google Colab and Installing specific package version with pip.
It would still be interesting to understand how one could use the latest version of rpy2 correctly.
A:
This is caused by an issue in older releases of ipykernel. I'd recommend upgrading it rather than downgrading rpy2.
See https://github.com/rpy2/rpy2/issues/952
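For reference, the upgrade is a one-liner run in the same environment the notebook kernel uses, followed by a kernel restart (a sketch; use whichever package manager you installed ipykernel with):
!pip install --upgrade ipykernel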
| NotImplementedError: Conversion 'rpy2py' not defined for objects of type '' only after I run the code twice | If I run the following code once it works.
import numpy as np
import rpy2.robjects as robjects
x = np.linspace(0, 1, num = 11, endpoint=True)
y = np.array([-1,1,1, -1,1,0, .5,.5,.4, .5, -1])
r_x = robjects.FloatVector(x)
r_y = robjects.FloatVector(y)
r_smooth_spline = robjects.r['smooth.spline'] #extract R function
spline_xy = r_smooth_spline(x=r_x, y=r_y)
print('x =', x)
print('ysplined =',np.array(robjects.r['predict'](spline_xy,robjects.FloatVector(x)).rx2('y')))
If I run this cell twice in a Jupyter notebook, I obtain the following error message:
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-2-5efeb940cd16> in <module>
6 r_x = robjects.FloatVector(x)
7 r_y = robjects.FloatVector(y)
----> 8 r_smooth_spline = robjects.r['smooth.spline'] #extract R function
9 spline_xy = r_smooth_spline(x=r_x, y=r_y)
10 print('x =', x)
2 frames
/usr/local/lib/python3.8/dist-packages/rpy2/robjects/conversion.py in _rpy2py(obj)
250 non-rpy2) objects.
251 """
--> 252 raise NotImplementedError(
253 "Conversion 'rpy2py' not defined for objects of type '%s'" %
254 str(type(obj))
NotImplementedError: Conversion 'rpy2py' not defined for objects of type '<class 'rpy2.rinterface.SexpClosure'>'
This code always used to run without problems multiple times. Probably a new version of python or rpy2 is the problem? How can I fix the problem such that I am able to run this code multiple times within one Jupyter notebook.
| [
"The easiest fix is to run once:\n!pip install -Iv rpy2==3.4.2\n\nat the start of the Jupyter-notebook in order to rollback to version 3.4.2, where this problem did not occur (see Rpy2 Error depends on execution method: NotImplementedError: Conversion \"rpy2py\" not defined). For more information how to cahge the version of a python package see Rollback to specific version of a python package in Goolge Colab and Installing specific package version with pip)\nIt would still be interesting to understand how one could use the latest version of rpy2 correctly.\n",
"This is cause by an issue in older releases of ipykernel. I'd recommend to upgrade it rather than downgrade rpy2.\nSee https://github.com/rpy2/rpy2/issues/952\n"
] | [
0,
0
] | [] | [] | [
"jupyter_notebook",
"python",
"rpy2"
] | stackoverflow_0074678378_jupyter_notebook_python_rpy2.txt |
Q:
Cannot able to run cqlsh due to python attribute error
I am not able to execute the cqlsh command on a Mac M1-based system.
% bin/cqlsh
Traceback (most recent call last):
File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/cqlsh.py", line 159, in <module>
from cqlshlib import cql3handling, cqlhandling, pylexotron, sslhandling, cqlshhandling
File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/../pylib/cqlshlib/cql3handling.py", line 19, in <module>
from cqlshlib.cqlhandling import CqlParsingRuleSet, Hint
File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/../pylib/cqlshlib/cqlhandling.py", line 23, in <module>
from cqlshlib import pylexotron, util
File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/../pylib/cqlshlib/pylexotron.py", line 342, in <module>
class ParsingRuleSet:
File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/../pylib/cqlshlib/pylexotron.py", line 343, in ParsingRuleSet
RuleSpecScanner = SaferScanner([
^^^^^^^^^^^^^^
File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/../pylib/cqlshlib/saferscanner.py", line 91, in __init__
s = re.sre_parse.State()
^^^^^^^^^^^^
AttributeError: module 're' has no attribute 'sre_parse'
A:
Looks like there was a breaking change in Python 3.11: the re module no longer exposes the internal sre_parse attribute that cqlsh's SaferScanner relies on. I have created a ticket for this on the Cassandra project (CASSANDRA-18088).
In the interim, downgrade your local Python to 3.10, and you should be fine.
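If you manage Python with conda, a low-effort way to do that without touching the system interpreter is a dedicated 3.10 environment (a sketch; the environment name is arbitrary):
conda create -n cqlsh-py310 python=3.10
conda activate cqlsh-py310
bin/cqlsh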
| Cannot able to run cqlsh due to python attribute error | Cannot able to execute the command cqlsh in mac m1 based system.
% bin/cqlsh
Traceback (most recent call last):
File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/cqlsh.py", line 159, in <module>
from cqlshlib import cql3handling, cqlhandling, pylexotron, sslhandling, cqlshhandling
File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/../pylib/cqlshlib/cql3handling.py", line 19, in <module>
from cqlshlib.cqlhandling import CqlParsingRuleSet, Hint
File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/../pylib/cqlshlib/cqlhandling.py", line 23, in <module>
from cqlshlib import pylexotron, util
File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/../pylib/cqlshlib/pylexotron.py", line 342, in <module>
class ParsingRuleSet:
File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/../pylib/cqlshlib/pylexotron.py", line 343, in ParsingRuleSet
RuleSpecScanner = SaferScanner([
^^^^^^^^^^^^^^
File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/../pylib/cqlshlib/saferscanner.py", line 91, in __init__
s = re.sre_parse.State()
^^^^^^^^^^^^
AttributeError: module 're' has no attribute 'sre_parse'
| [
"Looks like there may have been a breaking change introduced to Python's synchronized regex engine (SRE) with Python 3.11. I have created a ticket for this on the Cassandra project (CASSANDRA-18088).\nIn the interim, downgrade your local Python to 3.10, and you should be fine.\n"
] | [
0
] | [] | [] | [
"cassandra",
"cassandra_4.0",
"cqlsh",
"python"
] | stackoverflow_0074673247_cassandra_cassandra_4.0_cqlsh_python.txt |
Q:
Add a column in my dataframe based on the name of Excel file
I want to import many Excel files into one single dataframe, and I want a column in which every row contains the name of the original Excel file, in Python.
This is what I have tried:
df_final=df_final.assign(Année='2021')
df_final=df_final.assign(Mois='Octobre')
But each time I am obliged to import a single Excel file, add these two columns, and then move on to the next one.
How can I automate this into one function?
A:
In order to add a value to each dataframe based on the filename, one simple approach is to create a list of values equal to the number of rows. Below is a simple example assuming the dataframes have the same structure.
Each sample file I have created looks like this:
file1.xlsx
Some Data
0 5
1 3
2 2
3 3
4 6
5 5
file2.xlsx
Some Data
0 6
1 8
2 5
3 4
4 5
5 9
Example code:
import pandas as pd, os
source_folder = r"\PATH\TO\FILES"
# Create empty list for the dataframes
df_list = []
# Loop though each file in the folder
for file in os.listdir(source_folder):
# Create full file path
file_path = os.path.join(source_folder, file)
# Create dataframe
x_df = pd.read_excel(file_path)
# Create new dataframe column for filename based on the number of rows in dataframe
x_df["filename"] = [file for _ in range(x_df.shape[0])]
# Add dataframe to the list
df_list.append(x_df)
# Concatenate the list of dataframes into a single dataframe
final_df = pd.concat(df_list)
print(final_df)
The final result is:
Some Data filename
0 5 file1.xlsx
1 3 file1.xlsx
2 2 file1.xlsx
3 3 file1.xlsx
4 6 file1.xlsx
5 5 file1.xlsx
0 6 file2.xlsx
1 8 file2.xlsx
2 5 file2.xlsx
3 4 file2.xlsx
4 5 file2.xlsx
5 9 file2.xlsx
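If the Année and Mois values from the question are encoded in the file names (for example ventes_2021_Octobre.xlsx), the same loop can derive them from each name instead of hard-coding them. This is only a sketch under that naming assumption — the pattern, folder path, and column names are hypothetical:
import os
import pandas as pd

source_folder = r"\PATH\TO\FILES"
df_list = []

for file in os.listdir(source_folder):
    if not file.endswith(".xlsx"):
        continue
    x_df = pd.read_excel(os.path.join(source_folder, file))
    # Assumed pattern: <prefix>_<Année>_<Mois>.xlsx
    stem_parts = os.path.splitext(file)[0].split("_")
    x_df["Année"] = stem_parts[-2]
    x_df["Mois"] = stem_parts[-1]
    x_df["filename"] = file  # pandas broadcasts the scalar to every row
    df_list.append(x_df)

final_df = pd.concat(df_list, ignore_index=True)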
| Add a column in my dataframe based on the name of Excel file | I want to import many excel files into one single dataframe and I want a column where all the rows are the same as the original excel file name in python
this is what i have tried
df_final=df_final.assign(Année='2021')
df_final=df_final.assign(Mois='Octobre')
But I am obliged each time to imort a single excel file add these two columns and then move on to the next one.
How can i automate this into one function ?
| [
"in order to add a value to each dataframe based on the filename you need to create a list of values equal to the number of rows. Below is a simple example assuming the dataframes are the same.\nEach sample file I have created looks like this:\nfile1.xlsx\n Some Data\n0 5\n1 3\n2 2\n3 3\n4 6\n5 5\n\nfile2.xlsx\n Some Data\n0 6\n1 8\n2 5\n3 4\n4 5\n5 9\n\nExample code:\nimport pandas as pd, os\n\nsource_folder = r\"\\PATH\\TO\\FILES\"\n\n# Create empty list for the dataframes\ndf_list = []\n\n# Loop though each file in the folder\nfor file in os.listdir(source_folder):\n\n # Create full file path\n file_path = os.path.join(source_folder, file)\n\n # Create dataframe\n x_df = pd.read_excel(file_path)\n\n # Create new dataframe column for filename based on the number of rows in dataframe\n x_df[\"filename\"] = [file for _ in range(x_df.shape[0])]\n\n # Add dataframe to the list\n df_list.append(x_df)\n\n# Concatonate the list of dataframes to a single dataframe\nfinal_df = pd.concat(df_list)\n\nprint(final_df)\n\nThe final result is:\n Some Data filename\n0 5 file1.xlsx\n1 3 file1.xlsx\n2 2 file1.xlsx\n3 3 file1.xlsx\n4 6 file1.xlsx\n5 5 file1.xlsx\n0 6 file2.xlsx\n1 8 file2.xlsx\n2 5 file2.xlsx\n3 4 file2.xlsx\n4 5 file2.xlsx\n5 9 file2.xlsx\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074677969_python.txt |