qid (int64) | question (string) | date (string) | metadata (sequence) | response_j (string) | response_k (string) | __index_level_0__ (int64)
---|---|---|---|---|---|---
57,624,355 | I deployed a Python app to Google Cloud Functions and got this very vague error message:
```
$ gcloud functions deploy parking_photo --runtime python37 --trigger-http
Deploying function (may take a while - up to 2 minutes)...failed.
ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Function failed on loading user code. Error message: 'main'
```
I don't know what is wrong, and searching around gives no results. Can anyone help?
I believe my code layout is correct:
```
$ tree
.
├── main.py
├── poetry.lock
├── pyproject.toml
├── README.rst
├── requirements.txt
└── tests
    ├── __init__.py
    └── test_burrowingowl.py
```
My `main.py` file has a function that matches the function name:
```py
import operator
from datetime import datetime
import logbook
from flask import Request, abort, redirect
from pydantic import ValidationError
from pydantic.dataclasses import dataclass
from google.cloud import storage
from pytz import timezone
logger = logbook.Logger(__name__)
storage_client = storage.Client()
@dataclass
class Form:
    bucket: str = ...
    parkinglot: str = ...
    space_id: int = ...
    tz: str = ...

def parking_photo(request: Request):
    # Some code
    return
```
### Update
Thank you for the answers. This topic slipped out of my sight while I wasn't receiving notifications from StackOverflow.
Last year, I fixed it by just dropping the use of `dataclass`. At that time, Google claimed to support Python 3.7 but actually didn't, which is why `dataclass` didn't work.
By the time you try to reproduce this issue, Google may have already fixed the Python 3.7 compatibility. | 2019/08/23 | [
"https://Stackoverflow.com/questions/57624355",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/502780/"
] | Most likely your function is raising a `FileNotFoundError`, and Cloud Functions interprets this as `main.py` not existing. A minimal example that will cause the same error:
```
$ cat main.py
with open('missing.file'):
    pass

def test(request):
    return 'Hello World!'
```
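One way to make the failure visible instead of fatal is to guard the module-level work and log what went wrong (a sketch, assuming the hypothetical `missing.file` from the example above; at import time the logged message ends up in the function's logs):
```
import logging

data = None
try:
    with open('missing.file') as f:   # hypothetical file from the example above
        data = f.read()
except FileNotFoundError:
    # the deploy now succeeds, and the real cause shows up in the logs
    logging.exception("could not open data file at import time")

def test(request):
    return 'Hello World!'
```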
You should check to make sure that any files you're trying to open are included with your function. You can `try`/`except` for this error and log a message to figure it out as well, as sketched above. | I've tried to reproduce the error that you are describing by deploying a new Cloud Function without any function matching the name of the CF, and I got the following error:
ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Function failed on loading user code. Error message: File main.py is expected to contain a function named function-test
I think it's a similar one, since the error code is the same. My function was deployed with an error; later I edited the name in the Cloud Console, and then I got a correct deployment.
I would suggest validating, in the Cloud Console, the function to be called, and checking that it's correctly set.
Another approach would be to use the [`--entry-point`](https://cloud.google.com/sdk/gcloud/reference/functions/deploy#--entry-point) parameter, indicating the name of the function to be executed. | 455 |
62,579,243 | I know my question has a lot of answers on the internet, but it seems I can't find a good one, so I will try to explain what I have and hope for the best.
What I'm trying to do is read a big JSON file that might have a more complex structure ("nested objects with big arrays") than this one, but for a simple example:
```
{
    "data": {
        "time": [
            1,
            2,
            3,
            4,
            5,
            ...
        ],
        "values": [
            1,
            2,
            3,
            4,
            6,
            ...
        ]
    }
}
```
This file might be 200M or more, and I'm using `file_get_contents()` and `json_decode()` to read the data from the file.
I then put the result in a variable and loop over the time array, taking each time value together with the current index to get the corresponding value by index from the values array, then saving the time and the value in the database. But this takes so much CPU and memory. Is there a better way to do this:
better functions to use, a better JSON structure, or maybe a better data format than JSON?
my code:
```
$data = json_decode(file_get_contents(storage_path("test/ts/ts_big_data.json")), true);
foreach(data["time"] as $timeIndex => timeValue) {
saveInDataBase(timeValue, data["values"][timeIndex])
}
```
Thanks in advance for any help.
**Update 06/29/2020:**
I have another, more complex JSON structure example:
```
{
    "data": {
        "set_1": {
            "sub_set_1": {
                "info_1": {
                    "details_1": {
                        "data_1": [1,2,3,4,5,...],
                        "data_2": [1,2,3,4,5,...],
                        "data_3": [1,2,3,4,5,...],
                        "data_4": [1,2,3,4,5,...],
                        "data_5": 10254552
                    },
                    "details_2": [
                        [1,2,3,4,5,...],
                        [1,2,3,4,5,...],
                        [1,2,3,4,5,...],
                    ]
                },
                "info_2": {
                    "details_1": {
                        "data_1": {
                            "arr_1": [1,2,3,4,5,...],
                            "arr_2": [1,2,3,4,5,...]
                        },
                        "data_2": {
                            "arr_1": [1,2,3,4,5,...],
                            "arr_2": [1,2,3,4,5,...]
                        },
                        "data_5": {
                            "text": "some text"
                        }
                    },
                    "details_2": [1,2,3,4,5,...]
                }
            }, ...
        }, ...
    }
}
```
The file size might be around 500MB or more, and the arrays inside this JSON file might hold around 100MB of data or more.
My question is: how can I get any piece of this data and navigate between the nodes in the most efficient way, without taking much RAM and CPU? I can't read the file line by line, because I need to be able to get any piece of the data when I have to.
Is Python, for example, more suitable for handling this big data, more efficiently than PHP?
Please provide a detailed answer if you can; I think it will be of much help for everyone looking to do this big-data stuff with PHP. | 2020/06/25 | [
"https://Stackoverflow.com/questions/62579243",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2440284/"
] | >
> and my question is: how can I get any piece of this data and navigate between the nodes in the most efficient way, without taking much RAM and CPU? I can't read the file line by line, because I need to be able to get any piece of the data when I have to,
>
>
>
It's plain text JSON and you have no indexes, so it's impossible to parse your data without iterating it line-by-line. The solution is to serialize your data once and for all and store it in a database (I'm thinking SQLite for fast setup).
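For the Python side of the question, a sketch of that approach: stream the arrays with the third-party `ijson` package (a streaming parser, so the whole file never sits in memory) and write the pairs into SQLite with the standard `sqlite3` module. The file name and table layout here are assumptions for illustration, not something from this answer:
```
import sqlite3
import ijson  # third-party streaming JSON parser: pip install ijson

db = sqlite3.connect("series.db")
db.execute("CREATE TABLE IF NOT EXISTS points (t REAL, v REAL)")

# Two handles on the same file: each ijson stream is consumed front to back,
# so "time" and "values" are walked in parallel without loading either array.
with open("ts_big_data.json", "rb") as f_t, open("ts_big_data.json", "rb") as f_v:
    times = ijson.items(f_t, "data.time.item")
    values = ijson.items(f_v, "data.values.item")
    db.executemany("INSERT INTO points VALUES (?, ?)",
                   ((float(t), float(v)) for t, v in zip(times, values)))
db.commit()
```
Once the data is in SQLite, individual pieces can be fetched with indexed queries instead of re-parsing the JSON every time.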
If you absolutely can't store your data in a database, or can't retrieve it in SQLite format, you have no other choice but to create a [queue job](https://laravel.com/docs/7.x/queues) which will parse it in time. | **Try Reducing Your Bulk Data Complexity For Faster File I/O**
JSON is a great format to store data in, but it comes at the cost of needing to read the entire file to parse it.
Making your data structure simpler but more spread out across several files allows you to read a file line by line, which is much faster than all at once. This also comes with the benefit of not needing to store the entire file in RAM, so it is friendlier to resource-limited environments.
**This might look something like this:**
objects.json
```
{
    "data": {
        "times_file": "/some/path/objects/object-123/object-123-times.csv",
        "values_file": "/some/path/objects/object-123/object-123-values.csv"
    }
}
```
object-123-times.csv
```
1
2
3
4
...
```
This would allow you to store your bulk data in a simpler, easier-to-access format. You could then use something like [`fgetcsv()`](https://www.php.net/manual/en/function.fgetcsv.php) to parse each line. | 457 |
39,942,061 | I'm having a weird problem with a piece of python code.
The idea how it should work:
1. a barcode is entered (now hardcode for the moment);
2. barcode is looked up in local mysqldb, if not found, the barcode is looked up via api from datakick, if it's not found there either, step 3
3. i want to add the barcode to my local mysqldatabase and request some input.
Now the problem: it works! As long as you fill in numbers for `naamProduct`. If you use letters (e.g. I filled in Bla as the product name), I get a weird SQL error `(_mysql_exceptions.OperationalError: (1054, "Unknown column 'Bla' in 'field.list'")`
I have checked the tables in MySQL and the types are all OK. The column the name should end up in is text. I have also tried a hardcoded string, which works fine. Using the SQL query from the MySQL console also works perfectly. My guess is that something is going wrong with the input part, but I can't figure out what.
(code is still not really tidy with the exceptions, I know ;) Working on it step by step)
```
import json
import urllib2
import MySQLdb as mdb

def barcodeFunctie(sql):
    con = mdb.connect ('localhost', 'python', 'python', 'stock')
    cur = con.cursor()
    cur.execute(sql)
    ver = cur.fetchone();
    con.commit()
    con.close()
    return ver

#barcode = '8710624957278'
#barcode = '2147483647'
barcode = '123'

#look up by barcode. The barcode is also the key in the table.
sql = "select * from Voorraad where Id=%s" % barcode
if barcodeFunctie(sql) == "None":
    print "geen output"
else:
    try:
        url='https://www.datakick.org/api/items/'+barcode
        data = json.load(urllib2.urlopen(url))
        print data['brand_name'], data['name']
    except:
        #barcode is in neither our own db nor datakick's db, add it to both
        print barcode, " barcode als input"
        naamProduct = str(raw_input("Wat is de naam van het product? "))
        hoeveelheidProduct = raw_input("Hoeveel inhoud heeft het product? ")
        sql = "insert into Voorraad (Id, NaamProduct,HoeveelHeidProduct) values (%s,%s,%s)" % (barcode, naamProduct, hoeveelheidProduct)
        barcodeFunctie(sql)
        print "meuktoegevoegd! :D"
```
| 2016/10/09 | [
"https://Stackoverflow.com/questions/39942061",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6944323/"
] | `UNORDERED` essentially means that the collector is both associative (required by the spec) and commutative (not required).
Associativity allows splitting the computation into subparts and then combining them into the full result, but requires the combining step to be strictly ordered. Examine this snippet from the [docs](https://docs.oracle.com/javase/8/docs/api/java/util/stream/Collector.html):
```
A a2 = supplier.get();
accumulator.accept(a2, t1);
A a3 = supplier.get();
accumulator.accept(a3, t2);
R r2 = finisher.apply(combiner.apply(a2, a3)); // result with splitting
```
In the last step, `combiner.apply(a2, a3)`, the arguments must appear in exactly this order, which means that the entire computation pipeline must track the order and respect it in the end.
Another way of saying this is that the tree we get from recursive splitting must be ordered.
On the other hand, if the combining operation is commutative, we can combine any subpart with any other, in no particular order, and always obtain the same result. Clearly this leads to many optimization opportunities in both space and time dimensions.
It should be noted that there are `UNORDERED` collectors in the JDK which don't guarantee commutativity. The main category is the "higher-order" collectors which are composed with other downstream collectors, but they don't enforce the `UNORDERED` property on them. | The inner `Collector.Characteristics` class itself is fairly terse in its description, but if you spend a few seconds exploring the context you will notice that the containing [Collector](https://docs.oracle.com/javase/8/docs/api/java/util/stream/Collector.html) interface provides additional information
>
> For collectors that do not have the UNORDERED characteristic, two accumulated results a1 and a2 are equivalent if finisher.apply(a1).equals(finisher.apply(a2)). For unordered collectors, equivalence is relaxed to allow for non-equality related to differences in order. (For example, an unordered collector that accumulated elements to a List would consider two lists equivalent if they contained the same elements, ignoring order.)
>
>
>
---
>
> In OpenJDK looks like reducing operations (min, sum, avg) have empty characteristics, I expected to find there at least CONCURRENT and UNORDERED.
>
>
>
At least doubles summation and averages are definitely ordered and not concurrent, because the summation logic uses subresult merging, not a thread-safe accumulator. | 467 |
15,526,996 | After installing the latest [Mac OSX 64-bit Anaconda Python distribution](http://continuum.io/downloads.html), I keep getting a ValueError when trying to start the IPython Notebook.
Starting ipython works fine:
```
3-millerc-~:ipython
Python 2.7.3 |Anaconda 1.4.0 (x86_64)| (default, Feb 25 2013, 18:45:56)
Type "copyright", "credits" or "license" for more information.
IPython 0.13.1 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
```
But starting ipython notebook:
```
4-millerc-~:ipython notebook
```
Results in the ValueError (with traceback):
```
Traceback (most recent call last):
File "/Users/millerc/anaconda/bin/ipython", line 7, in <module>
launch_new_instance()
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/terminal/ipapp.py", line 388, in launch_new_instance
app.initialize()
File "<string>", line 2, in initialize
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/terminal/ipapp.py", line 313, in initialize
super(TerminalIPythonApp, self).initialize(argv)
File "<string>", line 2, in initialize
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/core/application.py", line 325, in initialize
self.parse_command_line(argv)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/terminal/ipapp.py", line 308, in parse_command_line
return super(TerminalIPythonApp, self).parse_command_line(argv)
File "<string>", line 2, in parse_command_line
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 420, in parse_command_line
return self.initialize_subcommand(subc, subargv)
File "<string>", line 2, in initialize_subcommand
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 352, in initialize_subcommand
subapp = import_item(subapp)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/utils/importstring.py", line 40, in import_item
module = __import__(package,fromlist=[obj])
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/html/notebook/notebookapp.py", line 46, in <module>
from .handlers import (LoginHandler, LogoutHandler,
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/html/notebook/handlers.py", line 36, in <module>
from docutils.core import publish_string
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/core.py", line 20, in <module>
from docutils import frontend, io, utils, readers, writers
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/frontend.py", line 41, in <module>
import docutils.utils
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/utils/__init__.py", line 20, in <module>
from docutils.io import FileOutput
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/io.py", line 18, in <module>
from docutils.utils.error_reporting import locale_encoding, ErrorString, ErrorOutput
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/utils/error_reporting.py", line 47, in <module>
locale_encoding = locale.getlocale()[1] or locale.getdefaultlocale()[1]
File "/Users/millerc/anaconda/lib/python2.7/locale.py", line 503, in getdefaultlocale
return _parse_localename(localename)
File "/Users/millerc/anaconda/lib/python2.7/locale.py", line 435, in _parse_localename
raise ValueError, 'unknown locale: %s' % localename
ValueError: unknown locale: UTF-8
```
Running the `locale` command from the terminal:
```
5-millerc-~:locale
LANG=
LC_COLLATE="C"
LC_CTYPE="UTF-8"
LC_MESSAGES="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_ALL=
``` | 2013/03/20 | [
"https://Stackoverflow.com/questions/15526996",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/655733/"
] | I summarize here the solution to be found on: <http://blog.lobraun.de/2009/04/11/mercurial-on-mac-os-x-valueerror-unknown-locale-utf-8/>
I added these lines to my `.bash_profile`:
```
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
```
I reloaded the profile:
```
source ~/.bash_profile
```
I then ran `ipython` again:
```
ipython notebook
```
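If the exports took effect, Python's `locale` module should now resolve the default locale instead of raising (a quick sanity check; the exact tuple depends on the locale you exported):
```
import locale
print(locale.getdefaultlocale())
# e.g. ('en_US', 'UTF-8') -- previously this raised ValueError: unknown locale: UTF-8
```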
Changing locales
----------------
The above will work for the English language in a US locale. One may want different settings.
At the risk of stating the obvious, to discover the current settings for your system, use:
```
$ locale
```
And to retrieve a list of all valid settings on your system:
```
$ locale -a
```
Then choose your preferred locale. For example, for a Swiss French locale, the solution would look like this:
```
export LC_ALL=fr_CH.UTF-8
export LANG=fr_CH.UTF-8
``` | As your `LC_CTYPE` is wrong, you should find out where that wrong value is set and change it to something like `en_US.UTF-8`. | 469 |
26,005,454 | I am creating a fast method of generating a list of primes in the range(0, limit+1). In the function I end up removing all integers in the list named removable from the list named primes. I am looking for a fast and pythonic way of removing the integers, knowing that both lists are always sorted.
I might be wrong, but I believe list.remove(n) iterates over the list comparing each element with n, meaning that the following code runs in O(n^2) time.
```
# removable and primes are both sorted lists of integers
for composite in removable:
    primes.remove(composite)
```
Based on my assumption (which could be wrong, so please confirm whether or not it is correct) and the fact that both lists are always sorted, I would think that the following code runs faster, since it only loops over the list once, for O(n) time. However, it is not at all pythonic or clean.
```
i = 0
j = 0
while i < len(primes) and j < len(removable):
    if primes[i] == removable[j]:
        primes = primes[:i] + primes[i+1:]
        j += 1
    else:
        i += 1
```
Is there perhaps a built-in function or simpler way of doing this? And what is the fastest way?
Side notes: I have not actually timed the functions or code above. Also, it doesn't matter if the list removable is changed/destroyed in the process.
For anyone interested, the full function is below:
```
import math

# returns a list of primes in range(0, limit+1)
def fastPrimeList(limit):
    if limit < 2:
        return list()
    sqrtLimit = int(math.ceil(math.sqrt(limit)))
    primes = [2] + range(3, limit+1, 2)
    index = 1
    while primes[index] <= sqrtLimit:
        removable = list()
        index2 = index
        while primes[index] * primes[index2] <= limit:
            composite = primes[index] * primes[index2]
            removable.append(composite)
            index2 += 1
        for composite in removable:
            primes.remove(composite)
        index += 1
    return primes
``` | 2014/09/23 | [
"https://Stackoverflow.com/questions/26005454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3987360/"
] | This is quite fast and clean, it does `O(n)` set membership checks, and in amortized time it runs in `O(n)` (first line is `O(n)` amortized, second line is `O(n * 1)` amortized, because a membership check is `O(1)` amortized):
```
removable_set = set(removable)
primes = [p for p in primes if p not in removable_set]
```
Here is the modification of your 2nd solution. It does `O(n)` basic operations (worst case):
```
tmp = []
i = j = 0
while i < len(primes) and j < len(removable):
    if primes[i] < removable[j]:
        tmp.append(primes[i])
        i += 1
    elif primes[i] == removable[j]:
        i += 1
    else:
        j += 1
primes[:i] = tmp
del tmp
```
Please note that constants also matter. The Python interpreter is quite slow (i.e. with a large constant) to execute Python code. The 2nd solution has lots of Python code, and it can indeed be slower for small practical values of n than the solution with `set`s, because the `set` operations are implemented in C, thus they are fast (i.e. with a small constant).
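A minimal way to check that on your own machine is to time the candidates with the standard `timeit` module (a sketch; the dataset sizes are arbitrary, and `with_set` is a name made up here for the set-based approach above):
```
import random
import timeit

primes = sorted(random.sample(range(10**6), 50000))
removable = sorted(random.sample(primes, 10000))

def with_set():
    removable_set = set(removable)
    return [p for p in primes if p not in removable_set]

# time the merge-based version from above the same way and compare
print(timeit.timeit(with_set, number=100))
```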
If you have multiple working solutions, run them on typical input sizes, and measure the time. You may get surprised about their relative speed, often it is not what you would predict. | The most important thing here is to remove the quadratic behavior. You have this for two reasons.
First, calling `remove` searches the entire list for values to remove. Doing this takes linear time, and you're doing it once for each element in `removable`, so your total time is `O(NM)` (where `N` is the length of `primes` and `M` is the length of `removable`).
Second, removing elements from the middle of a list forces you to shift the whole rest of the list up one slot. So, each one takes linear time, and again you're doing it `M` times, so again it's `O(NM)`.
---
How can you avoid these?
For the first, you either need to take advantage of the sorting, or just use something that allows you to do constant-time lookups instead of linear-time, like a `set`.
For the second, you either need to create a list of indices to delete and then do a second pass to move each element up the appropriate number of indices all at once, or just build a new list instead of trying to mutate the original in-place.
So, there are a variety of options here. Which one is best? It almost certainly doesn't matter; changing your `O(NM)` time to just `O(N+M)` will probably be more than enough of an optimization that you're happy with the results. But if you need to squeeze out more performance, then you'll have to implement all of them and test them on realistic data.
The only one of these that I think isn't obvious is how to "use the sorting". The idea is to use the same kind of staggered-zip iteration that you'd use in a merge sort, like this:
```
def sorted_subtract(seq1, seq2):
    # yield the elements of sorted seq1 that are not in sorted seq2
    i1, i2 = 0, 0
    while i1 < len(seq1):
        if i2 == len(seq2) or seq1[i1] < seq2[i2]:
            yield seq1[i1]      # not in seq2: keep it
            i1 += 1
        elif seq1[i1] == seq2[i2]:
            i1 += 1             # present in seq2: skip it
        else:
            i2 += 1             # advance seq2 until it catches up
``` | 475 |
59,160,291 | Is there a way to simplify these static methods in Python? I'm looking to reduce typing out the arguments every time I need to use a function.
```
class Ibeam:
    def __init__ (self, b1, tf1, tw, h, b2, tf2, rt, rb):
        self.b1 = b1
        self.tf1 = tf1
        self.tw = tw
        self.h = h
        self.b2 = b2
        self.tf2 = tf2
        self.rt = rt
        self.rb = rb

    def area (b1, tf1, tw, h, b2, tf2, rt, rb):
        dw = h - tf1 - tf2
        area = b1*tf1+tw*dw+b2*tf2+2*circularspandrel.area(rt)+2*circularspandrel.area(rb)
        return area

    def distToOriginZ (b1, tf1, tw, h, b2, tf2, rt, rb):
        dw = h - tf1 - tf2
        Dist = collections.namedtuple('Dist', 'ytf1 yw ytf2')
        dist = Dist(ytf1 = h - rectangle.centroid(b1,tf1).ez, yw = rectangle.centroid(tw,dw).ez + tf2, ytf2 = rectangle.centroid(b2,tf2))
        return dist

    def areaMoment (b1, tf1, tw, h, b2, tf2, rt, rb):
        dw = h - tf1 - tf2
        sum = (rectangle.area(b1, tf1)*Ibeam.distToOriginZ(b1, tf1, tw, h, b2, tf2, rt, rb).ytf1) + (rectangle.area(tw, dw)*Ibeam.distToOriginZ(b1, tf1, tw, h, b2, tf2, rt, rb).yw) + (rectangle.area(b2,tf2)*Ibeam.distToOriginZ(b1, tf1, tw, h, b2, tf2, rt, rb).ytf2)
        return sum

    def centroidZ (b1, tf1, tw, h, b2, tf2, rt, rb):
        ez = Ibeam.areaMoment (b1, tf1, tw, h, b2, tf2, rt, rb)/Ibeam.area(b1, tf1, tw, h, b2, tf2, rt, rb)
        return ez
``` | 2019/12/03 | [
"https://Stackoverflow.com/questions/59160291",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1448859/"
] | You could use good default values if such exist.
```py
def area(b1=None, tf1=None, tw=None, h=None, b2=None, tf2=None, rt=None, rb=None):
....
```
An even better solution would be to design your class in a way that it does not require so many parameters. | When having functions with many arguments it might be useful to think about "related" arguments and group them together. For example, consider a function that calculates the distance between two points. You could write a function like the following:
```
def distance(x1, y1, x2, y2):
    ...
    return distance
print(distance(1, 2, 3, 4))
```
In that case, the values `x1, y1` and `x2, y2` are both very closely related together. you could group these together. Python gives you many options, some more expressive, some less expressive.
Your code examples look very similar, and I believe you could benefit from grouping them together.
The advantages of grouping related variables together are mainly that you reduce the number of required arguments (what you ask for), but *most importantly it gives you a chance to document these variables by giving them better names*.
```
"""
Simple Tuples
"""
def distance(point_a, point_b):
    x1, y1 = point_a
    x2, y2 = point_b
    ...
    return distance
print(distance((1, 2), (3, 4)))
```
This is a quick-win, but it is not very expressive. So you could spice this up with named-tuples (typed or untyped) or even a full-blown object. For example:
```
"""
Simple named tuples (very similar to tuples, but better error-messages/repr)
"""
from collections import namedtuple
Point = namedtuple('Point', 'x, y')
def distance(point_a, point_b):
    x1, y1 = point_a
    x2, y2 = point_b
    ...
    return distance
print(distance(Point(1, 2), Point(3, 4)))
```
```
"""
Typed named tuples (in case you want to use typing)
"""
from typing import NamedTuple
Point = NamedTuple('Point', [
    ('x', float),
    ('y', float),
])

def distance(point_a: Point, point_b: Point) -> float:
    x1, y1 = point_a
    x2, y2 = point_b
    ...
    return distance
print(distance(Point(1, 2), Point(3, 4)))
```
```
"""
Custom Object
"""
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

def distance(point_a: Point, point_b: Point) -> float:
    x1, y1 = point_a.x, point_a.y
    x2, y2 = point_b.x, point_b.y
    ...
    return distance
print(distance(Point(1, 2), Point(3, 4)))
``` | 476 |
59,493,383 | I'm currently working on a project and I am having a hard time understanding how the Pandas UDF in PySpark works.
I have a Spark Cluster with one Master node with 8 cores and 64GB, along with two workers of 16 cores each and 112GB. My dataset is quite large and divided into seven principal partitions consisting each of ~78M lines. The dataset consists of 70 columns.
I defined a Pandas UDF to do some operations on the dataset that can only be done using Python, on a Pandas dataframe.
The pandas UDF is defined this way:
```
@pandas_udf(schema, PandasUDFType.GROUPED_MAP)
def operation(pdf):
    #Some operations
    return pdf

spark.table("my_dataset").groupBy(partition_cols).apply(operation)
```
There is absolutely no way to get the Pandas UDF to work as it crashes before even doing the operations. I suspect there is an OOM error somewhere. The code above runs for a few minutes before crashing with an error code stating that the connection has reset.
However, if I call the .toPandas() function after filtering on one partition and then display it, it runs fine, with no error. The error seems to happen only when using a PandasUDF.
I fail to understand how it works. Does Spark try to convert one whole partition at once (78M lines)? If so, what memory does it use? The driver's? The executor's? If it's on the driver's, is all Python code executed on it?
The cluster is configured with the following:
* SPARK\_WORKER\_CORES=2
* SPARK\_WORKER\_MEMORY=64g
* spark.executor.cores 2
* spark.executor.memory 30g (to allow memory for the python instance)
* spark.driver.memory 43g
Am I missing something, or is there just no way to run 78M lines through a PandasUDF? | 2019/12/26 | [
"https://Stackoverflow.com/questions/59493383",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5932364/"
] | >
> Does Spark try to convert one whole partition at once (78M lines) ?
>
>
>
That's exactly what happens. Spark 3.0 adds support for chunked UDFs, which operate on iterators of Pandas `DataFrames` or `Series`, but if you need *operations on the dataset, that can only be done using Python, on a Pandas dataframe*, these might not be the right choice for you.
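For reference, such a chunked UDF in Spark 3.x can be declared with an iterator type hint, so each executor sees a stream of small Arrow batches instead of one 78M-row Pandas frame (a sketch; the column type and the doubling logic are placeholders):
```
import pandas as pd
from typing import Iterator
from pyspark.sql.functions import pandas_udf

@pandas_udf("double")
def doubled(batches: Iterator[pd.Series]) -> Iterator[pd.Series]:
    for s in batches:   # one Arrow batch at a time, never the whole partition
        yield s * 2.0
```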
>
> If so, what memory does it use ? The driver memory? The executor's?
>
>
>
Each partition is processed locally, on the respective executor, and data is passed to and from Python worker, using Arrow streaming.
>
> Am I missing something or is there just no way to run 78M lines through a PandasUDF?
>
>
>
As long as you have enough memory to handle Arrow input, output (especially if data is copied), auxiliary data structures, as well as JVM overhead, it should handle large datasets just fine.
But on such a tiny cluster, you'll be better off partitioning the output and reading the data directly with Pandas, without using Spark at all. This way you'll be able to use all the available resources (i.e. > 100GB / interpreter) for data processing instead of wasting them on secondary tasks (having 16GB - overhead / interpreter). | To answer the general question about using a Pandas UDF on a large pyspark dataframe:
If you're getting out-of-memory errors such as
`java.lang.OutOfMemoryError : GC overhead limit exceeded` or `java.lang.OutOfMemoryError: Java heap space` and increasing memory limits hasn't worked, ensure that pyarrow is enabled. It is disabled by default.
In pyspark, you can enable it using:
`spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")`
More info [here](https://spark.apache.org/docs/3.0.1/sql-pyspark-pandas-with-arrow.html). | 479 |
20,317,792 | I want my interactive bash to run a program that will ultimately do things like:
`echo Error: foobar >/dev/tty`
while another (Python) component tries to prompt for and read a password from /dev/tty.
I want such reads and writes to fail, but not block.
Is there some way to close /dev/tty in the parent script and then run the program?
I tried `foo >&/tmp/outfile`, which does not work.
What does sort of work is the 'at' command: `at now`, then `at> foobar >&/tmp/outfile` | 2013/12/01 | [
"https://Stackoverflow.com/questions/20317792",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/727810/"
] | You are doing a [UNION ALL](http://dev.mysql.com/doc/refman/5.0/en/union.html)
`at_tot` results are being appended to `a_tot`.
`at_prix` results are being appended to `a_tva`.
`at_pax` results are being appended to `v_tot`.
`at_vente` results are being appended to `v_tva`.
The [SQL UNION ALL](http://www.w3schools.com/sql/sql_union.asp) query allows you to combine the result sets of 2 or more SELECT statements. It returns *all rows* from the query (even if the row exists in more than one of the SELECT statements.). So rows are appended, NOT columns.
EDIT:
Now, based on your comments, it's simply that you are writing your code as though 8 columns are going to be returned, but you are only getting 4 columns, with 2 rows.
This would work, though returning different data for each row is not recommended.
```
var i = 0;
while (reader.Read())
{
    if (i == 0) {
        MyArray[0] = reader["a_tot"].ToString();
        MyArray[1] = reader["a_tva"].ToString();
        MyArray[2] = reader["v_tot"].ToString();
        MyArray[3] = reader["v_tva"].ToString();
        i++;
    }
    else {
        MyArray[0] = reader["at_tot"].ToString();
        MyArray[1] = reader["at_prix"].ToString();
        MyArray[2] = reader["at_pax"].ToString();
        MyArray[3] = reader["at_vente"].ToString();
    }
}
``` | When you use UNION, the aliases that end up in the result are the ones from the first select in the union. So `at_tot` (from the second select of the union) is replaced with `a_tot`.
What you do is the same as doing:
```sql
SELECT SUM(IF(status=0,montant,0)) AS a_tot,
SUM(IF(status=0, montant * (tvaval/100),0)) AS a_tva,
SUM(IF(status= 1, montant,0)) AS v_tot,
SUM(IF(status=1, montant * (tvaval/100),0)) AS v_tva
FROM StockData
UNION ALL
SELECT
SUM(at.prix*at.pax),
SUM(at.prix),
SUM(at.pax),
SUM(at.vente)
FROM Atelier AS at
```
You have to put the alias you want in the output in the first select, as you will end up with only 4 columns, not 8 like you are trying to get in your picture. | 480 |
49,757,771 | So I wrote a python file creating the single topology ( just to check if custom topology works) without using any controller at first. the code goes:
```
#!/usr/bin/python
from mininet.node import CPULimitedHost, Host, Node
from mininet.node import OVSSwitch
from mininet.topo import Topo
class Single1(Topo):
    "Single Topology"
    def __init__(self):
        "Create Fat tree Topology"
        Topo.__init__(self)
        #Add hosts
        h1 = self.addHost('h1', cls=Host, ip='10.0.0.1', defaultRoute=None)
        h2 = self.addHost('h2', cls=Host, ip='10.0.0.2', defaultRoute=None)
        h3 = self.addHost('h3', cls=Host, ip='10.0.0.3', defaultRoute=None)
        #Add switches
        s1 = self.addSwitch('s1', cls=OVSSwitch)
        #Add links
        self.addLink(h1,s1)
        self.addLink(h2,s1)
        self.addLink(h3,s1)

topos = { 'mytopo': (lambda: Single1() ) }
```
Pingall doesn't work when I run:
```
sudo mn --custom single.py --topo mytopo
```
Although it does work for the predefined 'single' topology. Could someone help me with the problem? | 2018/04/10 | [
"https://Stackoverflow.com/questions/49757771",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7463091/"
] | This is an older question and probably no longer of interest to the original poster, but I landed here from a mininet-related search, so I thought I'd provide a working example in case other folks find their way here in the future.
First, there are a number of indentation problems with the posted code, but those are simple to correct.
Next, the logic has been implemented in `Single1.__init__`, but at least according to [the documentation](http://mininet.org/walkthrough/#custom-topologies) this should be in the `build` method.
Correcting both of those issues and removing the unnecessary use of
`cls=Host` and `defaultRoute=None` in the `addHost` calls gives us:
```
#!/usr/bin/python
from mininet.node import OVSSwitch
from mininet.topo import Topo
class Single1(Topo):
    "Single Topology"
    def build(self):
        "Create Fat tree Topology"
        #Add hosts
        h1 = self.addHost('h1', ip='10.0.0.1')
        h2 = self.addHost('h2', ip='10.0.0.2')
        h3 = self.addHost('h3', ip='10.0.0.3')
        #Add switches
        s1 = self.addSwitch('s1', cls=OVSSwitch)
        #Add links
        self.addLink(h1,s1)
        self.addLink(h2,s1)
        self.addLink(h3,s1)

topos = { 'mytopo': Single1 }
```
The above code will run without errors and build the topology, but will probably still present the original problem: using `cls=OVSSwitch` when creating the switch means that Mininet expects there to exist an OpenFlow controller to manage the switch, which in general won't exist by default.
The simplest solution is to change:
```
s1 = self.addSwitch('s1', cls=OVSSwitch)
```
To:
```
s1 = self.addSwitch('s1', cls=OVSBridge)
```
With this change, Mininet will configure a "standalone" switch that doesn't require an explicit controller, and we will have the expected connectivity. The final version of the code looks like:
```
#!/usr/bin/python
from mininet.topo import Topo
from mininet.node import OVSBridge
class Single1(Topo):
    "Single Topology"
    def build(self):
        "Create Fat tree Topology"
        #Add hosts
        h1 = self.addHost('h1', ip='10.0.0.1')
        h2 = self.addHost('h2', ip='10.0.0.2')
        h3 = self.addHost('h3', ip='10.0.0.3')
        #Add switches
        s1 = self.addSwitch('s1', cls=OVSBridge)
        #Add links
        self.addLink(h1,s1)
        self.addLink(h2,s1)
        self.addLink(h3,s1)

topos = { 'mytopo': Single1 }
```
And running it looks like:
```
[root@servera ~]# mn --custom example.py --topo mytopo
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2 h3
*** Adding switches:
s1
*** Adding links:
(h1, s1) (h2, s1) (h3, s1)
*** Configuring hosts
h1 h2 h3
*** Starting controller
c0
*** Starting 1 switches
s1 ...
*** Starting CLI:
mininet> h1 ping -c2 h2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.051 ms
--- 10.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1009ms
rtt min/avg/max/mdev = 0.051/0.185/0.320/0.134 ms
mininet>
``` | The hosts must be in the same subnet in order to avoid routing protocols. Otherwise you need static routes. | 481 |
2,332,164 | I use the Python debugger pdb, and emacs with python-mode.el for Python programming. My idea is to make emacs intuitive, so I need the following help for Python programs (.py):
1. Whenever I press the 'F9' key, emacs should put an "import pdb; pdb.set_trace();" statement in the current line and move the current line one line down.
The statement should stay on a single line; smart indentation would help very much.
2. Wherever an "import pdb; pdb.set_trace();" statement is present in the Python code, emacs should display a left indicator and highlight that line.
3. When I press 'Alt-F9' at the current line and emacs finds the "import pdb; pdb.set_trace();" statement there, emacs should remove the "import pdb; pdb.set_trace();" line and move the current line one up.
4. Whenever I press the "F8" key, emacs should jump to "import pdb; pdb.set_trace();" in the same buffer.
I am trying to learn elisp and to catch up on Lisp soon so I can customize emacs myself. I will appreciate your answers.
The answer should be good enough for me and for others who find this solution useful. | 2010/02/25 | [
"https://Stackoverflow.com/questions/2332164",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | to do 1)
```
(defun add-py-debug ()
  "add debug code and move line down"
  (interactive)
  (move-beginning-of-line 1)
  (insert "import pdb; pdb.set_trace();\n"))

(local-set-key (kbd "<f9>") 'add-py-debug)
```
to do 2) you probably have to change the syntax highlighting of the python mode, or write you own minor mode. You'd have to look into font-lock to get more. Sorry.
to do 3) though I've set this to be C-c F9 instead of Alt-F9
```
(defun remove-py-debug ()
  "remove py debug code, if found"
  (interactive)
  (let ((x (line-number-at-pos))
        (cur (point)))
    (search-forward-regexp "^[ ]*import pdb; pdb.set_trace();")
    (if (= x (line-number-at-pos))
        (let ()
          (move-beginning-of-line 1)
          (kill-line 1)
          (move-beginning-of-line 1))
      (goto-char cur))))

(local-set-key (kbd "C-c <f9>") 'remove-py-debug)
```
and to do 4)
```
(local-set-key (kbd "<f3>") '(lambda ()
(interactive)
(search-forward-regexp "^[ ]*import pdb; pdb.set_trace();")
(move-beginning-of-line 1)))
```
Note, this is not the best elisp code in the world, but I've tried to make it clear to you what's going on rather than make it totally idiomatic. The GNU Elisp book is a great place to start if you want to do more with elisp.
HTH | I've found that [Xah's Elisp Tutorial](http://xahlee.info/emacs/emacs/elisp.html) is an excellent starting point in figuring out the basics of Emacs Lisp programming. [There](https://sites.google.com/site/steveyegge2/effective-emacs) [are](https://steve-yegge.blogspot.com/2008/01/emergency-elisp.html) [also](https://steve-yegge.blogspot.com/2006/06/shiny-and-new-emacs-22.html) some SteveY articles from a while ago that go through techniques you might find useful for learning the basics.
If you're serious about making an amended Python mode, you'll do well to take a look at [Writing GNU Emacs Extensions](https://www.google.ca/search?hl=en&q=Writing%20GNU%20Emacs%20Extensions&gws_rd=ssl), which is available as a PDF.
Finally, the most useful resource for me is actually Emacs itself. I make frequent use of `M-x apropos` and `M-x describe-key` to figure out how built-in functions work, and whether there's something already in place to do what I want.
The specific things you want to look like they can be done through some simple use of `insert`, and a few search/replace functions, so that'll be a good starting point. | 484 |
41,936,098 | I am trying to install the `zipline` module using `pip install zipline`, but I get this exception; any help would be greatly appreciated:
```
IOError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/editor.pyc'
Failed building wheel for numexpr
Running setup.py clean for numexpr
Failed to build numexpr
Installing collected packages: python-editor, Mako, sqlalchemy, alembic, sortedcontainers, intervaltree, python-dateutil, numpy, numexpr, toolz, bottleneck, scipy, pytz, pandas, empyrical, requests, requests-file, requests-ftp, pandas-datareader, decorator, networkx, patsy, statsmodels, click, Logbook, multipledispatch, bcolz, Cython, contextlib2, cyordereddict, cachetools, zipline
Exception:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 342, in run
prefix=options.prefix_path,
File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 784, in install
**kwargs
File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 851, in install
self.move_wheel_files(self.source_dir, root=root, prefix=prefix)
File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 1064, in move_wheel_files
isolated=self.isolated,
File "/usr/local/lib/python2.7/dist-packages/pip/wheel.py", line 345, in move_wheel_files
clobber(source, lib_dir, True)
File "/usr/local/lib/python2.7/dist-packages/pip/wheel.py", line 323, in clobber
shutil.copyfile(srcfile, destfile)
File "/usr/lib/python2.7/shutil.py", line 83, in copyfile
with open(dst, 'wb') as fdst:
IOError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/editor.pyc'
``` | 2017/01/30 | [
"https://Stackoverflow.com/questions/41936098",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7283601/"
] | As you are not root, you can use sudo to obtain superuser permissions:
```
sudo pip install zipline
```
Or else
**For GNU/Linux :**
On Debian-derived Linux distributions, you can acquire all the necessary binary dependencies from apt by running:
```
$ sudo apt-get install libatlas-base-dev python-dev gfortran pkg-config libfreetype6-dev
```
On recent RHEL-derived derived Linux distributions (e.g. Fedora), the following should be sufficient to acquire the necessary additional dependencies:
```
$ sudo dnf install atlas-devel gcc-c++ gcc-gfortran libgfortran python-devel redhat-rep-config
```
On Arch Linux, you can acquire the additional dependencies via pacman:
```
$ pacman -S lapack gcc gcc-fortran pkg-config
```
There are also AUR packages available for installing Python 3.4 (Arch's default python is now 3.5, but Zipline only currently supports 3.4), and ta-lib, an optional Zipline dependency. Python 2 is also installable via:
```
$ pacman -S python2
``` | Avoid using `sudo` to install packages with `pip`. Use the `--user` option instead or, even better, use virtual environments.
See [this SO answer](https://stackoverflow.com/a/42021993/3577054). I think this question is a duplicate of that one. | 485 |
60,917,385 | My aim:
To count the frequency of a user-entered word in a text file (in Python).
I tried this, but it gives the frequency of all the words in the file. How can I modify it to give the frequency of the word entered by the user?
```
from collections import Counter
word=input("Enter a word:")
def word_count(test6):
    with open('test6.txt') as f:
        return Counter(f.read().split())
print("Number of input words in the file :",word_count(word))
```
This may be a naive question but I am just beginning to code.So please try to answer.
Thanks in advance. | 2020/03/29 | [
"https://Stackoverflow.com/questions/60917385",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13140422/"
] | Hi, I just solved the problem.
After you run
`docker build .`
run `docker-compose build` instead of `docker-compose up`.
And then finally run `docker-compose up`. | instead of
```
COPY Pipfile Pipfile.lock /code/
RUN pip install pipenv && pipenv install --system
```
you may use:
```
RUN pip install pipenv
COPY Pipfile* /tmp
RUN cd /tmp && pipenv lock --requirements > requirements.txt
RUN pip install -r /tmp/requirements.txt
```
this is a snippet from [here](https://pythonspeed.com/articles/pipenv-docker/) | 486 |
42,216,370 | Installation of python-devel fails with attached message
Configuration is as follows:
- CentOS 7.2
- Python 2.7 Installed
1. I re-ran with yum load as suggested in the output and it failed with the same message.
2. yum info python ==> Installed package python 2.7.5 34.el7
3. yum info python-devel ==> NOT installed. Available 2.7.5 48.el7
4. yum deplist python-devel ==> dependency on python2.7.5-48.el7
5. Tried to install Python2.7.5-48.el7 with "yum update python" and it fails with the same error message as the python-devel install.
Sudhir
```
yum install -y python-devel
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.sonic.net
* epel: ftp.linux.ncsu.edu
* extras: mirror.cogentco.com
* updates: www.gtlib.gatech.edu
Resolving Dependencies
--> Running transaction check
---> Package python-devel.x86_64 0:2.7.5-48.el7 will be installed
--> Processing Dependency: python(x86-64) = 2.7.5-48.el7 for package: python-devel-2.7.5-48.el7.x86_64
--> Running transaction check
---> Package python.x86_64 0:2.7.5-34.el7 will be updated
---> Package python.x86_64 0:2.7.5-48.el7 will be an update
--> Processing Dependency: python-libs(x86-64) = 2.7.5-48.el7 for package: python-2.7.5-48.el7.x86_64
--> Running transaction check
---> Package python-libs.x86_64 0:2.7.5-34.el7 will be updated
---> Package python-libs.x86_64 0:2.7.5-48.el7 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
 Package              Arch           Version              Repository      Size
================================================================================
Installing:
python-devel x86_64 2.7.5-48.el7 base 393 k
Updating for dependencies:
python x86_64 2.7.5-48.el7 base 90 k
python-libs x86_64 2.7.5-48.el7 base 5.6 M
Transaction Summary
==============================================================================================================================================
Install 1 Package
Upgrade ( 2 Dependent packages)
Total size: 6.1 M
Downloading packages:
Running transaction check
ERROR with transaction check vs depsolve:
python(abi) = 2.6 is needed by (installed) python-argparse-1.2.1-2.1.el6.noarch
python(abi) = 2.6 is needed by (installed) redhat-upgrade-tool-1:0.7.22-3.el6.centos.noarch
** Found 5 pre-existing rpmdb problem(s), 'yum check' output follows:
epel-release-7-6.noarch is a duplicate with epel-release-7-5.noarch
grep-2.20-3.el6_7.1.x86_64 has missing requires of libpcre.so.0()(64bit)
python-argparse-1.2.1-2.1.el6.noarch has missing requires of python(abi) = ('0', '2.6', None)
1:redhat-upgrade-tool-0.7.22-3.el6.centos.noarch has missing requires of preupgrade-assistant >= ('0', '1.0.2', '4')
1:redhat-upgrade-tool-0.7.22-3.el6.centos.noarch has missing requires of python(abi) = ('0', '2.6', None)
Your transaction was saved, rerun it with:
yum load-transaction /tmp/yum_save_tx.2017-02-13.16-01.jUFBE4.yumtx
``` | 2017/02/14 | [
"https://Stackoverflow.com/questions/42216370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5070752/"
] | From the yum documentation, here's the safest way to handle each of your 5 errors:
First remove duplicates and resolve any errors after running this:
```
package-cleanup --cleandupes
```
If the above comes with a missing package-cleanup error, then run this first:
```
yum install yum-utils
```
Then address the other 4 errors with:
```
yum reinstall grep-*
```
where `grep-*` is the package name as shown in the error message. I abbreviated the rest of the grep version name with `*` in the command above.
Repeat the above command for the 3 other packages that were indicated as missing. If yum command gives you errors, then try this for just that one package:
```
rpm -ivh --force grep-*
```
Then finally re-run the yum command from the original error message.
At any point you want to clean up leftover mess, run this command:
```
yum clean all
package-cleanup --problems
```
And follow directions. For further reference, look up documentation with
```
man yum.conf
``` | I removed the packages python-argparse and redhat-upgrade-tool.
Then I did `yum install python-devel` and it succeeded this time. I am thinking there is a hard dependency for those 2 packages on the older Python 2.6.
Sudhir Nallagangu | 487 |
46,480,621 | I upgraded my ansible to 2.4 and now I cannot manage my CentOS 5 hosts which are running python 2.4. How do I fix it?
<http://docs.ansible.com/ansible/2.4/porting_guide_2.4.html> says ansible 2.4 will not support any versions of python lower than 2.6 | 2017/09/29 | [
"https://Stackoverflow.com/questions/46480621",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4055115/"
] | After I upgraded to Ansible 2.4, I was not able to manage hosts running Python versions older than 2.6. These were CentOS 5 hosts, and this is how I fixed the problem.
First, I installed `python26` from the EPEL repo. After enabling the EPEL repo: `yum install python26`
Then in my hosts file, for the CentOS 5 hosts, I added `ansible_python_interpreter=/usr/bin/python26` as the python interpreter.
To specify the python interpreter in the hosts file individually, it will be something like
`centos5-database ansible_python_interpreter=/usr/bin/python26`
And for a group of hosts, it will be something like
`[centos5-www:vars]` followed on the next line by `ansible_python_interpreter=/usr/bin/python26` | And what about the python26-yum package? It is required to use the yum module to install packages using Ansible. | 489 |
57,588,744 | How do you quit or halt a python program without the error messages showing?
I have tried quit(), exit(), systemexit(), raise SystemExit, and others but they all seem to raise an error message saying the program has been halted. How do I get rid of this? | 2019/08/21 | [
"https://Stackoverflow.com/questions/57588744",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11939397/"
] | You can structure your program within a function, then `return` when you wish to halt/end the program,
i.e.
```
def foo():
    # your program here
    if we_want_to_halt:
        return

if __name__ == "__main__":
    foo()
``` | You would need to handle the exit in your python program.
For example:
```
def main():
    x = raw_input("Enter a value: ")
    if x == "a value":
        print("its alright")
    else:
        print("exit")
        exit(0)
```
Note: This works in Python 2 because `raw_input` is included by default there, but the concept is the same for both versions.
Output:
```
Enter a value: a
exit
```
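If the goal is really to stop from anywhere with no message at all, one more option (not used in the answers above, and note that it skips normal interpreter cleanup such as `atexit` handlers) is `os._exit`:
```
import os
os._exit(0)  # ends the process at once: no SystemExit, no traceback, no cleanup
```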
Just out of curiosity: why do you want to prevent the message? I prefer to see that my program has been closed because the user forced a system exit. | 492 |
60,327,453 | I am new to tensorflow and Convolutional Neural Networks, and I would like to build an AI that learns to find the mode of floating point numbers. But whenever I try to run the code, I run into some errors.
Here is my code so far:
```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense
train_data = [
    [0.5, 0.2, 0.2],
    [0.3, 0.3, 0.4],
    [0.4, 0.4, 0.5],
    [0.8, 0.8, 0.1]
]
train_labels = [
    2.0,
    3.0,
    4.0,
    8.0
]
test_data = [
    [0.2, 0.5, 0.2],
    [0.7, 0.1, 0.7],
    [0.6, 0.8, 0.8]
]
test_labels = [
    2,
    7,
    8
]
model = keras.Sequential()
model.add(Dense(4, activation=tf.nn.relu, input_shape=(1,)))
model.add(Dense(2, activation=tf.nn.relu))
model.add(Dense(1, activation=tf.nn.softmax))
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
model.summary()
EPOCHS = 2
BATCH_SIZE=1
model.fit(train_data, train_labels, epochs=EPOCHS, batch_size=BATCH_SIZE)
```
However, when I try and run the code I get the following errors:
```
Traceback (most recent call last):
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 511, in _apply_op_helper
preferred_dtype=default_dtype)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1175, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 977, in _TensorTensorConversionFunction
(dtype.name, t.dtype.name, str(t)))
ValueError: Tensor conversion requested dtype int32 for Tensor with dtype float32: 'Tensor("metrics/acc/Cast_6:0", shape=(?, 1), dtype=float32)'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "testNeural.py", line 38, in <module>
metrics=['accuracy'])
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\training\checkpointable\base.py", line 442, in _method_wrapper
method(self, *args, **kwargs)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py", line 499, in compile
sample_weights=self.sample_weights)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1844, in _handle_metrics
return_stateful_result=return_stateful_result))
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1801, in _handle_per_output_metrics
metric_result = _call_stateless_fn(metric_fn)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1777, in _call_stateless_fn
return weighted_metric_fn(y_true, y_pred, weights=weights, mask=mask)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_utils.py", line 647, in weighted
score_array = fn(y_true, y_pred)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\keras\metrics.py", line 1533, in binary_accuracy
return K.mean(math_ops.equal(y_true, y_pred), axis=-1)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 3093, in equal
"Equal", x=x, y=y, name=name)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 547, in _apply_op_helper
inferred_from[input_arg.type_attr]))
TypeError: Input 'y' of 'Equal' Op has type float32 that does not match type int32 of argument 'x'.
```
Does anyone know how to fix this? | 2020/02/20 | [
"https://Stackoverflow.com/questions/60327453",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | Finally **SOLVED**:
**HTML:**
```
<mat-form-field>
  <mat-label>Course</mat-label>
  <mat-select
    [formControl]="subjectControl"
    [attr.data-tag]="this.subjectControl.value"
    required
  >
    <mat-option>-- None --</mat-option>
    <mat-optgroup *ngFor="let course of subjects" [label]="course.semester" [disabled]="course.disabled">
      <mat-option *ngFor="let subject of course.courses" [value]="subject.subjectName"><!--here I have to set [value] to `.subjectName` or `.subjectSemester` to show it into the `data-tag`-->
        {{ subject.subjectName }}
      </mat-option>
    </mat-optgroup>
  </mat-select>
</mat-form-field>
```
As written in the comments of the code, I have to put the `[attr.data-tag]` into the `mat-select` equal to `this.subjectControl.value`, and set `[value]` of `mat-option` equal to the value to store into `[attr.data-tag]`. | Your code looks correct to me. I tried adding it to an existing stackblitz example, and it showed up in the HTML. Maybe it will help to figure it out:
<https://stackblitz.com/edit/angular-material-select-compare-with?embed=1&file=app/app.html>
```
<mat-option class="mat-option ng-star-inserted" data-tag="Three" role="option" ng-reflect-value="[object Object]" tabindex="0" id="mat-option-29" aria-selected="false" aria-disabled="false">
``` | 497 |
51,878,354 | Is there a built-in function that works like zip(), but fills the results so that the length of the resulting list is the length of the longest input and fills the list **from the left** with e.g. `None`?
There is already an [answer](https://stackoverflow.com/a/1277311/2648551) using [zip_longest](https://docs.python.org/3/library/itertools.html#itertools.zip_longest) from the `itertools` module, and the corresponding [question](https://stackoverflow.com/q/1277278/2648551) is very similar to this. But with `zip_longest` it seems that you can only fill missing data from the right.
Here might be a use case for that, assuming we have names stored only like this (it's just an example):
```
header = ["title", "firstname", "lastname"]
person_1 = ["Dr.", "Joe", "Doe"]
person_2 = ["Mary", "Poppins"]
person_3 = ["Smith"]
```
There is no other permutation like (`["Poppins", "Mary"]`, `["Poppins", "Dr", "Mary"]`) and so on.
How can I get results like this using built-in functions?
```
>>> dict(magic_zip(header, person_1))
{'title': 'Dr.', 'lastname': 'Doe', 'firstname': 'Joe'}
>>> dict(magic_zip(header, person_2))
{'title': None, 'lastname': 'Poppins', 'firstname': 'Mary'}
>>> dict(magic_zip(header, person_3))
{'title': None, 'lastname': 'Smith', 'firstname': None}
``` | 2018/08/16 | [
"https://Stackoverflow.com/questions/51878354",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2648551/"
] | Use **`zip_longest`** but reverse the lists.
**Example**:
```
from itertools import zip_longest
header = ["title", "firstname", "lastname"]
person_1 = ["Dr.", "Joe", "Doe"]
person_2 = ["Mary", "Poppins"]
person_3 = ["Smith"]
print(dict(zip_longest(reversed(header), reversed(person_2))))
# {'lastname': 'Poppins', 'firstname': 'Mary', 'title': None}
```
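If you want this wrapped up as the `magic_zip` from the question, here is a minimal sketch using the `zip_longest` imported above (the helper name comes from the question, not the standard library):
```
def magic_zip(*iterables, fillvalue=None):
    # zip_longest, but padding on the left instead of the right
    reversed_args = [list(it)[::-1] for it in iterables]
    return reversed(list(zip_longest(*reversed_args, fillvalue=fillvalue)))
```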
On your use cases:
```
>>> dict(zip_longest(reversed(header), reversed(person_1)))
{'title': 'Dr.', 'lastname': 'Doe', 'firstname': 'Joe'}
>>> dict(zip_longest(reversed(header), reversed(person_2)))
{'lastname': 'Poppins', 'firstname': 'Mary', 'title': None}
>>> dict(zip_longest(reversed(header), reversed(person_3)))
{'lastname': 'Smith', 'firstname': None, 'title': None}
``` | Simply use `zip_longest` and read the arguments in the reverse direction:
```
In [20]: dict(zip_longest(header[::-1], person_1[::-1]))
Out[20]: {'lastname': 'Doe', 'firstname': 'Joe', 'title': 'Dr.'}
In [21]: dict(zip_longest(header[::-1], person_2[::-1]))
Out[21]: {'lastname': 'Poppins', 'firstname': 'Mary', 'title': None}
In [22]: dict(zip_longest(header[::-1], person_3[::-1]))
Out[22]: {'lastname': 'Smith', 'firstname': None, 'title': None}
```
Since the zip\* functions need to be able to work on general iterables, they don't support filling "from the left", because you'd need to exhaust the iterable first. Here we can just flip things ourselves. | 498 |
25,438,170 | Input:
```
A B C
D E F
```
This file is NOT exclusively tab-delimited, some entries are space-delimited to look like they were tab-delimited (which is annoying). I tried reading in the file with the `csv` module using the canonical tab delimited option hoping it wouldn't mind a few spaces (needless to say, my output came out botched with this code):
```
with open('file.txt') as f:
input = csv.reader(f, delimiter='\t')
for row in input:
print row
```
I then tried replacing the second line with `csv.reader('\t'.join(f.split()))` to try to take advantage of [Remove whitespace in Python using string.whitespace](https://stackoverflow.com/questions/1898656/remove-whitespace-in-python-using-string-whitespace/1898835#1898835) but my error was: `AttributeError: 'file' object has no attribute 'split'`.
I also tried examining [Can I import a CSV file and automatically infer the delimiter?](https://stackoverflow.com/questions/16312104/python-import-csv-file-delimiter-or) but here the OP imported either semicolon-delimited or comma-delimited files, but not a file which was a random mixture of both kinds of delimiters.
I was wondering whether the `csv` module can handle reading in files with a mix of various delimiters, or whether I should try a different approach (e.g., not use the `csv` module).
I am hoping that there exists a way to read in a file with a mixture of delimiters and automatically turn this file into a tab-delimited file. | 2014/08/22 | [
"https://Stackoverflow.com/questions/25438170",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3878253/"
] | Just use .split():
```
csv='''\
A\tB\tC
D E F
'''
data=[]
for line in csv.splitlines():
data.append(line.split())
print data
# [['A', 'B', 'C'], ['D', 'E', 'F']]
```
Or, more succinctly:
```
>>> [line.split() for line in csv.splitlines()]
[['A', 'B', 'C'], ['D', 'E', 'F']]
```
For a file, something like:
```
with open(fn, 'r') as fin:
data=[line.split() for line in fin]
```
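And since the goal is a tab-delimited file, a small sketch (the output file name is an assumption) that rewrites the mixed-delimiter file with tabs:
```
# Normalize a mixed space/tab-delimited file to tabs
with open('file.txt') as fin, open('file_tabs.txt', 'w') as fout:
    for line in fin:
        fout.write('\t'.join(line.split()) + '\n')
```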
It works because [str.split()](https://docs.python.org/2/library/stdtypes.html#str.split) will split on all whitespace between data elements even if more than 1 whitespace character or if mixed:
```
>>> '1\t\t\t2 3\t \t \t4'.split()
['1', '2', '3', '4']
``` | Why not just roll your own splitter rather than the CSV module?
```
delimiters = [',', ' ', '\t']
unique = '\x00'  # marker that cannot occur in the data or in the delimiters
with open(fileName) as f:
    for l in f:
        for d in delimiters:
            l = unique.join(l.split(d))
        row = l.split(unique)
``` | 504 |
46,964,509 | I am following a tutorial on using selenium and python to make a web **scraper** for twitter, and I ran into this error.
```
File "C:\Python34\lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 62, in __init__
self.service.start()
File "C:\Python34\lib\site-packages\selenium\webdriver\common\service.py", line 81, in start
os.path.basename(self.path), self.start_error_message)
selenium.common.exceptions.WebDriverException: Message: 'chromedriver' executable needs to be in PATH. Please see https://sites.google.com/a/chromium.org/chromedriver/home
```
I went to the website specified in the error and downloaded the driver. Then I added it to path by going to System Properties > Advanced > Environment Variables > Path > New and added the exe file to path. I tried again and i still got the error. | 2017/10/26 | [
"https://Stackoverflow.com/questions/46964509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7922147/"
] | Another way is to download and unzip [chromedriver](https://chromedriver.storage.googleapis.com/index.html?path=2.33/) and put 'chromedriver.exe' in C:\Python27\Scripts; then you don't need to provide the path of the driver, just
```
driver= webdriver.Chrome()
```
will work. | If you take a look at your exception:
```
selenium.common.exceptions.WebDriverException: Message: 'chromedriver' executable needs to be in PATH. Please see https://sites.google.com/a/chromium.org/chromedriver/home
```
At the [indicated url](https://sites.google.com/a/chromium.org/chromedriver/home), you can see the [Getting started with ChromeDriver on Desktop (Windows, Mac, Linux)](https://sites.google.com/a/chromium.org/chromedriver/getting-started).
Where it says:
>
> Any of these steps should do the trick:
>
>
> 1. include the ChromeDriver location in your PATH environment variable
> 2. (Java only) specify its location via the webdriver.chrome.driver system property (see sample below)
> 3. (Python only) include the path to ChromeDriver when instantiating webdriver.Chrome (see sample below)
>
>
>
If you are not able to include your ChromeDriver location in your PATH environment variable, you could try the third option:
```
import time
from selenium import webdriver
driver = webdriver.Chrome('/path/to/chromedriver') # Optional argument, if not specified will search path.
driver.get('http://www.google.com');
``` | 507 |
4,663,024 | Hey, I would like to be able to perform [this](https://stackoverflow.com/questions/638048/how-do-i-sum-the-first-value-in-each-tuple-in-a-list-of-tuples-in-python), but be selective about which lists I sum up. Let's take that same example, but only add up the first number from the 3rd and 4th tuples. | 2011/01/11 | [
"https://Stackoverflow.com/questions/4663024",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/556344/"
] | Something like:
```
sum(int(tuple_list[i][0]) for i in range(2, 4))  # indices 2 and 3 are the 3rd and 4th tuples
```
range(x, y) generates a list of integers from x (included) to y (excluded) with a step of 1. If you want to change the step, `range(x, y, step)` will do the same but increase by `step`.
You can find the official documentation [here](http://docs.python.org/library/functions.html#range)
Or you can do:
```
sum(float(close[4]) for close in tickers[30:40])
``` | If you want to limit by some property of each element, you can use [`filter()`](http://docs.python.org/library/functions.html#filter) before feeding it to the code posted in your link. This will let you write a unique filter depending on what you want. This doesn't work for the example you gave, but it seemed like you were more interested in the general case.
```
sum(pair[0] for pair in filter(PREDICATE_FUNCTION_OR_LAMBDA, list_of_pairs))
``` | 508 |
64,773,690 | I'm new to Python and I'm trying to use the census geocoding services API to geocode addresses and then convert the output to a dataframe. I've been able to read in my address file and I can see the output, but I can't seem to figure out how to import it into a dataframe. I provided the code I used below as well as the contents of the address file.
The output does not appear to be in JSON format, but rather CSV. I tried to import the output as I would a CSV file, but I couldn't figure out how to read the variable the way I would a file, nor how to export the output to a CSV file that I could then import.
The URL describing the API is <https://geocoding.geo.census.gov/geocode...es_API.pdf>
```
import requests
import pandas as pd
import json
url = 'https://geocoding.geo.census.gov/geocoder/geographies/addressbatch'
payload = {'benchmark':'Public_AR_Current','vintage':'Current_Current'}
files = {'addressFile': ('C:\PYTHON_CLASS\CSV\ADDRESS_SAMPLE.csv', open('C:\PYTHON_CLASS\CSV\ADDRESS_SAMPLE.csv', 'rb'), 'text/csv')}
response = requests.post(url, files=files, data = payload)
type(response)
print(response.text)
```
I tried the code below (among many other versions), which is how I would normally import a CSV file, but it generates the error message "Invalid file path or buffer object type: <class 'requests.models.Response'>"
```
df = pd.read_csv(response)
```
The contents of the address file I used to generate the geocoding is:
id,address,city,state,zipcode
1,1600 Pennsylvania Avenue NW, Washington,DC,20500
2,4 S Market St,Boston,MA,02109
3,1200 Getty Center Drive,Los Angeles,CA,90049
4,1800 Congress Ave,Austin,TX,78701
5,One Caesars Palace Drive,Las Vegas,NV,89109
6,1060 West Addison,Chicago,IL,60613
7,One East 161st Street,Bronx,NY,10451
8,201 E Jefferson St,Phoenix,AZ,85004
9,600 N 1st Ave,Minneapolis,MN,55403
10,400 W Church St,Orlando,FL,32801
The output is shown below:
```
print(response.text)
```
"1","1600 Pennsylvania Avenue NW, Washington, DC, 20500","Match","Non\_Exact","1600 PENNSYLVANIA AVE NW, WASHINGTON, DC, 20006","-77.03535,38.898754","76225813","L","11","001","006202","1031"
"2","4 S Market St, Boston, MA, 02109","Match","Exact","4 S MARKET ST, BOSTON, MA, 02109","-71.05566,42.359936","85723841","R","25","025","030300","2017"
"3","1200 Getty Center Drive, Los Angeles, CA, 90049","Match","Exact","1200 GETTY CENTER DR, LOS ANGELES, CA, 90049","-118.47564,34.08857","142816014","L","06","037","262302","1005"
"4","1800 Congress Ave, Austin, TX, 78701","Match","Exact","1800 CONGRESS AVE, AUSTIN, TX, 78701","-97.73847,30.279745","63946318","L","48","453","000700","1007"
"5","One Caesars Palace Drive, Las Vegas, NV, 89109","No\_Match"
"6","1060 West Addison, Chicago, IL, 60613","Match","Non\_Exact","1060 W ADDISON ST, CHICAGO, IL, 60613","-87.65581,41.947227","111863716","R","17","031","061100","1014"
"7","One East 161st Street, Bronx, NY, 10451","No\_Match"
"8","201 E Jefferson St, Phoenix, AZ, 85004","Match","Exact","201 E JEFFERSON ST, PHOENIX, AZ, 85004","-112.07113,33.44675","128300920","L","04","013","114100","1058"
"9","600 N 1st Ave, Minneapolis, MN, 55403","No\_Match"
"id","address, city, state, zipcode","No\_Match"
"10","400 W Church St, Orlando, FL, 32801","Match","Exact","400 W CHURCH ST, ORLANDO, FL, 32801","-81.38436,28.540176","94416807","L","12","095","010500","1002"
The output for `response.text` is:
'"1","1600 Pennsylvania Avenue NW, Washington, DC, 20500","Match","Non\_Exact","1600 PENNSYLVANIA AVE NW, WASHINGTON, DC, 20006","-77.03535,38.898754","76225813","L","11","001","006202","1031"\n"2","4 S Market St, Boston, MA, 02109","Match","Exact","4 S MARKET ST, BOSTON, MA, 02109","-71.05566,42.359936","85723841","R","25","025","030300","2017"\n"3","1200 Getty Center Drive, Los Angeles, CA, 90049","Match","Exact","1200 GETTY CENTER DR, LOS ANGELES, CA, 90049","-118.47564,34.08857","142816014","L","06","037","262302","1005"\n"4","1800 Congress Ave, Austin, TX, 78701","Match","Exact","1800 CONGRESS AVE, AUSTIN, TX, 78701","-97.73847,30.279745","63946318","L","48","453","000700","1007"\n"5","One Caesars Palace Drive, Las Vegas, NV, 89109","No\_Match"\n"6","1060 West Addison, Chicago, IL, 60613","Match","Non\_Exact","1060 W ADDISON ST, CHICAGO, IL, 60613","-87.65581,41.947227","111863716","R","17","031","061100","1014"\n"7","One East 161st Street, Bronx, NY, 10451","No\_Match"\n"8","201 E Jefferson St, Phoenix, AZ, 85004","Match","Exact","201 E JEFFERSON ST, PHOENIX, AZ, 85004","-112.07113,33.44675","128300920","L","04","013","114100","1058"\n"9","600 N 1st Ave, Minneapolis, MN, 55403","No\_Match"\n"id","address, city, state, zipcode","No\_Match"\n"10","400 W Church St, Orlando, FL, 32801","Match","Exact","400 W CHURCH ST, ORLANDO, FL, 32801","-81.38436,28.540176","94416807","L","12","095","010500","1002"\n'
When I tried
```
df = pd.read_csv(io.StringIO(response), sep=',', header=None, quoting=csv.QUOTE_ALL)
```
I got the error message
```
TypeError Traceback (most recent call last)
<ipython-input-60-55e6c5ac54af> in <module>
----> 1 df = pd.read_csv(io.StringIO(response), sep=',', header=None, quoting=csv.QUOTE_ALL)
TypeError: initial_value must be str or None, not Response
```
When I tried
```
df = pd.read_csv(io.StringIO(response.replace('" "', '"\n"')), sep=',', header=None, quoting=csv.QUOTE_ALL)
```
I got
```
AttributeError Traceback (most recent call last)
<ipython-input-61-a92a7ffcf170> in <module>
----> 1 df = pd.read_csv(io.StringIO(response.replace('" "', '"\n"')), sep=',', header=None, quoting=csv.QUOTE_ALL)
AttributeError: 'Response' object has no attribute 'replace'
``` | 2020/11/10 | [
"https://Stackoverflow.com/questions/64773690",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14614221/"
] | To address both the legend and palette issues at the same time: first, convert the data frame into long format using `pivot_longer()`, then add a column that specifies the colour you want for the associated variable. You can map those colours using `scale_colour_manual()`. Not the most elegant solution, but I found it useful when dealing with manually set palettes.
```
library(ggplot2)
library(dplyr)
library(tidyr)
library(tibble)
df <- data.frame(date = as.Date(c("2020-08-05","2020-08-06","2020-08-07","2020-08-08","2020-08-09","2020-08-10","2020-08-11","2020-08-12")),
State.1_day=c(0.8,0.3,0.2,0.5,0.6,0.7,0.8,0.7),
State.2_day=c(0.4,0.2,0.1,0.2,0.3,0.4,0.5,0.6),
State.1_night=c(0.7,0.8,0.5,0.4,0.3,0.2,0.3,0.2),
State.2_night=c(0.5,0.6,0.7,0.4,0.3,0.5,0.6,0.7))
line_colors_a <- RColorBrewer::brewer.pal(6, "Blues")[c(3,6)]
line_colors_a
line_colors_b <- RColorBrewer::brewer.pal(6, "Greens")[c(3,6)]
line_colors_b
line_colors <- c(line_colors_a,line_colors_b)
df1 <- df %>%
pivot_longer(-date) %>%
mutate(colour = case_when(
name == "State.1_day" ~ line_colors[1],
name == "State.1_night" ~ line_colors[2],
name == "State.2_day" ~ line_colors[3],
name == "State.2_night" ~ line_colors[4]
))
ggplot(df1, aes(x = date, y = value, colour = name)) +
geom_line(size = 1) +
scale_x_date(date_labels = "%Y-%m-%d") +
scale_colour_manual(values = tibble::deframe(distinct(df1, colour, name))) +
theme_bw() +
labs(y = "% time", x = "Date") +
theme(strip.text = element_text(face="bold", size=18),
strip.background=element_rect(fill="white", colour="black",size=2),
axis.title.x =element_text(margin = margin(t = 10, r = 0, b = 0, l = 0),size = 20),
axis.title.y =element_text(margin = margin(t = 0, r = 10, b = 0, l = 0),size = 20),
axis.text.x = element_text(angle = 70, hjust = 1,size = 15),
axis.text.y = element_text(angle = 0, hjust = 0.5,size = 15),
axis.line = element_line(),
panel.grid.major= element_blank(),
panel.grid.minor = element_blank(),
legend.text=element_text(size=18),
legend.title = element_text(size=19, face = "bold"),
legend.key=element_blank(),
legend.position = "top",
panel.border = element_blank(),
strip.placement = "outside")
```
[![enter image description here](https://i.stack.imgur.com/WS2sf.png)](https://i.stack.imgur.com/WS2sf.png) | Since @EJJ's reply did not work for some reason, I used a similar approach but using `melt()`. Here is the code and the plot:
```
colnames(df) <- c("date","Act_day","Rest_day","Act_night","Rest_night")
df <- melt(df, id.vars=c("date"))
colnames(df) <- c("date","State","value")
Plot <- ggplot(df,aes(x = date, y = value, colour = State)) +
geom_line(size = 1) +
scale_x_date(labels = date_format("%Y-%m-%d")) +
scale_color_discrete(name = "States", labels = c("Active_day", "Active_night", "Resting_day", "Resting_night")) +
theme_bw() +
labs(y = "% time", x = "Date") +
theme(strip.text = element_text(face="bold", size=18),
strip.background=element_rect(fill="white", colour="black",size=2),
axis.title.x =element_text(margin = margin(t = 10, r = 0, b = 0, l = 0),size = 20),
axis.title.y =element_text(margin = margin(t = 0, r = 10, b = 0, l = 0),size = 20),
axis.text.x = element_text(angle = 70, hjust = 1,size = 15),
axis.text.y = element_text(angle = 0, hjust = 0.5,size = 15),
axis.line = element_line(),
panel.grid.major= element_blank(),
panel.grid.minor = element_blank(),
legend.text=element_text(size=18),
legend.title = element_text(size=19, face = "bold"),
legend.key=element_blank(),
legend.position = "top",
panel.border = element_blank(),
strip.placement = "outside") +
scale_color_manual(values = c("Act_day" = line_colors[1], "Act_night" = line_colors[2], "Rest_day" = line_colors[3], "Rest_night" = line_colors[4]))
Plot
```
[![enter image description here](https://i.stack.imgur.com/YD4qw.png)](https://i.stack.imgur.com/YD4qw.png) | 513 |
26,345,185 | I'm having trouble using Python's multiprocessing module. This is the first time I've tried using the module. I've tried simplifying my processing to the bare bones, but I keep getting the same error. I'm using Python 2.7.2 and Windows 7.
The script I'm trying to run is called `learnmp.py`, and the error message says that the problem is that it can't find the module `learnmp`.
```
import multiprocessing
def doSomething():
"""worker function"""
print 'something'
return
if __name__ == '__main__':
jobs = []
for i in range(2):
p = multiprocessing.Process(target=doSomething)
jobs.append(p)
p.start()
```
The error is:
```
File "<string>", line 1, in <module> File "C:\Python27\ArcGISx6410.1\lib\multiprocessing\forking.py", line 373,
in main prepare(preparation_data) File "C:\Python27\ArcGISx6410.1\lib\multiprocessing\forking.py", line 482,
in prepare file, path_name, etc = imp.find_module (main_name, dirs)
ImportError: No module named learnmp
```
What's causing the error, and how can I solve it?
EDIT: I still don't know what was causing the error, but changing the file name eliminated it. | 2014/10/13 | [
"https://Stackoverflow.com/questions/26345185",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2241053/"
] | I know it's been a while, but I ran into this same error, also using the version of Python distributed with ArcGIS, and I've found a solution which at least worked in my case.
The problem that I had was that I was calling my program name, Test.py, as test.py. Note the difference in case.
```
c:\python27\arcgisx6410.2\python.exe c:\temp\test.py
c:\python27\arcgisx6410.2\python.exe c:\temp\Test.py
```
This isn't normally an issue if you're not using the multiprocessing library. However, when you write:
```
if __name__ == '__main__':
```
what appears to be happening is that the part of the program in main is being bound to the name of the python file. In my case that was test. However, there is no test, just Test. So although Windows will allow case-incorrect filenames in cmd, PowerShell, and in batch files, Python's multiprocessing library balks at this and throws a nasty series of errors.
Hopefully this helps someone. | Looks like you might be going down a rabbit-hole looking into `multiprocessing`. As the traceback shows, your python install is trying to look in the ArcGIS version of python before actually looking at your system install.
My guess is that the version of python that ships with ArcGIS is slightly customized for some reason or another and can't find your python script. The question then becomes:
>
> Why is your Windows machine looking in ArcGIS for python?
>
>
>
Without looking at your machine at a slightly lower level I can't quite be sure, but if I had to guess, you probably added the ArcGIS directory to your `PATH` variable in front of the standard python directory, so it looks in ArcGIS first. If you move the ArcGIS path to the end of your `PATH` variable it should resolve the problem.
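A quick sanity check (a minimal sketch) to see which interpreter and search path are actually in use when the error appears:
```
import sys
print(sys.executable)  # which python.exe is running
print(sys.path[:3])    # first places searched for modules
```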
Changing your `PATH` variable: <http://www.computerhope.com/issues/ch000549.htm> | 514 |
74,113,894 | I have a response from an API and it looks like this:
```
'224014@@@1;1=8.4=0;2=33=0;3=9.4=0@@@2;1=15=0;2=3.3=1;3=4.2=0;4=5.7=0;5=9.4=0;6=22=0@@@3;1=17=0;2=7.4=0;3=27=0@@@4;1=14=0;2=7.8=0;3=5.9=0;4=23=0;5=4.0=1'
```
I have split it up below for easier reading, with some explanation:
```
[1] The 6-digit number string means UPDATE TIME.
[2] The string is split by '@@@X', where X means Race No.
[3] For each race (after '@@@X') there is a pattern for each horse.
[4] For each horse, Horse_No, Odd & status are inside the pattern (e.g. 1=8.4=0),
connected using '='
[5] The number of races and the number of horses are not fixed (there may be more or fewer)
(UPDATE TIME)'224014
(Race 1)@@@1;1=8.4=0;2=33=0;3=9.4=0
(Race 2)@@@2;1=15=0;2=3.3=1;3=4.2=0;4=5.7=0;5=9.4=0;6=22=0
(Race 3)@@@3;1=17=0;2=7.4=0;3=27=0
(Race 4)@@@4;1=14=0;2=7.8=0;3=5.9=0;4=23=0;5=4.0=1'
```
Expected output using Python (I guess a regex is necessary):
```
[
{'Race_No':1,'Horse_No':1,"Odd":8.4,'status':0,'updatetime':224014},
{'Race_No':1,'Horse_No':2,"Odd":33,'status':0,'updatetime':224014},
{'Race_No':1,'Horse_No':3,"Odd":9.4,'status':0,'updatetime':224014},
{'Race_No':2,'Horse_No':1,"Odd":15,'status':0,'updatetime':224014},
{'Race_No':2,'Horse_No':2,"Odd":3.3,'status':1,'updatetime':224014},
{'Race_No':2,'Horse_No':3,"Odd":4.2,'status':0,'updatetime':224014},
{'Race_No':2,'Horse_No':4,"Odd":5.7,'status':0,'updatetime':224014},
{'Race_No':2,'Horse_No':5,"Odd":9.4,'status':0,'updatetime':224014},
{'Race_No':2,'Horse_No':6,"Odd":22,'status':0,'updatetime':224014},
{'Race_No':3,'Horse_No':1,"Odd":17,'status':0,'updatetime':224014},
{'Race_No':3,'Horse_No':2,"Odd":7.4,'status':0,'updatetime':224014},
{'Race_No':3,'Horse_No':3,"Odd":27,'status':0,'updatetime':224014},
{'Race_No':4,'Horse_No':1,"Odd":14,'status':0,'updatetime':224014},
{'Race_No':4,'Horse_No':2,"Odd":7.8,'status':0,'updatetime':224014},
{'Race_No':4,'Horse_No':3,"Odd":5.9,'status':0,'updatetime':224014},
{'Race_No':4,'Horse_No':4,"Odd":23,'status':0,'updatetime':224014},
{'Race_No':4,'Horse_No':5,"Odd":4.0,'status':1,'updatetime':224014}
]
``` | 2022/10/18 | [
"https://Stackoverflow.com/questions/74113894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19998897/"
] | Firstly, `sum` is the name of Python's built-in sum function, so shadowing it by naming a variable "sum" is bad practice.
To split the string across lines, try:
```py
total = 0
sq = ""
for i in range(2, 1001, 2):
    total += i
    if i < 1000:
        sq = sq + str(i) + ", "
    else:
        sq = sq + str(i)
    if i % 40 == 0:
        sq += "\n"
print(sq, end="\n")
print("Sum of all even numbers within 1 and 1000 =", total)
``` | #### solution
```py
sum=0
for i in range(2,1001,2):
sum+=i
if i%20 == 2: print("\n{}".format(i),end="") # start a new line every 10 numbers
else: print(", {}".format(i),end="")
print("\nSum of all even numbers within 1 and 1000 =",sum)
```
* Output:
```bash
2, 4, 6, 8, 10, 12, 14, 16, 18, 20
22, 24, 26, 28, 30, 32, 34, 36, 38, 40
...
962, 964, 966, 968, 970, 972, 974, 976, 978, 980
982, 984, 986, 988, 990, 992, 994, 996, 998, 1000
Sum of all even numbers within 1 and 1000 = 250500
```
---
#### another solution with better run time performance
```py
print("".join(["\n"+str(i) if i%20==2 else ", "+str(i) for i in range(2,1001,2)]))
print("\nSum of all even numbers within 1 and 1000 =",sum(range(2,1001,2)))
``` | 516 |
44,211,461 | What is the fastest way to combine 100 CSV files with headers into one with the following setup:
1. The total size of files is 200 MB. (The size is reduced to make the
computation time visible)
2. The files are located on an SSD with a maximum speed of 240 MB/s.
3. The CPU has 4 cores so multi-threading and multiple processes are
allowed.
4. There exists only one node (important for Spark)
5. The available memory is 15 GB. So the files easily fit into memory.
6. The OS is Linux (Debian Jessie)
7. The computer is actually a n1-standard-4 instance in Google Cloud.
(The detailed setup was included to make the scope of the question more specific. The changes were made according to [the feedback here](https://meta.stackoverflow.com/questions/349793/why-is-benchmarking-a-specific-task-in-multiple-languages-considered-too-broad))
File 1.csv:
```
a,b
1,2
```
File 2.csv:
```
a,b
3,4
```
Final out.csv:
```
a,b
1,2
3,4
```
According to my benchmarks the fastest from all the proposed methods is pure python. Is there any faster method?
**Benchmarks (Updated with the methods from comments and posts):**
```
Method Time
pure python 0.298s
sed 1.9s
awk 2.5s
R data.table 4.4s
R data.table with colClasses 4.4s
Spark 2 40.2s
python pandas 1min 11.0s
```
Versions of tools:
```
sed 4.2.2
awk: mawk 1.3.3 Nov 1996
Python 3.6.1
Pandas 0.20.1
R 3.4.0
data.table 1.10.4
Spark 2.1.1
```
**Code in Jupyter notebooks:**
sed:
```
%%time
!head temp/in/1.csv > temp/merged_sed.csv
!sed 1d temp/in/*.csv >> temp/merged_sed.csv
```
Pure Python, all-binary read/write, using the undocumented behavior of "next":
```
%%time
with open("temp/merged_pure_python2.csv","wb") as fout:
# first file:
with open("temp/in/1.csv", "rb") as f:
fout.write(f.read())
# now the rest:
for num in range(2,101):
with open("temp/in/"+str(num)+".csv", "rb") as f:
next(f) # skip the header
fout.write(f.read())
```
awk:
```
%%time
!awk 'NR==1; FNR==1{{next}} 1' temp/in/*.csv > temp/merged_awk.csv
```
R data.table:
```
%%time
%%R
filenames <- paste0("temp/in/",list.files(path="temp/in/",pattern="*.csv"))
files <- lapply(filenames, fread)
merged_data <- rbindlist(files, use.names=F)
fwrite(merged_data, file="temp/merged_R_fwrite.csv", row.names=FALSE)
```
R data.table with colClasses:
```
%%time
%%R
filenames <- paste0("temp/in/",list.files(path="temp/in/",pattern="*.csv"))
files <- lapply(filenames, fread,colClasses=c(
V1="integer",
V2="integer",
V3="integer",
V4="integer",
V5="integer",
V6="integer",
V7="integer",
V8="integer",
V9="integer",
V10="integer"))
merged_data <- rbindlist(files, use.names=F)
fwrite(merged_data, file="temp/merged_R_fwrite.csv", row.names=FALSE)
```
Spark (pyspark):
```
%%time
df = spark.read.format("csv").option("header", "true").load("temp/in/*.csv")
df.coalesce(1).write.option("header", "true").csv("temp/merged_pyspark.csv")
```
Python pandas:
```
%%time
import pandas as pd
interesting_files = glob.glob("temp/in/*.csv")
df_list = []
for filename in sorted(interesting_files):
df_list.append(pd.read_csv(filename))
full_df = pd.concat(df_list)
full_df.to_csv("temp/merged_pandas.csv", index=False)
```
Data was generated by:
```
%%R
df=data.table(replicate(10,sample(0:9,100000,rep=TRUE)))
for (i in 1:100){
write.csv(df,paste0("temp/in/",i,".csv"), row.names=FALSE)
}
``` | 2017/05/26 | [
"https://Stackoverflow.com/questions/44211461",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3596337/"
] | According to the benchmarks in the question, the fastest of all the proposed methods is pure Python using the undocumented "next()" behavior on binary files. The method was proposed by [Stefan Pochmann](https://stackoverflow.com/users/1672429/stefan-pochmann).
**Benchmarks (updated with the methods from comments and posts):**
```
Method Time
pure python 0.298s
sed 1.9s
awk 2.5s
R data.table 4.4s
R data.table with colClasses 4.4s
Spark 2 40.2s
python pandas 1min 11.0s
```
Versions of tools:
```
sed 4.2.2
awk: mawk 1.3.3 Nov 1996
Python 3.6.1
Pandas 0.20.1
R 3.4.0
data.table 1.10.4
Spark 2.1.1
```
Pure Python code:
```
with open("temp/merged_pure_python2.csv","wb") as fout:
# first file:
with open("temp/in/1.csv", "rb") as f:
fout.write(f.read())
# now the rest:
for num in range(2,101):
with open("temp/in/"+str(num)+".csv", "rb") as f:
next(f) # skip the header
fout.write(f.read())
``` | `sed` is probably the fastest. I would also propose an `awk` alternative
```
awk 'NR==1; FNR==1{next} 1' file* > output
```
prints the first line from the first file, then skips all other first lines from the rest of the files.
Timings:
I tried 100 files of 10,000 lines each, around 200 MB in total (not sure). Here is the worst timing on my server.
```
real 0m0.429s
user 0m0.360s
sys 0m0.068s
```
server specs (little monster)
```
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
Stepping: 2
CPU MHz: 2394.345
BogoMIPS: 4789.86
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 15360K
NUMA node0 CPU(s): 0-11
``` | 519 |
42,544,150 | I am using python-3.x, and I am trying to do mutation on a binary string that randomly flips one bit of the elements from 0 to 1 or 1 to 0. I tried some methods, but they didn't work, and I don't know where the problem is:
```
x=[0, 0, 0, 0, 0]
def mutation (x, muta):
for i in range(len(x)):
if random.random() < muta:
x[i] = type(x[i])(not x[i])
return x,
print (x)
```
The output should be, for example, x=[0, 0, 0, 1, 0] or x=[1, 0, 0, 0, 0] and so on....
Also, I tried this one:
```
MUTATION_RATE = 0.5
CHROMO_LEN = 6
def mutate(x):
x = ""
for i in range(CHROMO_LEN):
if (random.random() < MUTATION_RATE):
if (x[i] == 1):
x += 0
else:
x += 1
else:
x += x[i]
return x
print(x)
```
Please, any suggestion or advice will be appreciated. | 2017/03/01 | [
"https://Stackoverflow.com/questions/42544150",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7632116/"
] | If you don't want your object to pass through other objects, you should use a collider with [**isTrigger**](https://docs.unity3d.com/ScriptReference/Collider-isTrigger.html) unchecked (isTrigger should be false) and use the [OnCollisionEnter](https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnCollisionEnter.html) event instead of OnTriggerEnter. | You could just use `Bounds` instead of making many colliders. | 520
18,014,633 | Consider the following simple Python code:
```
f=open('raw1', 'r')
i=1
for line in f:
line1=line.split()
for word in line1:
print word,
print '\n'
```
In the first for loop, i.e. "for line in f:", how does Python know that I want to read a line and not a word or a character?
The second loop is clearer, as line1 is a list, so the second loop will iterate over the list elements. | 2013/08/02 | [
"https://Stackoverflow.com/questions/18014633",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2625987/"
] | Python has a notion of what are called "iterables". They're things that know how to let you traverse some data they hold. Some common iterables are lists, sets, dicts, pretty much every data structure. Files are no exception to this.
The way things become iterable is by defining a method to return an object with a `next` method. This `next` method is meant to be called repeatedly and return the next piece of data each time. The `for foo in bar` loops actually are just calling the `next` method repeatedly behind the scenes.
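A tiny demonstration of that protocol (a sketch):
```
it = iter([10, 20, 30])  # lists are iterable
print(next(it))          # 10 -- exactly what each step of a for-loop does
print(next(it))          # 20
```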
For files, the `next` method returns lines, that's it. It doesn't "know" that you want lines, it's just always going to return lines. The reason for this is that ~50% of cases involving file traversal are by line, and if you want words,
```
for word in (word for line in f for word in line.split(' ')):
...
```
works just fine. | In Python the **for..in** syntax is used over iterables (elements that can be iterated upon). For a file object, the iterator is the file itself.
Please refer [here](http://docs.python.org/release/2.5.2/lib/bltin-file-objects.html) to the documentation of the **next()** method - excerpt pasted below:
>
> A file object is its own iterator, for example iter(f) returns f
> (unless f is closed). When a file is used as an iterator, typically in
> a for loop (for example, for line in f: print line), the next() method
> is called repeatedly. This method returns the next input line, or
> raises StopIteration when EOF is hit when the file is open for reading
> (behavior is undefined when the file is open for writing). In order to
> make a for loop the most efficient way of looping over the lines of a
> file (a very common operation), the next() method uses a hidden
> read-ahead buffer. As a consequence of using a read-ahead buffer,
> combining next() with other file methods (like readline()) does not
> work right. However, using seek() to reposition the file to an
> absolute position will flush the read-ahead buffer. New in version
> 2.3.
>
>
> | 521 |
32,127,602 | After instantiating a deck (`deck = Deck()`), calling `deck.show_deck()` just prints out "two of diamonds" 52 times. The 'copy' part is as per [this answer](https://stackoverflow.com/questions/2196956/add-an-object-to-a-python-list), but doesn't seem to help. Any suggestions?
```
import copy
from card import Card
class Deck:
card_ranks = ['ace','king','queen','jack','ten','nine','eight','seven','six','five','four','three','two']
card_suites = ['clubs','hearts','spades','diamonds']
deck = []
def __init__(self):
#create a deck of 52 cards
for suite in Deck.card_suites:
for rank in Deck.card_ranks:
Deck.deck.append(copy.deepcopy(Card(card_rank=rank, card_suite=suite)))
def show_deck(self):
for item in Deck.deck:
print item.get_name()
```
Card:
```
class Card:
card_name = ''
def __init__(self, card_rank, card_suite):
self.card_rank = card_rank.lower()
self.card_suite = card_suite.lower()
Card.card_name = card_rank + " of " + card_suite
def get_name(self):
return Card.card_name
``` | 2015/08/20 | [
"https://Stackoverflow.com/questions/32127602",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2372996/"
] | The problem here is that the `Card` class has a name variable which is shared with all instances of the `Card` class.
When you have:
```
class Card:
card_name = ''
```
This means that all `Card` objects will have the same name (`card_name`) which is almost surely not what you want.
You have to make the name be part of the instance instead like so:
```
class Card:
def __init__(self, card_rank, card_suite):
self.card_rank = card_rank.lower()
self.card_suite = card_suite.lower()
self.card_name = card_rank + " of " + card_suite
def get_name(self):
return self.card_name
```
You will find that the `deepcopy` is not needed, nor was it ever needed, but it does show you that `deepcopy` will not allow you to keep different states of class variables.
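A tiny sketch of why `deepcopy` can't preserve per-copy states of a class variable:
```
import copy

class C:
    shared = 'first'

a = copy.deepcopy(C())
C.shared = 'second'
print(a.shared)  # 'second' -- class attributes stay shared, copies or not
```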
Further, I would recommend you change `Card` to have its own `__str__` method if you want to print it out:
```
class Card:
def __init__(self, card_rank, card_suite):
self.card_rank = card_rank.lower()
self.card_suite = card_suite.lower()
def __str__(self):
return "{0} of {1}".format(card_rank, card_suit)
```
This uses the Python language itself to print the class and has the upside that your class will now work properly in print statements and in conversions to strings. So instead of:
```
print some_card.get_name()
```
you could do
```
print some_card
``` | To expand on what shuttle87 said:
```
class Card:
card_name = ''
```
makes `card_name` a static variable (shared between all instances of that class)
Once you make the variable non-static (by using `self.card_name` in the `__init__` method) you won't have to worry about the copy part, as each instance of the card class will have its own unique name.
On that note, the `deck` in Deck is also static in your code.
```
from card import Card
class Deck:
# these 2 can be static, they never change between copies of the deck class
card_ranks = ['ace','king','queen','jack','ten','nine','eight','seven','six','five','four','three','two']
card_suites = ['clubs','hearts','spades','diamonds']
def __init__(self):
# this shouldn't be static since you might want to shuffle them
# or do other things that make them unique for each deck
self.cards = []
for suite in Deck.card_suites:
for rank in Deck.card_ranks:
self.cards.append(Card(rank, suite))
def show_deck(self):
for item in self.cards:
print item
```
---
```
class Card:
def __init__(self, rank, suite):
self.rank = rank
self.suite = suite
def __str__(self):
return self.rank + ' of ' + self.suite
```
---
```
#! python2
from deck import Deck
def main():
deck = Deck()
deck.show_deck()
if __name__ == '__main__':
main()
```
---
```
ace of clubs
king of clubs
queen of clubs
jack of clubs
...
``` | 522 |
47,701,629 | Is there a way to run an `ipython`-like debug console in VS Code that would allow tab completion and other sorts of things? | 2017/12/07 | [
"https://Stackoverflow.com/questions/47701629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2058333/"
] | Meanwhile, I have become a big fan of the [PDB++](https://pypi.org/project/pdbpp/) debugger for Python. It works like the IPython CLI, so I think the question has become obsolete specifically for me, but it may still have some value for others. | It seems this is a desired feature for VS Code but not yet implemented. See this post: <https://github.com/DonJayamanne/vscodeJupyter/issues/19>
I'm trying to see if one could use the config file of VS Code to define an ipython debug configuration e.g.:
```
{
    "name": "ipython",
    "type": "python",
    "request": "launch",
    "program": "${file}",
    "pythonPath": "/Users/tsando/anaconda3/bin/ipython"
}
```
but so far no luck. You can see my post in the above link. | 523 |
57,689,479 | I am converting pdfs to text and got this code off a previous post:
[Extracting text from a PDF file using PDFMiner in python?](https://stackoverflow.com/questions/26494211/extracting-text-from-a-pdf-file-using-pdfminer-in-python)
When I print(text) it has done exactly what I want, but then I need to save this to a text file, which is when I get the above error.
The code follows exactly the first answer on the linked question. Then I:
```
text = convert_pdf_to_txt("GMCA ECON.pdf")
file = open('GMCAECON.txt', 'w', 'utf-8')
file.write(text)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-5-ebc6b7708d93> in <module>
----> 1 file = open('GMCAECON.txt', 'w', 'utf-8')
2 file.write(text)
TypeError: an integer is required (got type str)
```
I'm afraid it's probably something really simple but I can't figure it out.
I want it to write the text to a text file with the same name, which I can then do further analysis on. Thanks. | 2019/08/28 | [
"https://Stackoverflow.com/questions/57689479",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11759292/"
] | The problem is your third argument. The third positional argument accepted by `open` is `buffering`, not `encoding`.
Call `open` like this:
```
open('GMCAECON.txt', 'w', encoding='utf-8')
```
and your problem should go away. | When you do `file = open('GMCAECON.txt', 'w', 'utf-8')` you pass positional arguments to `open()`. The third argument you pass is meant as `encoding`, but the third positional parameter `open()` expects is `buffering`. You need to pass `encoding` as a keyword argument, e.g.
`file = open('GMCAECON.txt', 'w', encoding='utf-8')`
Note that it's much better to use the `with` context manager:
```
with open('GMCAECON.txt', 'w', encoding='utf-8') as f:
f.write(text)
``` | 531 |
58,466,174 | We would like to remove a key and its values from a YAML file using Python, for example:
```
- misc_props:
- attribute: tmp-1
value: 1
- attribute: tmp-2
value: 604800
- attribute: tmp-3
value: 100
- attribute: tmp-4
value: 1209600
name: temp_key1
attr-1: 20
attr-2: 1
- misc_props:
- attribute: tmp-1
value: 1
- attribute: tmp-2
value: 604800
- attribute: tmp-3
value: 100
- attribute: tmp-4
value: 1209600
name: temp_key2
atrr-1: 20
attr-2: 1
```
From the above example, we would like to delete the whole property block where the key name matches the given value. For example, if we want to delete name: temp\_key2, the newly created document after the delete will look like below:
```
- misc_props:
- attribute: tmp-1
value: 1
- attribute: tmp-2
value: 604800
- attribute: tmp-3
value: 100
- attribute: tmp-4
value: 1209600
name: temp_key1
attr-1: 20
attr-2: 1
``` | 2019/10/19 | [
"https://Stackoverflow.com/questions/58466174",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5596456/"
] | It is not sufficient to delete a key-value pair to get your desired output.
```
import sys
import ruamel.yaml
yaml = ruamel.yaml.YAML()
with open('input.yaml') as fp:
data = yaml.load(fp)
del data[1]['misc_props']
yaml.dump(data, sys.stdout)
```
as that gives:
```
- misc_props:
- attribute: tmp-1
value: 1
- attribute: tmp-2
value: 604800
- attribute: tmp-3
value: 100
- attribute: tmp-4
value: 1209600
name: temp_key1
attr-1: 20
attr-2: 1
- name: temp_key2
atrr-1: 20
attr-2: 1
```
What you need to do is delete one of the items of the sequence that is
the root of the YAML structure:
```
del data[1]
yaml.dump(data, sys.stdout)
```
which gives:
```
- misc_props:
- attribute: tmp-1
value: 1
- attribute: tmp-2
value: 604800
- attribute: tmp-3
value: 100
- attribute: tmp-4
value: 1209600
name: temp_key1
attr-1: 20
attr-2: 1
``` | Did you try using the yaml module?
```
import yaml
with open('./old.yaml') as file:
old_yaml = yaml.full_load(file)
#This is the part of the code which filters out the undesired keys
    new_yaml = list(filter(lambda x: x['name'] != 'temp_key2', old_yaml))  # wrap in list: Python 3's filter returns an iterator
with open('./new.yaml', 'w') as file:
documents = yaml.dump(new_yaml, file)
``` | 532 |
11,211,650 | I'm using Python 2.7 on Windows and I am writing a script that uses both the time and datetime modules. I've done this before, but Python seems to be touchy about having both modules loaded, and the methods I've used before don't seem to be working. Here are the different syntaxes I've used and the errors I am currently getting.
First I tried:
```
from datetime import *
from time import *
...
checktime = datetime.today() - timedelta(days=int(2))
checktime = checktime.timetuple()
...
filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn)
file = webgatelogdir + '/' + fn
filetime = localtime(filetimesecs)
...
else: time.sleep(60)
```
ERROR:
`else: time.sleep(60) AttributeError: 'builtin_function_or_method' object has no attribute 'sleep'`
Then I tried:
```
from datetime import *
from time import *
...
checktime = datetime.today() - timedelta(days=int(2))
checktime = checktime.timetuple()
...
filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn)
file = webgatelogdir + '/' + fn
filetime = localtime(filetimesecs)
...
#else: time.sleep(60) # comment out time.sleep statement
```
and I got no errors, but no sleep delay either.
Next I tried:
```
from datetime import *
import time
...
checktime = datetime.today() - timedelta(days=int(2))
checktime = checktime.timetuple()
...
filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn)
file = webgatelogdir + '/' + fn
filetime = localtime(filetimesecs)
...
#else: time.sleep(60) # comment out time.sleep statement
```
ERROR:
`filetime = localtime(filetimesecs) NameError: name 'localtime' is not defined`
Another modification and I tried this:
```
import time
import datetime
...
checktime = datetime.today() - timedelta(days=int(2))
checktime = checktime.timetuple()
...
filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn)
file = webgatelogdir + '/' + fn
filetime = localtime(filetimesecs)
...
#else: time.sleep(60) # comment out time.sleep statement
```
ERROR
`checktime = datetime.today() - timedelta(days=int(2)) AttributeError: 'module' object has no attribute 'today'`
Finally, I tried this:
```
import time
from datetime import *
...
checktime = datetime.today() - timedelta(days=int(2))
checktime = checktime.timetuple()
...
filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn)
file = webgatelogdir + '/' + fn
filetime = localtime(filetimesecs)
...
#else: time.sleep(60) # comment out time.sleep statement
```
ERROR:
`checktime = datetime.today() - timedelta(days=int(2))
AttributeError: 'module' object has no attribute 'today'`
So I'm not sure how to get the two modules to play nicely. Or I need another method to put a delay in the script.
Suggestions? Or pointers to mistakes that I made?
Thanks. | 2012/06/26 | [
"https://Stackoverflow.com/questions/11211650",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1070061/"
] | You can use **as** while importing time.
```
import time as t
from datetime import datetime
...
t.sleep(2)
``` | Don't use `from ... import *` - this is a convenience syntax for interactive use, and it leads to confusion in scripts.
Here's a version that should work:
```
import time
import datetime
...
checktime = datetime.datetime.today() - datetime.timedelta(days=int(2))
checktime = checktime.timetuple()
...
filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn)
file = webgatelogdir + '/' + fn
filetime = time.localtime(filetimesecs)
...
#else: time.sleep(60) # comment out time.sleep statement
```
When importing the modules using `import <modulename>`, you of course need to use fully qualified names for all names in these modules | 533 |
19,034,959 | I need to install some modules for Python on Ubuntu Linux 12.04. I want pygame and livewires, but I'm not sure how to install them.
I have the .py file for livewires, which has been specially edited (from a book I'm reading), and I want to install it, but I'm not sure how. I also want to install pygame. | 2013/09/26 | [
"https://Stackoverflow.com/questions/19034959",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2765940/"
] | There are two nice ways to install Python packages on Ubuntu (and similar Linux systems):
```
sudo apt-get install python-pygame
```
to use the Debian/Ubuntu package manager APT. This only works for packages that are shipped by Ubuntu, unless you change the APT configuration, and in particular there seems to be no PyGame package for Python 3.
The other option is to use PIP, the Python package manager:
```
sudo apt-get install python3-pip
```
to install it, then
```
sudo pip3 install pygame
```
to fetch the PyGame package from [PyPI](https://pypi.python.org/pypi) and install it for Python 3. PIP has some limitations compared to APT, but it does always fetch the latest version of a package instead of the one that the Ubuntu packagers have chosen to ship.
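A quick way to confirm the install worked (a minimal check):
```
import pygame
print(pygame.version.ver)  # prints the installed pygame version
```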
**EDIT**: to repeat what I said in the comment, `pip3` isn't in Ubuntu 12.04 yet. It can still be installed with
```
sudo apt-get install python3-setuptools
sudo easy_install3 pip
sudo apt-get purge python-pip
```
After this, `pip` is the Python 3 version of PIP, instead of `pip3`. The last command is just for safety; there might be a Python 2 PIP installed as `/usr/bin/pip`. | You can use several approaches:
1 - Download the package yourself. This is what I use the most. If the package follows the specifications, you should be able to install it by moving to its uncompressed folder and typing in the console:
```
python setup.py build
python setup.py install
```
2 - Use pip. Pip is pretty straightforward. In the console, you have to type:
```
pip install package_name
```
You can obtain pip here <https://pypi.python.org/pypi/pip> and install it with method 1.
One thing to note: if you aren't using a virtualenv, you'll have to add sudo before those commands (not recommended). | 543
54,396,228 | I am trying to build a chat app with Django, but when I try to run it I get this error:
```
No application configured for scope type 'websocket'
```
my routing.py file is
```
from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter , URLRouter
import chat.routing
application = ProtocolTypeRouter({
# (http->django views is added by default)
'websocket':AuthMiddlewareStack(
URLRouter(
chat.routing.websocket_urlpatterns
)
),
})
```
my settings.py is
```
ASGI_APPLICATION = 'mychat.routing.application'
CHANNEL_LAYERS = {
'default': {
'BACKEND': 'channels_redis.core.RedisChannelLayer',
'CONFIG': {
"hosts": [('127.0.0.1', 6379)],
},
},
}
```
When I open my URL in two tabs, I should be able to see the messages that I posted in the first tab appear in the second tab, but I am getting an error:
```
[Failure instance: Traceback: <class 'ValueError'>: No application configured for scope type 'websocket'
/home/vaibhav/.local/lib/python3.6/site-packages/autobahn/websocket/protocol.py:2801:processHandshake
/home/vaibhav/.local/lib/python3.6/site-packages/txaio/tx.py:429:as_future
/home/vaibhav/.local/lib/python3.6/site-packages/twisted/internet/defer.py:151:maybeDeferred
/home/vaibhav/.local/lib/python3.6/site-packages/daphne/ws_protocol.py:82:onConnect
--- <exception caught here> ---
/home/vaibhav/.local/lib/python3.6/site-packages/twisted/internet/defer.py:151:maybeDeferred
/home/vaibhav/.local/lib/python3.6/site-packages/daphne/server.py:198:create_application
/home/vaibhav/.local/lib/python3.6/site-packages/channels/staticfiles.py:41:__call__
/home/vaibhav/.local/lib/python3.6/site-packages/channels/routing.py:61:__call__
]
WebSocket DISCONNECT /ws/chat/lobby/ [127.0.0.1:34724]
```
I couldn't find a duplicate of this question on Stack Overflow. | 2019/01/28 | [
"https://Stackoverflow.com/questions/54396228",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10974783/"
] | Your XPath can be more specific; I would suggest you take an incremental approach. First try:
```
driver.find_element_by_xpath('//*[@id="form1"]//div[@class="screen-group-content"]')
```
If the above returns a match, try:
```
driver.find_element_by_xpath('//*[@id="form1"]//div[@class="screen-group-content"]//table[@class="asureTable"]')
```
If the above matches too, then you can get rows and data by index on that XPath; see the sketch below.
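For example, a hedged sketch (selenium 3 method names; the tr/td layout is an assumption about the table):
```
table = driver.find_element_by_xpath('//*[@id="form1"]//div[@class="screen-group-content"]//table[@class="asureTable"]')
for row in table.find_elements_by_tag_name('tr'):
    print([cell.text for cell in row.find_elements_by_tag_name('td')])
```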
Also, do check for any frames in the upper hierarchy of the HTML snippet attached in your post. | Did you try using regular expressions?
Using **Selenium**:
```
import re
from selenium import webdriver
# n = webdriver.Firefox() or n = webdriver.Chrome()
n.get(your_url)  # selenium's navigation method is get(), not get_url()
html_source_code = str(n.page_source)
# Using a regular expression
# The element that you want to fetch/collect
# will be inside of the 'values' variable
values = re.findall( r'title=\"View Check Detail\"\>(.+)\</td>', html_source_code )
```
**Update:** If the content is inside of an **iframe**, using **selenium + Chrome driver** you can do this:
```
from selenium import webdriver
from selenium.webdriver.chrome import options
o = options.Options()
o.headless = True
n = webdriver.Chrome(options=o)
n.get(your_url)  # again, get() rather than get_url()
links = n.find_elements_by_tag_name("iframe")
outer = [ e.get_attribute("src") for e in links]
# In the best case outer will be a list of strings,
# each element containing the value of the src attribute.
# Compute the correct element inside of outer
n.get(correct_outer_element)  # get(), not get_url()
# This will make a 'new' html code.
# Create a new xpath and fetch the data!
``` | 552 |
21,318,968 | I have a textfield in a database that contains the results of a python `json.dumps(list_instance)` operation. As such, the internal fields have a `u'` prefix, and break the browser's `JSON.parse()` function.
An example of the JSON string is
```
"density": "{u'Penobscot': 40.75222856500098, u'Sagadahoc':
122.27083333333333, u'Lincoln': 67.97977755308392, u'Kennebec':
123.12237174095878, u'Waldo': 48.02117802779616, u'Cumberland':
288.9285325791363, u'Piscataquis': 3.9373586457405247, u'Hancock':
30.698239582715903, u'Washington': 12.368718341168325, u'Aroostook':
10.827378163074039, u'York': 183.47612497543722, u'Franklin':
16.89330963710371, u'Oxford': 25.171240748402518, u'Somerset':
12.425648288323485, u'Knox': 108.48302300109529, u'Androscoggin':
208.75502815768303}"
```
What I'd like to do is replace those occurrences of `u'` with a `'`(single-quote). I've tried
```
function renderValues(data){
var pop = JSON.parse(data.density.replace(/u'/g, "'"));
}
```
but I'm always getting a `unexpected token '` exception. Since many of the possible key fields may contain a `u`, it is not feasable to just delete that character. How can I find all instances of `u'` and replace with `'` without getting the exception? | 2014/01/23 | [
"https://Stackoverflow.com/questions/21318968",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/214892/"
] | Updated solution: `replace(/u'/g, "'"));` => `replace(/u'(?=[^:]+')/g, "'"));`.
Tested with the following:
`"{u'Penobscot': 40.75222856500098, u'Sagadahoc': 122.27083333333333, u'Lincoln': 67.97977755308392, u'Kennebec': 123.12237174095878, u'Waldo': 48.02117802779616, u'Cumberland': 288.9285325791363, u'Piscataquis': 3.9373586457405247, u'Hancock': 30.698239582715903, u'Timbuktu': 12.368718341168325, u'Aroostook': 10.827378163074039, u'York': 183.47612497543722, u'Franklin': 16.89330963710371, u'Oxford': 25.171240748402518, u'Somerset': 12.425648288323485, u'Knox': 108.48302300109529, u'Androscoggin': 208.75502815768303}".replace(/u'(?=[^:]+')/g, "'");`
results in:
`"{'Penobscot': 40.75222856500098, 'Sagadahoc': 122.27083333333333, 'Lincoln': 67.97977755308392, 'Kennebec': 123.12237174095878, 'Waldo': 48.02117802779616, 'Cumberland': 288.9285325791363, 'Piscataquis': 3.9373586457405247, 'Hancock': 30.698239582715903, 'Timbuktu': 12.368718341168325, 'Aroostook': 10.827378163074039, 'York': 183.47612497543722, 'Franklin': 16.89330963710371, 'Oxford': 25.171240748402518, 'Somerset': 12.425648288323485, 'Knox': 108.48302300109529, 'Androscoggin': 208.75502815768303}"` | a little bit old in the answer but if there is no way to change or access the server response try with:
```
var strExample = "{'att1': u'something with u'}"; // the Python repr arrives as a string in JS
var fixed = strExample.replace(/u'(?=[,}])/g, "ç'").replace(/u'/g, "'").replace(/ç'/g, "u'");
// fixed === "{'att1': 'something with u'}"
```
The first replace handles the u' at the trailing end of a value inside the object (the lookahead keeps the following , or }), changing it to the 'ç' character; the second removes the u from the Python unicode prefix; and the last changes ç' back to u' like the original. | 555
20,332,359 | I'm trying to use Python's default logging module in a multiprocessing scenario.
I've read:
1. [Python MultiProcess, Logging, Various Classes](https://stackoverflow.com/questions/17582155/python-multiprocess-logging-various-classes)
2. [Logging using multiprocessing](https://stackoverflow.com/questions/10665090/logging-using-multiprocessing)
and multiple other posts about multiprocessing, logging, Python classes and such.
After all this reading I've come to this piece of code, which uses the QueueHandler from Python's logutils package, and which I cannot make run properly:
```
import sys
import logging
from logging import INFO
from multiprocessing import Process, Queue as mpQueue
import threading
import time
from logutils.queue import QueueListener, QueueHandler
class Worker(Process):
def __init__(self, n, q):
super(Worker, self).__init__()
self.n = n
self.queue = q
self.qh = QueueHandler(self.queue)
self.root = logging.getLogger()
self.root.addHandler(self.qh)
self.root.setLevel(logging.DEBUG)
self.logger = logging.getLogger("W%i"%self.n)
def run(self):
self.logger.info("Worker %i Starting"%self.n)
for i in xrange(10):
self.logger.log(INFO, "testing %i"%i)
self.logger.log(INFO, "Completed %i"%self.n)
def listener_process(queue):
while True:
try:
record = queue.get()
if record is None:
break
logger = logging.getLogger(record.name)
logger.handle(record)
except (KeyboardInterrupt, SystemExit):
raise
except:
import sys, traceback
print >> sys.stderr, 'Whoops! Problem:'
traceback.print_exc(file=sys.stderr)
if __name__ == "__main__":
mpq = mpQueue(-1)
root = logging.getLogger()
h = logging.StreamHandler()
f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s')
h.setFormatter(f)
root.addHandler(h)
l = logging.getLogger("Test")
l.setLevel(logging.DEBUG)
listener = Process(target=listener_process,
args=(mpq,))
listener.start()
workers=[]
for i in xrange(1):
worker = Worker(i, mpq)
worker.daemon = True
worker.start()
workers.append(worker)
for worker in workers:
worker.join()
mpq.put_nowait(None)
listener.join()
for i in xrange(10):
l.info("testing %i"%i)
print "Finish"
```
If the code is executed, the output somehow repeats lines like:
```
2013-12-02 16:44:46,002 Worker-2 W0 INFO Worker 0 Starting
2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 0
2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 1
2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 2
2013-12-02 16:44:46,002 Worker-2 W0 INFO Worker 0 Starting
2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 3
2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 0
2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 1
2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 4
2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 2
2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 3
2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 5
2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 4
2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 6
2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 5
2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 7
2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 6
2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 8
2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 7
2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 9
2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 8
2013-12-02 16:44:46,004 Worker-2 W0 INFO Completed 0
2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 9
2013-12-02 16:44:46,004 Worker-2 W0 INFO Completed 0
2013-12-02 16:44:46,005 MainProcess Test INFO testing 0
2013-12-02 16:44:46,005 MainProcess Test INFO testing 1
2013-12-02 16:44:46,005 MainProcess Test INFO testing 2
2013-12-02 16:44:46,005 MainProcess Test INFO testing 3
2013-12-02 16:44:46,005 MainProcess Test INFO testing 4
2013-12-02 16:44:46,005 MainProcess Test INFO testing 5
2013-12-02 16:44:46,006 MainProcess Test INFO testing 6
2013-12-02 16:44:46,006 MainProcess Test INFO testing 7
2013-12-02 16:44:46,006 MainProcess Test INFO testing 8
2013-12-02 16:44:46,006 MainProcess Test INFO testing 9
Finish
```
In other questions it's suggested that the handler gets added more than once, but, as you can see, I only add the StreamHandler once in the **main** method.
I've already tested embedding the **main** method into a class with the same result.
EDIT:
as @max suggested (or what I believe he said) I've modified the code of the worker class as:
```
class Worker(Process):
    root = logging.getLogger()
    qh = None

    def __init__(self, n, q):
        super(Worker, self).__init__()
        self.n = n
        self.queue = q

        if not self.qh:
            Worker.qh = QueueHandler(self.queue)
            Worker.root.addHandler(self.qh)
            Worker.root.setLevel(logging.DEBUG)

        self.logger = logging.getLogger("W%i"%self.n)
        print self.root.handlers

    def run(self):
        self.logger.info("Worker %i Starting"%self.n)
        for i in xrange(10):
            self.logger.log(INFO, "testing %i"%i)
        self.logger.log(INFO, "Completed %i"%self.n)
```
With the same results. Now the queue handler is not added again and again, but there are still duplicate log entries, even with just one worker.
EDIT2:
I've changed the code a little bit. I changed the listener process and now use a QueueListener (that's what I intended in the beginning anyway), and moved the main code to a class.
```
import sys
import logging
from logging import INFO
from multiprocessing import Process, Queue as mpQueue
import threading
import time
from logutils.queue import QueueListener, QueueHandler
root = logging.getLogger()
added_qh = False
class Worker(Process):
    def __init__(self, logconf, n, qh):
        super(Worker, self).__init__()
        self.n = n
        self.logconf = logconf
        # global root
        global added_qh
        if not added_qh:
            added_qh = True
            root.addHandler(qh)
            root.setLevel(logging.DEBUG)
        self.logger = logging.getLogger("W%i"%self.n)
        #print root.handlers

    def run(self):
        self.logger.info("Worker %i Starting"%self.n)
        for i in xrange(10):
            self.logger.log(INFO, "testing %i"%i)
        self.logger.log(INFO, "Completed %i"%self.n)

class Main(object):
    def __init__(self):
        pass

    def start(self):
        mpq = mpQueue(-1)
        qh = QueueHandler(mpq)
        h = logging.StreamHandler()
        ql = QueueListener(mpq, h)
        #h.setFormatter(f)
        root.addHandler(qh)

        l = logging.getLogger("Test")
        l.setLevel(logging.DEBUG)

        workers = []
        for i in xrange(15):
            worker = Worker(logconf, i, qh)
            worker.daemon = True
            worker.start()
            workers.append(worker)

        for worker in workers:
            print "joining worker: {}".format(worker)
            worker.join()

        mpq.put_nowait(None)
        ql.start()
        # listener.join()

        for i in xrange(10):
            l.info("testing %i"%i)

if __name__ == "__main__":
    x = Main()
    x.start()
    time.sleep(10)
    print "Finish"
```
Now it **mostly** works until I reach a certain number of workers (~15), when for some reason the Main class gets blocked in the join and the rest of the workers do nothing. | 2013/12/02 | [
"https://Stackoverflow.com/questions/20332359",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3057996/"
] | I'm coming late, so you probably don't need the answer anymore. The problem comes from the fact that you already have a handler set in your main process, and in your worker you are adding another one. This means that in your worker process, two handlers are in fact managing your data: one pushing the log to the queue, and one writing to the stream.
You can fix this simply by adding an extra line `self.root.handlers = []` to your code. From your original code, the `__init__` method of the worker would look like this:
```
def __init__(self, n, q):
    super(Worker, self).__init__()
    self.n = n
    self.queue = q
    self.qh = QueueHandler(self.queue)
    self.root = logging.getLogger()
    self.root.handlers = []
    self.root.addHandler(self.qh)
    self.root.setLevel(logging.DEBUG)
    self.logger = logging.getLogger("W%i"%self.n)
```
The output now looks like this:
```
python workers.py
2016-05-12 10:07:02,971 Worker-2 W0 INFO Worker 0 Starting
2016-05-12 10:07:02,972 Worker-2 W0 INFO testing 0
2016-05-12 10:07:02,973 Worker-2 W0 INFO testing 1
2016-05-12 10:07:02,973 Worker-2 W0 INFO testing 2
2016-05-12 10:07:02,973 Worker-2 W0 INFO testing 3
2016-05-12 10:07:02,973 Worker-2 W0 INFO testing 4
2016-05-12 10:07:02,973 Worker-2 W0 INFO testing 5
2016-05-12 10:07:02,973 Worker-2 W0 INFO testing 6
2016-05-12 10:07:02,973 Worker-2 W0 INFO testing 7
2016-05-12 10:07:02,973 Worker-2 W0 INFO testing 8
2016-05-12 10:07:02,973 Worker-2 W0 INFO testing 9
2016-05-12 10:07:02,973 Worker-2 W0 INFO Completed 0
Finish
``` | All your `Worker`s share the same root logger object (obtained in `Worker.__init__` -- the `getLogger` call always returns the same logger). However, every time you create a `Worker`, you add a handler (`QueueHandler`) to that logger.
So if you create 10 Workers, you will have 10 (identical) handlers on your root logger, which means output gets repeated 10 times.
Instead, you should make the logger a module attribute rather than an instance attribute, and configure it once at the module level -- not at the class level.
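For illustration, a minimal sketch of that module-level arrangement (the names here are placeholders, not the question's exact code):
```
# sketch: configure the logger once at module level, not per Worker instance
import logging
from multiprocessing import Process

logger = logging.getLogger(__name__)  # one module-level logger, shared

class Worker(Process):
    def __init__(self, n):
        super(Worker, self).__init__()
        self.n = n

    def run(self):
        # no addHandler call lives here, so workers can't stack up duplicates
        logger.info("Worker %i starting", self.n)
```
Because no `addHandler` call happens in `__init__`, creating ten workers no longer attaches ten handlers.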
(actually, loggers should be configured once at the *program* level) | 559 |
32,209,155 | I'm working through <http://www.mypythonquiz.com>, and [question #45](http://www.mypythonquiz.com/question.php?qid=255) asks for the output of the following code:
```
confusion = {}
confusion[1] = 1
confusion['1'] = 2
confusion[1.0] = 4
sum = 0
for k in confusion:
    sum += confusion[k]
print sum
```
The output is `6`, since the key `1.0` replaces `1`. This feels a bit dangerous to me, is this ever a useful language feature? | 2015/08/25 | [
"https://Stackoverflow.com/questions/32209155",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2401331/"
] | First of all: the behaviour is documented explicitly in the docs for the [hash](https://docs.python.org/3.5/library/functions.html#hash) function:
>
> **`hash(object)`**
>
>
> Return the hash value of the object (if it has one). Hash values are
> integers. They are used to quickly compare dictionary keys during a
> dictionary lookup. **Numeric values that compare equal have the same
> hash value (even if they are of different types, as is the case for `1`
> and `1.0`).**
>
>
>
Secondly, a limitation of hashing is pointed out in the docs for [`object.__hash__`](https://docs.python.org/3.5/reference/datamodel.html#object.__hash__)
>
> **`object.__hash__(self)`**
>
>
> Called by built-in function [`hash()`](https://docs.python.org/3.5/library/functions.html#hash) and for operations on members of
> hashed collections including `set`, `frozenset`, and `dict. __hash__()`
> should return an integer. **The only required property is that objects
> which compare equal have the same hash value;**
>
>
>
This is not unique to python. Java has the same caveat: if you implement `hashCode` then, in order for things to work correctly, you **must** implement it in such a way that: `x.equals(y)` implies `x.hashCode() == y.hashCode()`.
So, python decided that `1.0 == 1` holds, hence it's *forced* to provide an implementation for `hash` such that `hash(1.0) == hash(1)`. The side effect is that `1.0` and `1` act exactly in the same way as `dict` keys, hence the behaviour.
In other words the behaviour in itself doesn't have to be used or useful in any way. **It is necessary**. Without that behaviour there would be cases where you could accidentally overwrite a different key.
If we had `1.0 == 1` but `hash(1.0) != hash(1)` we could still have a *collision*. And if `1.0` and `1` collide, the `dict` will use equality to be sure whether they are the same key or not and *kaboom* the value gets overwritten even if you intended them to be different.
The only way to avoid this would be to have `1.0 != 1`, so that the `dict` is able to distinguish between them even in case of collision. But it was deemed more important to have `1.0 == 1` than to avoid the behaviour you are seeing, since you practically never use `float`s and `int`s as dictionary keys anyway.
Since python tries to hide the distinction between numbers by automatically converting them when needed (e.g. `1/2 -> 0.5`) it makes sense that this behaviour is reflected even in such circumstances. It's more consistent with the rest of python.
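A quick interactive demonstration of the consequence (an illustrative snippet, not part of the original answer):
```
>>> hash(1) == hash(1.0)
True
>>> d = {1: 'int'}
>>> d[1.0] = 'float'   # same key as far as the dict is concerned
>>> d
{1: 'float'}
```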
---
This behaviour would appear in *any* implementation where the matching of the keys is at least partially (as in a hash map) based on comparisons.
For example if a `dict` was implemented using a red-black tree or another kind of balanced BST, when the key `1.0` is looked up the comparisons with other keys would return the same results as for `1` and so they would still act in the same way.
Hash maps require even more care because of the fact that it's the value of the hash that is used to find the entry of the key and comparisons are done only afterwards. So breaking the rule presented above means you'd introduce a bug that's quite hard to spot because at times the `dict` may seem to work as you'd expect it, and at other times, when the size changes, it would start to behave incorrectly.
---
Note that there *would* be a way to fix this: have a separate hash map/BST for each type inserted in the dictionary. In this way there couldn't be any collisions between objects of different type and how `==` compares wouldn't matter when the arguments have different types.
However this would complicate the implementation, and it would probably be inefficient, since hash maps have to keep quite a few free locations in order to have O(1) access times. If they become too full, performance decreases. Having multiple hash maps means wasting more space, and you'd also need to first choose which hash map to look at before even starting the actual lookup of the key.
If you used BSTs you'd first have to look up the type and then perform a second lookup. So if you are going to use many types you'd end up with twice the work (and the lookup would take O(log n) instead of O(1)). | Frankly, the opposite is dangerous! `1 == 1.0`, so it's not improbable to imagine that if you had them point to different keys and tried to access them based on an evaluated number then you'd likely run into trouble with it because the ambiguity is hard to figure out.
Dynamic typing means that the value is more important than what the technical type of something is, since the type is malleable (which *is* a very useful feature) and so distinguishing both `ints` and `floats` of the same value as distinct is unnecessary semantics that will only lead to confusion. | 562 |
8,758,354 | I've been using Python 3 for some months and I would like to create some GUIs. Does anyone know a good GUI Python GUI framework I could use for this?
I don't want to use [TkInter](http://wiki.python.org/moin/TkInter) because I don't think it's very good. I also don't want to use [PyQt](http://wiki.python.org/moin/PyQt) due to its licensing requirements in a commercial application. | 2012/01/06 | [
"https://Stackoverflow.com/questions/8758354",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1114830/"
] | Hummm. . . .
Hard to believe that Qt is forbidden for commercial use, as it has been created by some of the most important companies in the world . . . <http://qt.nokia.com/>
Go for pyQt ;) | PySide might be the best bet for you:
<http://www.pyside.org/>
It is basically Qt but under the LGPL license, which means you can use it in your commercial application. | 572 |
68,935,814 | I would like to know how to run the following cURL request using python (I'm working in Jupyter notebook):
```
curl -i -X GET "https://graph.facebook.com/{graph-api-version}/oauth/access_token?
grant_type=fb_exchange_token&
client_id={app-id}&
client_secret={app-secret}&
fb_exchange_token={your-access-token}"
```
I've seen some similar questions and answers suggesting using "requests.get", but I am a complete python newbie and am not sure how to structure the syntax for whole request including the id, secret and token elements. Any help would be really appreciated.
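For reference, a minimal hedged sketch of what the equivalent `requests` call could look like — the bracketed values simply mirror the placeholders in the curl command above and are not real parameters:
```
import requests

url = "https://graph.facebook.com/{graph-api-version}/oauth/access_token"
params = {
    "grant_type": "fb_exchange_token",
    "client_id": "{app-id}",
    "client_secret": "{app-secret}",
    "fb_exchange_token": "{your-access-token}",
}
resp = requests.get(url, params=params)  # requests builds the query string
print(resp.status_code, resp.json())
```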
Thanks! | 2021/08/26 | [
"https://Stackoverflow.com/questions/68935814",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16757711/"
] | `decode(String)` returns a `byte[]`, you need to convert that to a string using a `String` constructor and not the `toString()` method:
```java
byte[] bytes = java.util.Base64.getDecoder().decode(encodedstring);
String s = new String(bytes, java.nio.charset.StandardCharsets.UTF_8);
``` | It looks like you need the MIME decoder for this message:
```java
java.util.Base64.Decoder decoder = java.util.Base64.getMimeDecoder();
// Decoding MIME encoded message
String dStr = new String(decoder.decode(encodedstring));
System.out.println("Decoded message: "+dStr);
``` | 580 |
58,706,091 | I am using cplex .dll file in python to solve a well-formulated lp problem using pulp solver. Here is the code
Here `model` is a pulp object created using the pulp library.
When I run `a.actualSolve(model)` I get the following error from subprocess.py:
OSError: [WinError 193] %1 is not a valid Win32 application
I tried with Python 32 bit and 64 bit but couldn't solve it.
`import pulp`
`a = pulp.solvers.CPLEX_CMD("cplex dll file location")`
`a.actualSolve(model)`
I expect the cplex dll file to solve my formulated optimization model and give me a solution for all the variables. | 2019/11/05 | [
"https://Stackoverflow.com/questions/58706091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6369726/"
] | Like the error says, you need to put the closing curly brace on the same line as the subsequent block after the `else`:
```
if (err.status === 'not found') {
  cb({ statusCode: 404 })
  return
} else { // <--- now } is on same line as {
  cb({ statusCode: 500 })
  return
}
```
From an example from [the docs](https://standardjs.com/rules-en.html) on Standard linting:
>
> Keep else statements on the same line as their curly braces.
>
>
> eslint: brace-style
>
>
>
> ```
> // ✓ ok
> if (condition) {
>   // ...
> } else {
>   // ...
> }
>
> // ✗ avoid
> if (condition) {
>   // ...
> }
> else {
>   // ...
> }
>
> ```
>
> | **Use the below format when you face the above error in TypeScript ESLint.**
```
if (Logic1) {
  //content1
} else if (Logic2) {
  //content2
} else if (Logic3) {
  //content3
} else {
  //content4
}
``` | 581 |
6,369,697 | When I run `python manage.py shell`, I can print out the python path
```
>>> import sys
>>> sys.path
```
What should I type to introspect all my django settings ? | 2011/06/16 | [
"https://Stackoverflow.com/questions/6369697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/450278/"
] | ```
from django.conf import settings
dir(settings)
```
and then choose an attribute from what `dir(settings)` has shown you, to say:
```
settings.name
```
where `name` is the attribute that is of your interest
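For example (`DEBUG` and `INSTALLED_APPS` are standard Django setting names, used here purely as an illustration):
```
from django.conf import settings

print(settings.DEBUG)
print(settings.INSTALLED_APPS)
```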
Alternatively:
```
settings.__dict__
```
prints all the settings. But it also prints the module's standard attributes, which may somewhat clutter the output. | To show all django settings (including default settings not specified in your local settings file):
```
from django.conf import settings
dir(settings)
``` | 582 |
8,114,826 | Hi I'm working on converting perl to python for something to do.
I've been looking at some code on hash tables in perl and I've come across a line of code where I really don't know how to do what it does in python. I know that it shifts the bit strings of page by 1.
```
%page_table = ();             # page table is a hash of hashes
%page_table_entry = (         # page table entry structure
    "dirty",      0,          # 0/1 boolean
    "referenced", 0,          # 0/1 boolean
    "valid",      0,          # 0/1 boolean
    "frame_no",   -1,         # -1 indicates an "x", i.e. the page isn't in ram
    "page",       0           # used for aging algorithm. 8 bit string.
);
@ram = ((-1) x $num_frames);
```
Could someone please give me an idea on how this would be represented in python? I've got the definitions of the hash tables done, they're just there as references as to what I'm doing. Thanks for any help that you can give me.
```
for($i=0; $i<@ram; $i++){
    $page_table{$ram[$i]}->{page} = $page_table{$ram[$i]}->{page} >> 1;
}
``` | 2011/11/13 | [
"https://Stackoverflow.com/questions/8114826",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1044593/"
] | The only thing confusing is that page table is a hash of hashes. $page\_table{$v} contains a hashref to a hash that contains a key 'page' whose value is an integer. The loop bitshifts that integer but is not very clear perl code. Simpler would be:
```
foreach my $v (@ram) {
    $page_table{$v}->{page} >>= 1;
}
```
Now the translation to python should be obvious:
```
for v in ram:
    page_table[v]['page'] >>= 1
``` | Woof! No wonder you want to try Python!
Yes, Python can do this because Python dictionaries (what you'd call hashes in Perl) can contain other arrays or dictionaries without doing references to them.
However, I **highly** suggest that you look into moving into object oriented programming. After looking at that assignment statement of yours, I had to lie down for a bit. I can't imagine trying to maintain and write an entire program like that.
Whenever you have to do a hash that contains an array, or an array of arrays, or a hash of hashes, you should be looking into using object oriented code. Object oriented code can prevent you from making all the sorts of errors that happen when you do that type of stuff. And, it can make your code much more readable -- even Perl code.
Take a look at the [Python Tutorial](http://docs.python.org/tutorial/) and take a look at the [Perl Object Oriented Tutorial](http://perldoc.perl.org/perlboot.html) and learn a bit about object oriented programming.
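As a rough sketch of what an object oriented version of the structure above could look like (field names taken from the question; an illustration, not tested production code):
```
class PageTableEntry(object):
    def __init__(self):
        self.dirty = 0        # 0/1 boolean
        self.referenced = 0   # 0/1 boolean
        self.valid = 0        # 0/1 boolean
        self.frame_no = -1    # -1 means the page isn't in RAM
        self.page = 0         # 8-bit value used by the aging algorithm

    def age(self):
        self.page >>= 1       # the bit shift from the original loop
```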
This is especially true in Python which was written from the ground up to be object oriented. | 591 |
62,393,428 | ```
drivers available with me
**python shell**
'''In [2]: pyodbc.drivers()'''
**Output:**
**Out[2]: ['SQL Server']**
code in settings.py django:
**Settings.py in django**
'''# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
DATABASES = {
    'default': {
        'ENGINE': 'sql_server.pyodbc',
        'NAME': 'dbname',
        'HOST': 'ansqlserver.database.windows.net',
        'USER': 'test',
        'PASSWORD': 'Password',
        'OPTIONS': {
            'driver': 'SQL Server',
        }
    }
}'''
**ERROR:**
**Trying to connect to Microsoft SQL Server, facing the below error**
```
File "C:\Local\Programs\Python\Python37\lib\site-packages\sql\_server\pyodbc\base.py", line 314,
in get\_new\_connectiontimeout=timeout)
django.db.utils.OperationalError: ('08001', '[08001] [Microsoft][ODBC SQL Server Driver]Neither DSNnor SERVER keyword supplied (0) (SQLDriverConnect); [08001] [Microsoft][ODBC SQL Server Driver]Invalid connection string attribute (0)') | 2020/06/15 | [
"https://Stackoverflow.com/questions/62393428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13458554/"
] | Though the documentation suggests using the framework `SearchView`, I've always found that the support/androidx `SearchView` plays nicer with the library components β e.g., `AppCompatActivity`, `MaterialToolbar`, etc. β though I'm not sure exactly what causes these little glitches. Indeed, using `androidx.appcompat.widget.SearchView` here in lieu of `android.widget.SearchView` for the `actionViewClass` got rid of that misplaced search icon upon expanding.
However, the `AutoCompleteTextView` inside the `SearchView` still has a similar search icon as a hint because it's not ending up with the right style. I initially expected that setting the `Toolbar` as the support `ActionBar` would've integrated that with the other relevant styles for the children, but it seems `SearchView`'s style, for some reason, is normally set with a `ThemeOverlay.*.ActionBar` on the `<*Toolbar>` acting as the `ActionBar`.
Though most sources seem to indicate that the various `ThemeOverlay.*.ActionBar` styles only adjust the `colorControlNormal` attribute, they actually set the `searchViewStyle` to the appropriate `Widget.*.SearchView.ActionBar` value, too, so it's doubly important that we add a proper overlay. For example, in keeping with changing to the `androidx` version:
```xml
<com.google.android.material.appbar.MaterialToolbar
    android:id="@+id/toolbar"
    android:theme="@style/ThemeOverlay.MaterialComponents.Dark.ActionBar"
    ... />
```
This could also work by setting that as the `actionBarTheme` in your `Activity`'s theme instead, but be warned that it can be overridden by attributes on the `<*Toolbar>` itself, like it would be in the given setup by `style="@style/Widget.MaterialComponents.Toolbar.Primary"`.
If you're not using Material Components, `ThemeOverlay.AppCompat` styles are available as well. And if you're using only platform classes, similar styles are available in the system namespace; e.g., `@android:style/ThemeOverlay.Material.Dark.ActionBar`.
---
The initial revision of this answer removed that hint icon manually, as at the time I was unaware of how exactly the given setup was failing. It shouldn't be necessary to do that now, but if you'd like to customize this further, that example simply replaced the menu `<item>`'s `app:actionViewClass` attribute with an `app:actionLayout` pointing to this layout:
```xml
<androidx.appcompat.widget.SearchView
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:id="@+id/search_view"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    app:searchHintIcon="@null" />
```
The `searchHintIcon` setting is all that was needed for the example here, but you can set whatever applicable `SearchView` attributes you'd like.
If you're going this route, it might be preferable to set `style="@style/Widget.AppCompat.SearchView.ActionBar"`, which includes the `searchHintIcon` setting, and ensures the correct overall style for the `SearchView`, as suggested by Artem Mostyaev in comments below. | The above method does not work for me. I don't know why, but I tried this and succeeded.
Refer to the search hint icon through SearchView and set its visibility to GONE:
```
ImageView icon = (ImageView) mSearchView.findViewById(androidx.appcompat.R.id.search_mag_icon);
icon.setVisibility(View.GONE);
```
And then add this line:
```
mSearchView.setIconified(false);
``` | 594 |
28,986,131 | I need to load 1460 files into a list, from a folder with 163.360 files.
I use the following python code to do this:
```
import os
import glob
Directory = 'C:\\Users\\Nicolai\\Desktop\\sealev\\dkss_all'
stationName = '20002'
filenames = glob.glob("dkss."+stationName+"*")
```
This has been running fine so far, but today when I booted my machine and ran the code it was just stuck on the last line. I tried to reboot, and it didn't help, in the end I just let it run, went to lunch break, came back and it was finished. It took 45 minutes. Now when I run it it takes less than a second, what is going on? Is this a cache thing? How can I prevent having to wait 45 minutes again? Any explanations would be much appreciated. | 2015/03/11 | [
"https://Stackoverflow.com/questions/28986131",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1972356/"
] | Presuming that `ls` on that same directory is just as slow, you can't reduce the total time needed for the directory listing operation. Filesystems are slow sometimes (which is why, yes, the operating system *does* cache directory entries).
However, there actually *is* something you can do in your Python code: You can operate on filenames as they come in, rather than waiting for the entire result to finish before the rest of your code even starts. Unfortunately, this is functionality not present in the standard library, meaning you need to call C functions.
See [Ben Hoyt's scandir module](https://github.com/benhoyt/scandir) for an implementation of this. See also [this StackOverflow question, describing the problem](http://stackoverflow.com/questions/4403598/list-files-in-a-folder-as-a-stream-to-begin-process-immediately).
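(Note: this functionality later landed in the standard library as `os.scandir` in Python 3.5; a hedged sketch of that standard-library variant, analogous to the example below:)
```
import os

prefix = 'dkss.20002.'
for entry in os.scandir('.'):      # yields entries lazily, as they come in
    if entry.name.startswith(prefix):
        pass  # process the matching file here
```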
Using scandir might look something like the following:
```
prefix = 'dkss.%s.' % stationName
for direntry in scandir(path='.'):
    if direntry.name.startswith(prefix):
        pass # do whatever work you want with this file here.
``` | Yes, it is a caching thing. Your harddisk is a slow peripheral, reading 163.360 filenames from it can take some time. Yes, your operating system caches that kind of information for you. Python has to wait for that information to be loaded before it can filter out the matching filenames.
You don't have to wait all that time again until your operating system decides to use the memory caching the directory information for something else, or you restart the computer. Since you rebooted your computer, the information was no longer cached. | 595 |
21,869,675 | ```
list_ = [(1, 'a'), (2, 'b'), (3, 'c')]
item1 = 1
item2 = 'c'
#hypothetical:
assert list_.index_by_first_value(item1) == 0
assert list_.index_by_second_value(item2) == 2
```
What would be the fastest way to emulate the `index_by_first/second_value` method in python?
If you don't understand what's going on; if you have a list of tuples (as is contained in `list_`), how would you go about finding the index of a tuple with the first/second value of the tuple being the element you want to index?
---
My best guess would be this:
```
[i[0] for i in list_].index(item1)
[i[1] for i in list_].index(item2)
```
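A lazier sketch of the same idea, added for illustration — it avoids building the intermediate list (it raises StopIteration instead of ValueError when nothing matches):
```
next(i for i, t in enumerate(list_) if t[0] == item1)
next(i for i, t in enumerate(list_) if t[1] == item2)
```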
But I'm interested in seeing what you guys will come up with. Any ideas? | 2014/02/19 | [
"https://Stackoverflow.com/questions/21869675",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3002473/"
] | At first, I thought along [the same lines as Nick T](https://stackoverflow.com/a/21869852/418413). Your method is fine if the number of tuples (N) is short. But of course a linear search is O(N). As the number of tuples increases, the time increases directly with it. You can get O(1) lookup time with a dict mapping the zeroth element of each tuple to its index:
```
{el[0]:idx for idx,el in enumerate(list_)}
```
But the cost of converting the list to a dict may be too high! Here are my results:
```
>>> from timeit import timeit as t
>>> t('[i[0] for i in list_].index(1)', "import random;list_=[(i,'a') for i in range(10)]; random.shuffle(list_)")
1.557116985321045
>>> t('[i[0] for i in list_].index(1)', "import random;list_=[(i,'a') for i in range(100)]; random.shuffle(list_)")
7.415766954421997
>>> t('{el[0]:idx for idx,el in enumerate(list_)}[1]', "import random;list_=[(i,'a') for i in range(10)]; random.shuffle(list_)")
2.1753010749816895
>>> t('{el[0]:idx for idx,el in enumerate(list_)}[1]', "import random;list_=[(i,'a') for i in range(100)]; random.shuffle(list_)")
15.062835216522217
```
So the list-to-dict conversion is killing any benefit we get from having the O(1) lookups. But just to prove that the dict is really fast if we can avoid doing the conversion more than once:
```
>>> t('dict_[1]', "import random;list_=[(i,'a') for i in range(10)];random.shuffle(list_);dict_={el[0]:idx for idx,el in enumerate(list_)}")
0.050583839416503906
>>> t('dict_[1]', "import random;list_=[(i,'a') for i in range(100)];random.shuffle(list_);dict_={el[0]:idx for idx,el in enumerate(list_)}")
0.05001211166381836
>>> t('dict_[1]', "import random;list_=[(i,'a') for i in range(1000)];random.shuffle(list_);dict_={el[0]:idx for idx,el in enumerate(list_)}")
0.050894975662231445
``` | Searching a list is O(n). Convert it to a dictionary, then lookups take O(1).
```
>>> list_ = [(1, 'a'), (2, 'b'), (3, 'c')]
>>> dict(list_)
{1: 'a', 2: 'b', 3: 'c'}
>>> dict((k, v) for v, k in list_)
{'a': 1, 'c': 3, 'b': 2}
```
If you want the original index you could enumerate it:
```
>>> dict((kv[0], (i, kv[1])) for i, kv in enumerate(list_))
{1: (0, 'a'), 2: (1, 'b'), 3: (2, 'c')}
>> dict((kv[1], (i, kv[0])) for i, kv in enumerate(list_))
{'a': (0, 1), 'c': (2, 3), 'b': (1, 2)}
``` | 596 |
36,584,975 | I've a little problem with my code.
I tried to rewrite code from python to java.
In Python it's:
```
data = bytearray(filesize)
f.readinto(data)
```
Then I tried to write it in java like this:
```
try {
data = Files.readAllBytes(file.toPath());
} catch (IOException ex) {
Logger.getLogger(Encrypter.class.getName()).log(Level.SEVERE, null, ex);
}
for(int index : data) {
data[index] = (byte) ((byte) Math.pow(data[index], genfun((fileSize), index)) & 0xFF);
}
```
Everything seems to be good for me, but when I run it there is a java.lang.ArrayIndexOutOfBoundsException: -77
Has anyone have a clue or can rewrite it better? | 2016/04/12 | [
"https://Stackoverflow.com/questions/36584975",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6195753/"
] | Since `@metrics` is an array, it doesn't look like you're calling any code on your model at all so your model code isn't actually doing anything.
This code in your controller will generate the output you're looking for:
```
CSV.generate do |csv|
  @metrics.each { |item| csv << [item] }
end
``` | This is just a guess, but try formatting `@metrics` as an array of arrays: so each element of `@metrics` is its own array. It seems likely that `to_csv` treats an array like a row, so you need an array of arrays to generate new lines.
```
[["Group Name,1"], ["25"], ["44,2,5"]]
```
**UPDATE**
Looking at your code again, `@model` is not an instance of any model. It is simply an array. When you call `to_csv` on it, it is not reading any methods referenced in your model. I'm guessing that ruby's built in `Array` object has a `to_csv` method baked in which is being called and explains why you aren't getting any errors. @Anthony E has correctly said this in his answer. (though I suspect that my answer will also work). | 604 |
31,767,709 | What's a good command from term to render all images in a dir into one browser window?
Looking for something like this:
`python -m SimpleHTTPServer 8080`
But instead of a list ...
... Would like to see **all the images rendered in a single browser window**, just flowed naturally, at natural dimensions, just scroll down for how many images there are to see them all in their natural rendered state. | 2015/08/02 | [
"https://Stackoverflow.com/questions/31767709",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1618304/"
] | I found a perl CGI script to do this:
```
#!/usr/bin/perl -wT
# myscript.pl
use strict;
use CGI;
use Image::Size;
my $q = new CGI;
my $imageDir = "./";
my @images;
opendir DIR, "$imageDir" or die "Can't open $imageDir $!";
@images = grep { /\.(?:png|gif|jpg)$/i } readdir DIR;
# @images = grep { /\.(?:png|gif|jpg|webm|web|mp4|svg)$/i } readdir DIR;)
closedir DIR;
print $q->header("text/html"),
      $q->start_html("Images in the directory you specified."),
      $q->h1("Images in the directory you specified.");

foreach my $image (@images) {
    my ($width, $height) = imgsize("$image");
    print $q->a({-href=>$image},
                $q->img({-src=>$image,
                         -width=>$width,
                         -height=>$height})
    );
}
print $q->end_html;
```
to run on MacOS you'll need to install these modules like this:
`cpan CGI`
`cpan Image::Size`
Put the script in the directory that contains the images you want to preview.
β¦then say `perl -wT myscript.pl > output.html`
Open the generated `output.html` to see all the images in a single browser window at their natural dimensions.
Related to this question and answer: [How to run this simple Perl CGI script on Mac from terminal?](https://stackoverflow.com/questions/61927403/how-to-run-this-simple-perl-cgi-script-on-mac-from-terminal) | This is quite easy, you can program something like this in a couple of minutes.
Just create an array of all the images in ./, create a var s = '', append an `<img src='...'>` tag to it for each image in ./, and send it to the web browser from the server -> google is your friend. | 605 |
41,708,458 | I have many bash scripts to help set my current session environment variables. I need the env variables set so I can use the subprocess module to run commands in my python scripts. This is how I execute the bash scripts:
```
. ./file1.sh
```
Below is the beginning of the bash script:
```
echo "Setting Environment Variable..."
export HORCMINST=99
echo $HORCMINST
...
```
Is there a way to call these bash scripts from a python script or do something similar within a python script? | 2017/01/17 | [
"https://Stackoverflow.com/questions/41708458",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7259469/"
] | ### Using `shell=True` With Your Existing Script
First, in terms of the *very simplest thing* -- if you're using `shell=True`, you can tell the shell that starts to run the contents of your preexisting script unmodified.
That is to say -- if you were initially doing this:
```
subprocess.Popen(['your-command', 'arg1', 'arg2'])
```
...then you can do the following to execute that same command, with almost the same security guarantees (the only additional vulnerabilities, so long as the contents of `file1.sh` are trusted, are to out-of-band issues such as shellshock):
```
# this has the security of passing explicit out-of-band args
# but sources your script before the out-of-process command
subprocess.Popen(['. "$1"; shift; exec "$@"', "_", "./file1.sh",
                  "your-command", "arg1", "arg2"], shell=True)
```
---
### Using `/proc/self/environ` to export environment variables in a NUL-delimited stream
The ideal thing to do is to export your environment variables in an unambiguous form -- a NUL-delimited stream is ideal -- and then parse that stream (which is in a very unambiguous format) in Python.
Assuming Linux, you can export the complete set of environment variables as follows:
```
# copy all our environment variables, in a NUL-delimited stream, to myvars.environ
cat </proc/self/environ >myvars.environ
```
...or you can export a specific set of variables by hand:
```
for varname in HORCMINST PATH; do
    printf '%s=%s\0' "$varname" "${!varname}"
done >myvars.environ
```
---
### Reading and parsing a NUL-delimited stream in Python
Then you just need to read and parse them:
```
#!/usr/bin/env python
env = {}
for var_def in open('myvars.environ', 'r').read().split('\0'):
    if not var_def:
        continue  # skip the empty string after the trailing NUL
    (key, value) = var_def.split('=', 1)
    env[key] = value
import subprocess
subprocess.Popen(['your-command', 'arg1', 'arg2'], env=env)
```
You could also immediately apply those variables by running `os.environ[key]=value`.
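For instance, a small sketch building on the parsing loop above:
```
import os

for var_def in open('myvars.environ', 'r').read().split('\0'):
    if var_def:
        key, value = var_def.split('=', 1)
        os.environ[key] = value  # visible to this process and its children
```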
---
### Reading and parsing a NUL-delimited stream in bash
Incidentally, that same format is also easy to parse in bash:
```
while IFS= read -r -d '' var_def; do
    key=${var_def%%=*}
    value=${var_def#*=}
    printf -v "$key" '%s' "$value"
    export "$key"
done <myvars.environ
# ...put the rest of your bash script here
```
---
Now, *why* a NUL-delimited stream? Because environment variables are C strings -- unlike Python strings, they can't contain NUL. As such, NUL is the one and only character that can be safely used to delimit them.
For instance, someone who tried to use newlines could be stymied by an environment variable that *contained* a literal newline -- and if someone is, say, embedding a short Python script inside an environment variable, that's a very plausible event! | You should consider the Python builtin `os` [module](https://docs.python.org/2/library/os.html). The attribute
`os.environ` is a dictionary of environment variables that you can *read*, e.g.
```
import os
os.environ["USER"]
```
You cannot, however, *write* bash environment variables from the child process (see e.g., [How to use export with Python on Linux](https://stackoverflow.com/questions/1506010/how-to-use-export-with-python-on-linux)). | 606 |
58,225,904 | I have a multiline string in python that looks like this
```
"""1234 dog list some words 1432 cat line 2 1789 cat line3 1348 dog line 4 1678 dog line 5 1733 fish line 6 1093 cat more words"""
```
I want to be able to group specific lines by the animals in python. So my output would look like
```
dog
1234 dog list some words
1348 dog line 4
1678 dog line 5
cat
1432 cat line 2
1789 cat line3
1093 cat more words
fish
1733 fish line 6
```
So far I know that I need to split the text by each line
```
def parser(txt):
    for line in txt.splitlines():
        print(line)
```
But I'm not sure how to continue. How would I group each line with an animal? | 2019/10/03 | [
"https://Stackoverflow.com/questions/58225904",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9476376/"
] | >
> Or maybe there is a simpler way of achieving this?
>
>
>
Consider: an *option would be to have a function for each type* that is called by *the same function*.
```
void testVariableInput_int(const int *a, const int *b, int *out, int m) {
    while (m > 0) {
        m--;
        out[m] = a[m] + b[m];
    }
}

// Like-wise for the other 2
void testVariableInput_float(const float *a, const float *b, float *out, int m) {...}
void testVariableInput_double(const double *a, const double *b, double *out, int m){...}

void testVariableInput(void *a, void *b, void *out, int m, int type) {
    switch (type) {
        case 1 : testVariableInput_int(a, b, out, m); break;
        case 2 : testVariableInput_float(a, b, out, m); break;
        case 3 : testVariableInput_double(a, b, out, m); break;
    }
}
```
Sample use
```
float a[] = {1, 2, 3};
float b[] = {4, 5, 6};
float c[] = {0, 0, 0};
#define N (sizeof c/sizeof c[0])
#define TYPE_FLOAT 2
testVariableInput(a, b, c, N, TYPE_FLOAT);
```
In C, drop unneeded casting by taking advantage that a `void *` converts to any object pointer without a cast as well as any object pointer converts to a `void *` without a cast too.
>
> Advanced
>
>
>
Research `_Generic` to avoid the need for `int type`.
Untested sample code:
```
#define testVariableInput(a, b, c) _Generic(*(a), \
    double: testVariableInput_double, \
    float: testVariableInput_float, \
    int: testVariableInput_int, \
    default: testVariableInput_TBD \
)((a), (b), (c), sizeof (a)/sizeof *(a))
float a[] = {1, 2, 3};
float b[] = {4, 5, 6};
float c[] = {0, 0, 0};
testVariableInput(a, b, c);
```
`_Generic` is a bit tricky to use. For OP I recommend sticking with the non-`_Generic` approach. | >
> Or maybe there is a simpler way of achieving this?
>
>
>
I like function pointers. Here we can pass a function pointer that adds two elements. That way we can separate the logic of the function from the abstraction that handles the types.
```
#include <stdlib.h>
#include <stdio.h>

void add_floats(const void *a, const void *b, void *res){
    *(float*)res = *(const float*)a + *(const float*)b;
}

void add_ints(const void *a, const void *b, void *res) {
    *(int*)res = *(const int*)a + *(const int*)b;
}

void add_doubles(const void *a, const void *b, void *res) {
    *(double*)res = *(const double*)a + *(const double*)b;
}

void testVariableInput(const void *a, const void *b, void *out,
                       // arguments like for qsort
                       size_t nmemb, size_t size,
                       // the function that adds two elements
                       void (*add)(const void *a, const void *b, void *res)) {
    // we cast all pointers to char to increment them properly
    const char *ca = a;
    const char *cb = b;
    char *cout = out;
    for (size_t i = 0; i < nmemb; ++i) {
        add(ca, cb, cout);
        ca += size;
        cb += size;
        cout += size;
    }
}

#define testVariableInput_g(a, b, out, nmemb) \
    testVariableInput((a), (b), (out), (nmemb), sizeof(*(out)), \
        _Generic((out), float *: add_floats, int *: add_ints, double *: add_doubles));

int main() {
    float a[] = {1, 2, 3};
    float b[] = {4, 5, 6};
    float c[] = {0, 0, 0};
    testVariableInput(a, b, c, 3, sizeof(float), add_floats);
    testVariableInput_g(a, b, c, 3);
}
```
With the help of \_Generic, we can also automagically infer what function callback to pass to the function for a limited number of types. It's also easy to add new, custom types to the function without changing its logic. | 608 |
64,771,870 | I am using a colab pro TPU instance for the purpose of patch image classification.
I'm using tensorflow version 2.3.0.
When calling model.fit I get the following error: `InvalidArgumentError: Unable to find the relevant tensor remote_handle: Op ID: 14738, Output num: 0` with the following trace:
```
--------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-20-5fd2ec1ce2f9> in <module>()
15 steps_per_epoch=STEPS_PER_EPOCH,
16 validation_data=dev_ds,
---> 17 validation_steps=VALIDATION_STEPS
18 )
6 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
106 def _method_wrapper(self, *args, **kwargs):
107 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
--> 108 return method(self, *args, **kwargs)
109
110 # Running inside `run_distribute_coordinator` already.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1084 data_handler._initial_epoch = ( # pylint: disable=protected-access
1085 self._maybe_load_initial_epoch_from_ckpt(initial_epoch))
-> 1086 for epoch, iterator in data_handler.enumerate_epochs():
1087 self.reset_metrics()
1088 callbacks.on_epoch_begin(epoch)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py in enumerate_epochs(self)
1140 if self._insufficient_data: # Set by `catch_stop_iteration`.
1141 break
-> 1142 if self._adapter.should_recreate_iterator():
1143 data_iterator = iter(self._dataset)
1144 yield epoch, data_iterator
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py in should_recreate_iterator(self)
725 # each epoch.
726 return (self._user_steps is None or
--> 727 cardinality.cardinality(self._dataset).numpy() == self._user_steps)
728
729 def _validate_args(self, y, sample_weights, steps):
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in numpy(self)
1061 """
1062 # TODO(slebedev): Consider avoiding a copy for non-CPU or remote tensors.
-> 1063 maybe_arr = self._numpy() # pylint: disable=protected-access
1064 return maybe_arr.copy() if isinstance(maybe_arr, np.ndarray) else maybe_arr
1065
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in _numpy(self)
1029 return self._numpy_internal()
1030 except core._NotOkStatusException as e: # pylint: disable=protected-access
-> 1031 six.raise_from(core._status_to_exception(e.code, e.message), None) # pylint: disable=protected-access
1032
1033 @property
/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)
InvalidArgumentError: Unable to find the relevant tensor remote_handle: Op ID: 14738, Output num: 0
```
I have two dataset zip files containing more than 300,000 training and fewer than 100,000 validation examples, which I download from my Google Drive using !gdown and unzip on the Colab VM. For the data pipeline I use the tf.data.Dataset API, feed it a list of filepaths, and then use the .map method to perform the image fetching. **Please keep in mind that my training dataset can't fit into memory.**
Here is the code for creating Dataset:
```
train_dir = '/content/content/Data/train'
dev_dir = '/content/content/Data/dev'
def create_dataset(dir, label_dic, is_training=True):
    filepaths = list(tf.data.Dataset.list_files(dir + '/*.jpg'))
    labels = []
    for f in filepaths:
        ind = f.numpy().decode().split('/')[-1].split('.')[0]
        labels.append(label_dic[ind])
    ds = tf.data.Dataset.from_tensor_slices((filepaths, labels))
    ds = ds.map(load_images, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    ds = ds.cache()
    if is_training:
        ds = ds.shuffle(len(filepaths), reshuffle_each_iteration=True)
        ds = ds.repeat(EPOCHS)
    ds = ds.batch(BATCH_SIZE)
    ds = ds.prefetch(tf.data.experimental.AUTOTUNE)
    return ds

train_ds = create_dataset(train_dir, train_label)
dev_ds = create_dataset(dev_dir, dev_label, False)
```
And here is the code for creating and compiling my model and fitting the datasets, I use a keras custom model with VGG16 backend:
```
def create_model(input_shape, batch_size):
    VGG16 = keras.applications.VGG16(include_top=False, input_shape=input_shape, weights='imagenet')
    for layer in VGG16.layers:
        layer.trainable = False
    input_layer = keras.Input(shape=input_shape, batch_size=batch_size)
    VGG_out = VGG16(input_layer)
    x = Flatten(name='flatten', input_shape=(512,8,8))(VGG_out)
    x = Dense(256, activation='relu', name='fc1')(x)
    x = Dropout(0.5)(x)
    x = Dense(1, activation='sigmoid', name='fc2')(x)
    model = Model(input_layer, x)
    model.summary()
    return model

with strategy.scope():
    model = create_model(INPUT_SHAPE, BATCH_SIZE)
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                  metrics=['accuracy'])

model.fit(train_ds,
          epochs=5,
          steps_per_epoch=STEPS_PER_EPOCH,
          validation_data=dev_ds,
          validation_steps=VALIDATION_STEPS
          )
```
**For TPU initialization and strategy** I use `strategy = tf.distribute.TPUStrategy(resolver)`.
Initialization code shown below:
```
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
print("All devices: ", tf.config.list_logical_devices('TPU'))
```
a copy of the whole notebook with outputs can be reached at: [Colab Ipython Notebook](https://github.com/Pooya448/Tumor_Segmentation/blob/main/Patch_Classification.ipynb) | 2020/11/10 | [
"https://Stackoverflow.com/questions/64771870",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8777119/"
] | @Pooya448
I know this is quite late, but this may be useful for anyone stuck here.
Following is the function I use to connect to TPUs.
```py
def connect_to_tpu(tpu_address: str = None):
    if tpu_address is not None:  # When using GCP
        cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
            tpu=tpu_address)
        if tpu_address not in ("", "local"):
            tf.config.experimental_connect_to_cluster(cluster_resolver)
            tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
        strategy = tf.distribute.experimental.TPUStrategy(cluster_resolver)
        print("Running on TPU ", cluster_resolver.master())
        print("REPLICAS: ", strategy.num_replicas_in_sync)
        return cluster_resolver, strategy
    else:  # When using Colab or Kaggle
        try:
            cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver.connect()
            strategy = tf.distribute.experimental.TPUStrategy(cluster_resolver)
            print("Running on TPU ", cluster_resolver.master())
            print("REPLICAS: ", strategy.num_replicas_in_sync)
            return cluster_resolver, strategy
        except:
            print("WARNING: No TPU detected.")
            mirrored_strategy = tf.distribute.MirroredStrategy()
            return None, mirrored_strategy
``` | I actually tried all the methods suggested on GitHub and Stack Overflow; none of them worked for me. What worked was creating a new notebook, connecting it to the TPU, and training the model there. It worked fine, so maybe this is related to a problem with the original notebook at the time we created it. | 609 |
32,017,621 | I would like to connect and receive http response from a specific web site link.
I have some Python code:
```
import urllib.request
import os,sys,re,datetime
fp = urllib.request.urlopen("http://www.python.org")
mybytes = fp.read()
mystr = mybytes.decode(encoding=sys.stdout.encoding)
fp.close()
```
when I pass the response as a parameter to:
`BeautifulSoup(str(mystr), 'html.parser')`
to get the cleaned html text, I got the following error:
```
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u25bc' in position 1139: character maps to <undefined>.
```
The question is: how can I solve this problem?
**complete code :**
```
import urllib.request
import os,sys,re,datetime
fp = urllib.request.urlopen("http://www.python.org")
mybytes = fp.read()
mystr = mybytes.decode(encoding=sys.stdout.encoding)
fp.close()
from bs4 import BeautifulSoup
soup = BeautifulSoup(str(mystr), 'html.parser')
mystr = soup;
print(mystr.get_text())
``` | 2015/08/14 | [
"https://Stackoverflow.com/questions/32017621",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5228214/"
] | first of all: <https://docs.python.org/2/tutorial/classes.html#inheritance>
At any rate...
```
GParent.testmethod(self) <-- calling a method before it is defined

class GParent(): <-- always inherit object on your base class to ensure you are using new style classes
    def testmethod(self):
        print "This is test method"

class Parent(): <-- not inheriting anything
    def testmethod(self): <-- if you were inheriting GParent you would be overriding the method that is defined in GParent here.
        print "This is test method"

class Child(Parent):
    def __init__(self):
        print "This is init method"
        GParent.testmethod(self) <-- if you want to call the method you are inheriting you would use self.testmethod()

c = Child()
```
Take a look at this code and run it, maybe it will help you out.
```
from __future__ import print_function #so we can use python 3 print function

class GParent(object):
    def gparent_testmethod(self):
        print("Grandparent test method ")

class Parent(GParent):
    def parent_testmethod(self): #
        print("Parent test method")

class Child(Parent):
    def child_testmethod(self):
        print("This is the child test method")

c = Child()
c.gparent_testmethod()
c.parent_testmethod()
c.child_testmethod()
``` | You cannot call GParent's `testmethod` without an instance of `GParent` as its first argument.
**Inheritance**
```
class GParent(object):
    def testmethod(self):
        print "I'm a grandpa"

class Parent(GParent):
    # implicitly inherit __init__()
    # inherit and override testmethod()
    def testmethod(self):
        print "I'm a papa"

class Child(Parent):
    def __init__(self):
        super(Child, self).__init__()
        # You can only call testmethod with an instance of Child
        # though technically it is calling the parent's up the chain
        self.testmethod()

    # inherit parent's testmethod implicitly

c = Child() # print "I'm a papa"
```
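(In modern Python the idiomatic way to reach a parent implementation explicitly is `super()`; a small sketch, not part of the original answer:)
```
class Parent(object):
    def testmethod(self):
        print("I'm a papa")

class Child(Parent):
    def testmethod(self):
        super(Child, self).testmethod()  # explicitly invoke the parent version
        print("I'm a son")

Child().testmethod()  # prints both lines
```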
However, two ways of calling a parent's method explicitly are through composition or a class method:
**Composition**
```
class Parent(object):
    def testmethod(self):
        print "I'm a papa"

class Child(object):
    def __init__(self):
        self.parent = Parent()
        # call own's testmethod
        self.testmethod()
        # call parent's method
        self.parentmethod()

    def parentmethod(self):
        self.parent.testmethod()

    def testmethod(self):
        print "I'm a son"

c = Child()
```
**Class method**
```
class Parent(object):
    @classmethod
    def testmethod(cls):
        print "I'm a papa"

class Child(object):
    def __init__(self):
        # call own's testmethod
        self.testmethod()
        # call parent's method
        Parent.testmethod()

    def testmethod(self):
        print "I'm a son"

c = Child()
```
It is generally advised to use composition when dealing with multiple inheritance, since inheritance creates a dependency on the parent class. | 610 |
56,711,890 | If I had a function that had three or four optional keyword arguments is it best to use \*\*kwargs or to specify them in the function definition?
I feel as
`def foo(required, option1=False, option2=False, option3=True)`
is much more clumsy looking than
`def foo(required, **kwargs)`.
However if I need to use these keywords as conditionals and they don't exist I will have KeyErrors being thrown and I feel like checking for the keys before each conditional is a bit messy.
```py
def foo(required, **kwargs):
    print(required)
    if 'true' in kwargs and kwargs['true']:
        print(kwargs['true'])

foo('test', true='True')
foo('test2')
```
vs
```py
def foo(required, true=None):
    print(required)
    if true:
        print(true)

foo('test', true='True')
foo('test2')
```
I am wondering what the most pythonic way is. I've got a function that I am working on that depending on the parameters passed will return different values so I am wondering the best way to handle it. It works now, but I wonder if there is a better and more pythonic way of handling it. | 2019/06/22 | [
"https://Stackoverflow.com/questions/56711890",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10206378/"
] | If the function is only using the parameters in its own operation, you should list them all explicitly. This will allow Python to detect if an invalid argument was provided in a call to your function.
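For example, a quick illustrative sketch of that detection (`opton1` is a deliberate typo):
```
def explicit(required, option1=False):
    return required

def dynamic(required, **kwargs):
    return required

try:
    explicit('test', opton1=True)   # misspelled keyword
except TypeError as e:
    print(e)                        # Python catches the typo for you

dynamic('test', opton1=True)        # the same typo passes silently
```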
You use `**kwargs` when you need to accept dynamic parameters, often because you're passing them along to other functions and you want your function to accept any arguments that the other function needs, e.g. `other_func(**kwargs)` | One easy way to pass in several optional parameters while keeping your function definition clean is to use a dictionary that contains all the parameters. That way your function becomes
```py
def foo(required, params):
    print(required)
    if 'true' in params and params['true']:
        print(params['true'])
```
You really want to use `**kwargs` if your parameters can be anything and you don't really care, such as for a decorator function. If you're actually going to use the parameters in the function, you should specify them explicitly. | 611 |
26,625,845 | I work on a project in which I need a python web server. This project is hosted on Amazon EC2 (ubuntu).
I have made two unsuccessful attempts so far:
1. run `python -m SimpleHTTPServer 8080`. It works if I launch a browser on the EC2 instance and head to localhost:8080 or <*ec2-public-IP*>:8080. However I can't access the server from a browser on a remote machine (using <*ec2-public-IP*>:8080).
2. create a python class which allows me to specify both the IP address and port to serve files. Same problem as 1.
There are several questions on SO concerning Python web server on EC2, but none seems to answer my question: what should I do in order to access the python web server remotely ?
One more point: I don't want to use a Python web framework (Django or anything else): I'll use the web server to build a kind of REST API, not for serving HTML content. | 2014/10/29 | [
"https://Stackoverflow.com/questions/26625845",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3592547/"
] | You should open the 8080 port and lift the IP limitation in the security groups, such as:
All TCP TCP 0 - 65535 0.0.0.0/0
the last item means this server will accept every request from any IP and port. | You possibly need to configure `IAM` in `AWS`.
`AWS` sets security permissions that need to open up the port; otherwise only `localhost` can reach your web service.
[aws link](http://aws.amazon.com/) | 612 |
51,733,698 | I have a program that right now grabs data like temperature and loads using a powershell script and the WMI. It outputs the data as a JSON file. Now let me preface this by saying this is my first time ever working with JSON and I'm not very familiar with the JSON python library. Here is the code to my program:
```
import subprocess
import json
p = subprocess.Popen(["C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\powershell.exe", ". \"./TestScript\";", "&NSV"], stdout=subprocess.PIPE)
(output, err) = p.communicate()
data = json.loads(output)
for mNull in data:
    del mNull['Scope']
    del mNull['Path']
    del mNull['Options']
    del mNull['ClassPath']
    del mNull['Properties']
    del mNull['SystemProperties']
    del mNull['Qualifiers']
    del mNull['Site']
    del mNull['Container']
    del mNull['PSComputerName']
    del mNull['__GENUS']
    del mNull['__CLASS']
    del mNull['__SUPERCLASS']
    del mNull['__DYNASTY']
    del mNull['__RELPATH']
    del mNull['__PROPERTY_COUNT']
    del mNull['__DERIVATION']
    del mNull['__SERVER']
    del mNull['__NAMESPACE']
    del mNull['__PATH']
fdata = json.dumps(data,indent=2)
print(fdata)
```
Now here is the resulting JSON:
```
[
{
"Name": "Memory",
"SensorType": "Load",
"Value": 53.3276978
},
{
"Name": "CPU Core #2",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "Used Space",
"SensorType": "Load",
"Value": 93.12801
},
{
"Name": "CPU Core #1",
"SensorType": "Temperature",
"Value": 66
},
{
"Name": "CPU DRAM",
"SensorType": "Power",
"Value": 1.05141532
},
{
"Name": "CPU Core #2",
"SensorType": "Load",
"Value": 60.15625
},
{
"Name": "CPU Package",
"SensorType": "Power",
"Value": 15.2162886
},
{
"Name": "Bus Speed",
"SensorType": "Clock",
"Value": 100.000031
},
{
"Name": "CPU Total",
"SensorType": "Load",
"Value": 57.421875
},
{
"Name": "CPU Package",
"SensorType": "Temperature",
"Value": 69
},
{
"Name": "CPU Core #2",
"SensorType": "Clock",
"Value": 2700.00073
},
{
"Name": "Temperature",
"SensorType": "Temperature",
"Value": 41
},
{
"Name": "Used Memory",
"SensorType": "Data",
"Value": 4.215393
},
{
"Name": "Available Memory",
"SensorType": "Data",
"Value": 3.68930435
},
{
"Name": "CPU Core #1",
"SensorType": "Clock",
"Value": 3100.001
},
{
"Name": "CPU Cores",
"SensorType": "Power",
"Value": 13.3746643
},
{
"Name": "CPU Graphics",
"SensorType": "Power",
"Value": 0.119861834
},
{
"Name": "CPU Core #1",
"SensorType": "Load",
"Value": 54.6875
}
]
```
As you can see every dictionary in the list has the keys `Name`, `SensorType` and `Value`.
What I want to do is make it so that each list has a "label" equal to the `Name` in each one, so I can call for data from specific entries, one at a time. Once again, I'm kind of a newbie with JSON and its library so I'm not even sure if this sort of thing is possible. Any help would be greatly appreciated! Have a good day! :)
Edit 1:
Here is an example, using the first 2, of what I would like the program to be able to output.
```
[
"Memory":{
"SensorType": "Load",
"Value": 53.3276978
},
"CPU Core #2":{
"SensorType": "Temperature",
"Value": 69
}
]
```
Once again, I don't even know if this is valid JSON, but I want it to do something at least similar to that so I can call, for example, `print(data["Memory"]["Value"])` and get back `53.3276978`.
Edit 2:
It did just occur to me that there are some names with multiple sensor types, for example, `"CPU Core #1"` and `"CPU Core #2"` both have `"Tempurature"`, `"Load"`, and `"Clock"`. Using the above example could cause some conflicts so is there a way we could account for that? | 2018/08/07 | [
"https://Stackoverflow.com/questions/51733698",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9801535/"
] | You can build a new dictionary in the shape you want like this:
```
...
data = {
element["Name"]: {
key: value for key, value in element.items() if key != "Name"
}
for element in json.loads(output)
}
fdata = json.dumps(data, indent=4)
...
```
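Edit 2 raises names that repeat across sensor types; a hedged variation that nests by name and then by sensor type avoids that clash:
```
data = {}
for element in json.loads(output):
    # nest by name, then by sensor type, so repeated names don't collide
    data.setdefault(element["Name"], {})[element["SensorType"]] = element["Value"]

print(data["CPU Core #1"]["Temperature"])  # 66
```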
Result of the original snippet:
```
{
"Memory": {
"SensorType": "Load",
"Value": 53.3276978
},
"CPU Core #2": {
"SensorType": "Clock",
"Value": 2700.00073
},
(and so on)
}
``` | ```
x="""[
{
"Name": "Memory 1",
"SensorType": "Load",
"Value": 53.3276978
},
{
"Name": "CPU Core #2",
"SensorType": "Load",
"Value": 53.3276978
}]"""
json_obj=json.loads(x)
new_list=[]
for item in json_obj:
name=item.pop('Name')
new_list.append({name:item})
print(json.dumps(new_list,indent=4))
```
Output
```
[
{
"Memory 1": {
"SensorType": "Load",
"Value": 53.3276978
}
},
{
"CPU Core #2": {
"SensorType": "Load",
"Value": 53.3276978
}
}
]
``` | 613 |
24,888,691 | Well, I finally got cocos2d-x into the IDE, and now I can make minor changes like change the label text.
But when trying to add a sprite, the app crashes on my phone (Galaxy Ace 2), and I can't make sense of the debug output.
I followed [THIS](http://youtu.be/2LI1IrRp_0w) video to set up my IDE, and I've literally just gone to add a sprite in the template project...
Could someone help me fix this please:
```
07-22 13:22:32.310: D/PhoneWindow(22070): couldn't save which view has focus because the focused view org.cocos2dx.lib.Cocos2dxGLSurfaceView@405240c8 has no id.
07-22 13:22:32.930: V/SurfaceView(22070): org.cocos2dx.lib.Cocos2dxGLSurfaceView@405240c8 got app visibiltiy is changed: false
07-22 13:22:32.930: I/GLThread(22070): noticed surfaceView surface lost tid=12
07-22 13:22:32.930: W/EglHelper(22070): destroySurface() tid=12
07-22 13:22:32.960: D/CLIPBOARD(22070): Hide Clipboard dialog at Starting input: finished by someone else... !
07-22 13:23:05.190: W/dalvikvm(22133): threadid=1: thread exiting with uncaught exception (group=0x4001e578)
07-22 13:23:05.190: E/AndroidRuntime(22133): FATAL EXCEPTION: main
07-22 13:23:05.190: E/AndroidRuntime(22133): java.lang.UnsatisfiedLinkError: Couldn't load cocos2dcpp: findLibrary returned null
07-22 13:23:05.190: E/AndroidRuntime(22133): at java.lang.Runtime.loadLibrary(Runtime.java:429)
07-22 13:23:05.190: E/AndroidRuntime(22133): at java.lang.System.loadLibrary(System.java:554)
07-22 13:23:05.190: E/AndroidRuntime(22133): at org.cocos2dx.lib.Cocos2dxActivity.onLoadNativeLibraries(Cocos2dxActivity.java:66)
07-22 13:23:05.190: E/AndroidRuntime(22133): at org.cocos2dx.lib.Cocos2dxActivity.onCreate(Cocos2dxActivity.java:80)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1050)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1615)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1667)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.app.ActivityThread.access$1500(ActivityThread.java:117)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:935)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.os.Handler.dispatchMessage(Handler.java:99)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.os.Looper.loop(Looper.java:130)
07-22 13:23:05.190: E/AndroidRuntime(22133): at android.app.ActivityThread.main(ActivityThread.java:3691)
07-22 13:23:05.190: E/AndroidRuntime(22133): at java.lang.reflect.Method.invokeNative(Native Method)
07-22 13:23:05.190: E/AndroidRuntime(22133): at java.lang.reflect.Method.invoke(Method.java:507)
07-22 13:23:05.190: E/AndroidRuntime(22133): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:912)
07-22 13:23:05.190: E/AndroidRuntime(22133): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:670)
07-22 13:23:05.190: E/AndroidRuntime(22133): at dalvik.system.NativeStart.main(Native Method)
07-22 13:23:07.200: I/dalvikvm(22133): threadid=4: reacting to signal 3
07-22 13:23:07.200: I/dalvikvm(22133): Wrote stack traces to '/data/anr/traces.txt'
```
Thanks
---
P.S. `Cocos2dxActivity.java` has errors on line 66 & 80.
Line 66 is `System.loadLibrary(libName);` and line 80 is `onLoadNativeLibraries();` In line 65 it declares lib name as `String libName = bundle.getString("android.app.lib_name");`
Also, I can see in the Manifest that the key information is:
```
<!-- Tell Cocos2dxActivity the name of our .so -->
<meta-data android:name="android.app.lib_name"
android:value="cocos2dcpp" />
```
I do have the NDK, and I hooked it up in my ./bash\_profile. But I did just notice that the console says:
```
python /Users/damianwilliams/Desktop/KittyKatch/proj.android/build_native.py -b release all
NDK_ROOT not defined. Please define NDK_ROOT in your environment
```
But I know I have it in my bash since my bash profile says:
```
# Add environment variable COCOS_CONSOLE_ROOT for cocos2d-x
export COCOS_CONSOLE_ROOT=/Users/damianwilliams/Desktop/Android-Development-Root/cocos2d-x-3.2rc0/tools/cocos2d-console/bin
export PATH=$COCOS_CONSOLE_ROOT:$PATH
# Add environment variable NDK_ROOT for cocos2d-x
export NDK_ROOT=/Users/damianwilliams/Desktop/Android-Development-Root/android-ndk-r10
export PATH=$NDK_ROOT:$PATH
# Add environment variable ANT_ROOT for cocos2d-x
export ANT_ROOT=/Users/damianwilliams/Desktop/Android-Development-Root/apache-ant-1.9.4/bin
export PATH=$ANT_ROOT:$PATH
```
But I've no idea what to do with that information or if I've built it correctly. | 2014/07/22 | [
"https://Stackoverflow.com/questions/24888691",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3863962/"
] | ```
var result = "\"" + string.Join("\",\"", keys) + "\"";
```
To include a double quote in a string, you escape it with a backslash character, thus "\"" is a string consisting of a single double quote character, and "\", \"" is a string containing a double quote, a comma, a space, and another double quote. | If performance is the key, you can always use a `StringBuilder` to concatenate everything.
[Here's a fiddle](https://dotnetfiddle.net/nptVEH) to see it in action, but the main part can be summarized as:
```
// these look like snails, but they are actually pretty fast
using @_____ = System.Collections.Generic.IEnumerable<object>;
using @______ = System.Func<object, object>;
using @_______ = System.Text.StringBuilder;
public static string GetCsv(object[] input)
{
// use a string builder to make things faster
var @__ = new StringBuilder();
// the rest should be self-explanatory
Func<@_____, @______, @_____>
@____ = (_6,
_2) => _6.Select(_2);
Func<@_____, object> @_3 = _6
=> _6.FirstOrDefault();
Func<@_____, @_____> @_4 = _8
=> _8.Skip(input.Length - 1);
Action<@_______, object> @_ = (_9,
_2) => _9.Append(_2);
Action<@_______>
@___ = _7 =>
{ if (_7.Length > 0) @_(
@__, ",");
}; var @snail =
@____(input, (@_0 =>
{ @___(@__); @_(@__, @"""");
@_(@__, @_0); @_(@__, @"""");
return @__; }));
var @linq = @_4(@snail);
var @void = @_3(@linq);
// get the result
return @__.ToString();
}
``` | 620 |
6,990,760 | I wrapped opencv today with simplecv python interface. After going through the official [SimpleCV Cookbook](http://simplecv.org/doc/cookbook.html) I was able to successfully [Load, Save](http://simplecv.org/doc/cookbook.html#loading-and-saving-images), and [Manipulate](http://simplecv.org/doc/cookbook.html#image-manipulation) images. Thus, I know the library is being loaded properly.
However, under the [Using a Camera, Kinect, or Virtual Camera](http://simplecv.org/doc/cookbook.html#using-a-camera-kinect-or-virtualcamera) heading I was unsuccessful in running some commands. In particular, `mycam = Camera()` worked, but `img = mycam.getImage()` produced the following error:
```
In [35]: img = mycam.getImage().save()
OpenCV Error: Bad argument (Array should be CvMat or IplImage) in cvGetSize, file /home/jordan/OpenCV-2.2.0/modules/core/src/array.cpp, line 1237
---------------------------------------------------------------------------
error Traceback (most recent call last)
/home/simplecv/<ipython console> in <module>()
/usr/local/lib/python2.7/dist-packages/SimpleCV-1.1-py2.7.egg/SimpleCV/Camera.pyc in getImage(self)
332
333 frame = cv.RetrieveFrame(self.capture)
--> 334 newimg = cv.CreateImage(cv.GetSize(frame), cv.IPL_DEPTH_8U, 3)
335 cv.Copy(frame, newimg)
336 return Image(newimg, self)
error: Array should be CvMat or IplImage
```
I'm running Ubuntu Natty on a HP TX2500 tablet. It has a built in webcam, (CyberLink Youcam?) Has anybody seen this error before? I've been all over the web today looking for a solution, but nothing seems to be doing the trick.
**Update 1**: I tested cv.QueryFrame(capture) using the code found here [in a separate Stack Overflow question](https://stackoverflow.com/questions/4929721/opencv-python-grab-frames-from-a-video-file) and it worked; so I've pretty much nailed this down to a webcam issue.
**Update 2**: In fact, I get the exact same errors on a machine that doesn't even have a webcam! It's looking like the TX2500 is not compatible... | 2011/08/09 | [
"https://Stackoverflow.com/questions/6990760",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/568884/"
] | since the error is raised from SimpleCV's Camera.py, you need to debug the getImage() method. If you can edit it:
```
def getImage(self):
if (not self.threaded):
cv.GrabFrame(self.capture)
frame = cv.RetrieveFrame(self.capture)
import pdb # <-- add this line
pdb.set_trace() # <-- add this line
newimg = cv.CreateImage(cv.GetSize(frame), cv.IPL_DEPTH_8U, 3)
cv.Copy(frame, newimg)
return Image(newimg, self)
```
then run your program; it will pause at pdb.set_trace(), where you can inspect the type of frame and try to figure out how to get its size.
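If `frame` comes back as `None` (nothing grabbed from the camera), `cv.GetSize` fails with exactly this error; a hedged guard inside getImage() might look like:
```
cv.GrabFrame(self.capture)
frame = cv.RetrieveFrame(self.capture)
if frame is None:
    # the camera returned nothing -- bail out instead of crashing in cv.GetSize
    return None
```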
Or you can do the capture in your code, and inspect the frame object:
```
mycam = Camera()
cv.GrabFrame(mycam.capture)
frame = cv.RetrieveFrame(mycam.capture)
``` | I'm getting the camera with OpenCV
```
from opencv import cv
from opencv import highgui
from opencv import adaptors
def get_image():
cam = highgui.cvCreateCameraCapture(0)
im = highgui.cvQueryFrame(cam)
# Add the line below if you need it (Ubuntu 8.04+)
#im = opencv.cvGetMat(im)
return im
``` | 622 |
69,383,255 | I am trying to calculate the distance between 2 points in python using this code:
```
import math
class Point(object):
def __init__(self, x, y):
self.x = x
self.y = y
def __repr__(self):
return "Point({0}, {1})".format(self.x, self.y)
def __sub__(self, other):
return Point(self.x - other.x, self.y - other.y) #<-- MODIFIED THIS
def distance(self, other):
p1 = __sub__(Point(self.x , other.x))**2
p2 = __sub__(Point(self.y,other.y))**2
p = math.sqrt(p1,p2)
return p
def dist_result(points):
points = [Point(*point) for point in points]
return [points[0].distance(point) for point in points]
```
but it is returning:
```
NameError: name '__sub__' is not defined
```
can you please show me how to correctly write that function ?
so I am expecting an input of:
```
p1 = (1,1) and p2 = (2,2)
```
and I would like to calculate the distance using:
```
d = |p2 - p1| = sqrt((1-2)^2 + (1-2)^2)
``` | 2021/09/29 | [
"https://Stackoverflow.com/questions/69383255",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16743649/"
] | ```
let newFavorites = favorites;
```
This assigns newFavorites to point to favorites
```
newFavorites.push(newFav);
```
Because newFavorites points to favorites, which is an array in `state`, you can't push anything onto it and have that change render.
What you need to do, is populate a new array `newFavorites` with the content of favorites.
Try
```
const newFavorites = [...favorites];
```
That should work | I would make some changes in your addFavourite function:
```
function addFavorite(name, id) {
  let newFav = { name, id };
  setFavorites([...favorites, newFav]);
}
```
This way, every time you click favourite, you ensure a new array is being created with the spread operator. | 637
40,634,826 | I'm using Swig 3.0.7 to create python 2.7-callable versions of C functions that define constants in this manner:
```c
#define MYCONST 5.0
```
In previous versions of swig these would be available to python transparently:
```py
import mymodule
x = 3. * mymodule.MYCONST
```
But now this generates a message
```none
AttributeError: 'module' object has no attribute 'MYCONST'
```
Functions in 'mymodule' that use the constant internally work as expected.
Interestingly, if I include this line in the Swig directive file mymodule.i,
```c
#define MYCONST 5.0
```
then doing dir(mymodule) returns a list that includes
```
['MYCONST_swigconstant', 'SWIG_PyInstanceMethodNew', (etc.) .... ]
```
typing to the python interpreter
```
mymodule.MYCONST_swigconstant
```
gives
```
<built-in function MYCONST_swigconstant>
```
which offers no obvious way to get at the value.
So my question is, can one make the previous syntax work so that `mymodule.MYCONST` evaluates correctly?
If not, is there a workaround? | 2016/11/16 | [
"https://Stackoverflow.com/questions/40634826",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3263972/"
] | You can use [`split`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html), then cast column `year` to `int` and if necessary add `Q` to column `q`:
```
df = pd.DataFrame({'date':['2015Q1','2015Q2']})
print (df)
date
0 2015Q1
1 2015Q2
df[['year','q']] = df.date.str.split('Q', expand=True)
df.year = df.year.astype(int)
df.q = 'Q' + df.q
print (df)
date year q
0 2015Q1 2015 Q1
1 2015Q2 2015 Q2
```
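A compact equivalent using `assign` and string slicing (assuming the same `YYYYQn` format as above):
```
df = df.assign(year=df.date.str[:4].astype(int),
               q='Q' + df.date.str.split('Q').str[1])
```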
Also you can use [`Period`](http://pandas.pydata.org/pandas-docs/stable/timeseries.html#period):
```
df['date'] = pd.to_datetime(df.date).dt.to_period('Q')
df['year'] = df['date'].dt.year
df['quarter'] = df['date'].dt.quarter
print (df)
date year quarter
0 2015Q1 2015 1
1 2015Q2 2015 2
``` | You could also construct a datetimeIndex and call year and quarter on it.
```
df.index = pd.to_datetime(df.date)
df['year'] = df.index.year
df['quarter'] = df.index.quarter
date year quarter
date
2015-01-01 2015Q1 2015 1
2015-04-01 2015Q2 2015 2
```
Note that you don't even need a dedicated column for year and quarter if you have a datetimeIndex, you could do a groupby like this for example: `df.groupby(df.index.quarter)` | 634 |
51,567,959 | I am sort of new to python. I can open files in Windows, but I am having trouble on Mac. I can open web browsers, but I am unsure how to open other programs or Word documents.
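For reference, macOS hands files to their default application through the system `open` command; a minimal sketch with the standard `subprocess` module (paths are illustrative):
```
import subprocess

subprocess.call(['open', '/Users/me/Documents/report.docx'])   # default app for the file
subprocess.call(['open', '-a', 'TextEdit', '/tmp/notes.txt'])  # force a specific app
```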
Thanks | 2018/07/28 | [
"https://Stackoverflow.com/questions/51567959",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10147138/"
] | use this class `col-md-auto` to make width auto and `d-inline-block` to display column inline block (bootstrap 4)
```
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" rel="stylesheet"/>
<div class="row">
<div class="col-md-auto col-lg-auto d-inline-block">
<label for="name">Company Name</label>
<input id="name" type="text" value="" name="name" style="width:200px">
</div>
<div class="col-md-auto col-lg-auto d-inline-block">
<label for="email">GST Number</label>
<input id="email" type="text" value="" name="email">
</div>
<div class="col-md-auto col-lg-auto d-inline-block" style="">
<label for="email">Branch Address</label>
<input id="email" type="text" value="" name="email" style="width:300px">
</div>
<div class="col-md-auto col-lg-auto d-inline-block" style="">
<label for="email">Tin Number</label>
<input id="email" type="text" value="" name="email" style="width:200px">
</div>
<div class="col-md-auto col-lg-auto d-inline-block" style="">
<label for="email">pin code</label>
<input id="email" type="text" value="" name="email" style="width:100px">
</div>
<div class="col-md-auto col-lg-auto d-inline-block" style="">
<label for="email">Date</label>
<input id="email" type="text" value="" name="email" style="width:100px">
</div>
<div class="col-md-auto col-lg-auto d-inline-block" style="">
<label for="email">code</label>
<input id="email" type="text" value="" name="email" style="width:100px">
</div>
</div>
I think you can see the example below; it may satisfy your need. Also, you can set the col-x-x property to place more than 3 inputs in one row.
[row-col example](https://v3.bootcss.com/components/#input-groups-buttons) | 635
63,627,160 | I am trying to get a student attendance record set up in python. I have most of it figured out. I am stuck on one section: the attendance section. I am trying to use a table format (tksheets) to keep a record of students' names and their attendance. The issue I am having is working with tksheets. I can't seem to get the information from my DB (SQLite3) to populate the columns. I've also tried tktables, and the pandastables. But again I run into the same issue.
I have considered using the Treeview widget to populate the columns with the students' names, and then use entry boxes to add the attendance. The issue is I have to create each entry box and place it individually. I didn't like this plan. Below is the current code I am using.
If anyone could show me how to get the data from the DB and populate the spreadsheet I am using, that would be great. Thanks.
```
def rows(self):
self.grid_columnconfigure(1, weight=1)
self.grid_rowconfigure(1,weight=1)
self.sheet = Sheet(self.aug_tab,
data=[[f'Row{r} Column{c}' for c in range(36)]for r in range(24)],
height=300,
width=900)
self.sheet.enable_bindings(("single",
"drag_select",
"column_drag_and_drop",
"row_drag_and_drop",
"column_select",
"row_select",
"column_width_resize",
"double_click_column_resize",
"row_width_resize",
"column_height_resize",
"arrowkeys",
"row_height_resize",
"double_click_row_resize",
"right_click_popup_menu",
"rc_insert_column",
"rc_delete_column",
"rc_insert_row",
"rc_delete_row",
"copy",
"cut",
"paste",
"delete",
"undo",
"edit_cell"))
self.headers_list = ("Student ID","Ch. First Name","Ch. Last Name","Eng. Name")
self.headers = [f'{c}'for c in self.headers_list]
self.sheet.headers(self.headers)
self.sheet.pack()
print(self.sheet.get_column_data(0,0))
#############DEFINE FUNCTIONS###############################
rows(self)
```
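A hedged sketch of the missing piece, fetching rows with sqlite3 and handing them to the sheet (the database file, `students` table and column names are assumptions; `set_sheet_data` is available in recent tksheet versions):
```
import sqlite3

# inside the class from the question, after the sheet is created
conn = sqlite3.connect('school.db')
cur = conn.cursor()
cur.execute("SELECT student_id, ch_first_name, ch_last_name, eng_name FROM students")
rows = [list(r) for r in cur.fetchall()]   # tksheet expects a list of row lists
conn.close()

self.sheet.set_sheet_data(rows)
```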
[enter image description here](https://i.stack.imgur.com/5fAag.jpg) | 2020/08/28 | [
"https://Stackoverflow.com/questions/63627160",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4049491/"
] | Try Something like this-
```
';SELECT text
FROM notes
WHERE username = 'alice
``` | SQL Injection can be implemented by concatenating the SQL statement with the input parameters. For example, the following statement is vulnerable to SQL Injection:
```
String statement = "SELECT ID FROM USERS WHERE USERNAME = '" + inputUsername + "' AND PASSWORD = '" + hashedPassword + "'";
```
An attacker would enter a username like this:
```
' OR 1=1 Limit 1; --
```
Thus, the executed statement will be:
```
SELECT ID FROM USERS WHERE USERNAME = '' OR 1=1 Limit 1; --' AND PASSWORD = 'Blob'
```
Hence, the password part is commented, and the database engine would return any arbitrary result which will be acceptable by the application.
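The standard prevention is a parameterized query, where user input stays data and never becomes SQL text; a minimal Python sketch with sqlite3 and an illustrative table:
```
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER, username TEXT, password TEXT)")

username, hashed_password = "' OR 1=1 Limit 1; --", "x"
# the placeholders keep the input as data, so the injection payload is harmless
cur.execute("SELECT id FROM users WHERE username = ? AND password = ?",
            (username, hashed_password))
print(cur.fetchone())  # None -- no row matches the literal string
```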
I found this nice explanation on the free preview of "Introduction to Cybersecurity for Software Developers" course.
<https://www.udemy.com/course/cybersecurity-for-developers-1/>
It also explains how to prevent SQL Injection. | 636 |
63,283,368 | I've got a problem setting up deployment using Cloud Build and a Dockerfile.
My `Dockerfile`:
```
FROM python:3.8
ARG ENV
ARG NUM_WORKERS
ENV PORT=8080
ENV NUM_WORKERS=$NUM_WORKERS
RUN pip install poetry
COPY pyproject.toml poetry.lock ./
RUN poetry config virtualenvs.create false && \
poetry install --no-dev
COPY ./.env.$ENV /workspace/.env
COPY ./app-$ENV.yaml /workspace/app.yaml
COPY . /workspace
ENTRYPOINT ["./entrypoint.sh"]
```
My `cloudbuild.yaml`:
```
steps:
- name: 'gcr.io/cloud-builders/docker'
entrypoint: 'bash'
args:
- '-c'
- |
docker pull gcr.io/$PROJECT_ID/my-repo:$BRANCH_NAME || exit 0
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'-t',
'gcr.io/$PROJECT_ID/my-repo:$BRANCH_NAME',
'--cache-from',
'gcr.io/$PROJECT_ID/my-repo:$BRANCH_NAME',
'--build-arg', 'ENV=develop',
'--build-arg', 'NUM_WORKERS=2',
'.'
]
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/$PROJECT_ID/my-repo:$BRANCH_NAME']
- name: 'gcr.io/$PROJECT_ID/my-repo:$BRANCH_NAME'
id: RUN-LINTERS
entrypoint: sh
args: ['scripts/linters.sh']
- name: gcr.io/cloud-builders/docker
id: START-REDIS
args: ['run', '-d', '--network=cloudbuild', '--name=redisdb', 'redis']
- name: 'gcr.io/$PROJECT_ID/my-repo:$BRANCH_NAME'
id: RUN-TESTS
entrypoint: sh
args: ['scripts/run_tests.sh']
env:
- 'REDIS_HOST=redis://redisdb'
- 'DATASTORE_EMULATOR_HOST=datastore:8081'
waitFor:
- START-REDIS
- START-DATASTORE-EMULATOR
- name: gcr.io/cloud-builders/docker
id: SHUTDOWN-REDIS
args: ['rm', '--force', 'redisdb']
- name: gcr.io/cloud-builders/docker
id: SHUTDOWN-DATASTORE_EMULATOR
args: ['rm', '--force', 'datastore']
- name: 'gcr.io/cloud-builders/gcloud'
id: DEPLOY
args:
- "app"
- "deploy"
- "--image-url"
- 'gcr.io/$PROJECT_ID/my-repo:$BRANCH_NAME'
- "--verbosity=debug"
images: ['gcr.io/$PROJECT_ID/my-repo:$BRANCH_NAME']
timeout: "1000s"
```
The problem is that the copied files `.env` and `app.yaml` are not present in `workspace`.
I don't know why Cloud Build ignores these files from the image: I've printed `ls -a` and seen that the files are copied properly during build, but they disappear during the run-tests stage, and I can't deploy without app.yaml.
Any help please | 2020/08/06 | [
"https://Stackoverflow.com/questions/63283368",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11993534/"
] | Here is a working example of how you would attach the value of a configuration trait to another pallet's storage item.
Pallet 1
--------
Here is `pallet_1` which has the storage item we want to use.
>
> NOTE: This storage is marked `pub` so it is accessible outside the pallet.
>
>
>
```rust
use frame_support::{decl_module, decl_storage};
use frame_system::ensure_signed;
pub trait Trait: frame_system::Trait {}
decl_storage! {
trait Store for Module<T: Trait> as TemplateModule {
pub MyStorage: u32;
}
}
decl_module! {
pub struct Module<T: Trait> for enum Call where origin: T::Origin {
#[weight = 0]
pub fn set_storage(origin, value: u32) {
let _ = ensure_signed(origin)?;
MyStorage::put(value);
}
}
}
```
Pallet 2
--------
Here is `pallet_2` which has a configuration trait that we want to populate with the storage item from `pallet_1`:
```rust
use frame_support::{decl_module, dispatch, traits::Get};
use frame_system::ensure_signed;
pub trait Trait: frame_system::Trait {
type MyConfig: Get<u32>;
}
decl_module! {
pub struct Module<T: Trait> for enum Call where origin: T::Origin {
#[weight = 0]
pub fn do_something(origin) -> dispatch::DispatchResult {
let _ = ensure_signed(origin)?;
let _my_config = T::MyConfig::get();
Ok(())
}
}
}
```
Runtime Configuration
---------------------
These two pallets are very straightforward and work separately. But if we want to connect them, we need to configure our runtime:
```rust
use frame_support::traits::Get;
impl pallet_1::Trait for Runtime {}
pub struct StorageToConfig;
impl Get<u32> for StorageToConfig {
fn get() -> u32 {
return pallet_1::MyStorage::get();
}
}
impl pallet_2::Trait for Runtime {
type MyConfig = StorageToConfig;
}
// We also update the `construct_runtime!`, but that is omitted for this example.
```
Here we have defined a struct `StorageToConfig` which implements the `Get<u32>` trait that is expected by `pallet_2`. This struct tells the runtime when `MyConfig::get()` is called, it should then call `pallet_1::MyStorage::get()` which reads into runtime storage and gets that value.
So now, every call to `T::MyConfig::get()` in `pallet_2` will be a storage read, and will get whatever value is set in `pallet_1`.
Let me know if this helps! | It is actually as creating a trait impl the struct and then in the runtime pass the struct to the receiver (by using the trait), what I did to learn this is to look at all of the pallets that are already there and see how the pass information
for instance this trait in authorship
<https://github.com/paritytech/substrate/blob/640dd1a0a44b6f28af1189f0293ab272ebc9d2eb/frame/authorship/src/lib.rs#L39>
is implemented here
<https://github.com/paritytech/substrate/blob/77819ad119f23a68b7478f3ac88e6c93a1677fc1/frame/aura/src/lib.rs#L148>
and here it is composed (not with aura impl but session)
<https://github.com/paritytech/substrate/blob/549050b7f1740c90855e777daf3f9700750ad7ff/bin/node/runtime/src/lib.rs#L363>
you should also read this [https://doc.rust-lang.org/book/ch10-02-traits.html#:~:text=A%20trait%20tells%20the%20Rust,type%20that%20has%20certain%20behavior](https://doc.rust-lang.org/book/ch10-02-traits.html#:%7E:text=A%20trait%20tells%20the%20Rust,type%20that%20has%20certain%20behavior) | 637 |
30,902,443 | I'm using vincent, a data visualization package. One of the inputs it takes is a path to data.
(from the documentation)
```
`geo_data` needs to be passed as a list of dicts with the following
| format:
| {
| name: data name
| url: path_to_data,
| feature: TopoJSON object set (ex: 'countries')
| }
|
```
I have a topo.json file on my computer, but when I run that, ipython says loading failed.
```
map=r'C:\Users\chungkim271\Desktop\DC housing\dc.json'
geo_data = [{'name': 'DC',
'url': map,
'feature': "collection"}]
vis = vincent.Map(geo_data=geo_data, scale=1000)
vis
```
Do you know if vincent only takes url addresses, and if so, what is the quickest way i can get an url address for this file?
Thanks in advance | 2015/06/17 | [
"https://Stackoverflow.com/questions/30902443",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4682755/"
] | It seems that you're using it in Jupyter Notebook. If no, my reply is irrelevant for your case.
AFAIK, vincent needs this topojson file to be available through web server (so javascript from your browser will be able to download it to build the map). If the topojson file is somewhere in the Jupyter root dir then it's available (and you can provide relative path to it), otherwise it's not.
To determine relative path you can use something like this:
```
import os
relpath = os.path.relpath('abs-path-to-geodata', os.path.abspath(os.path.curdir))
``` | I know that this post is old, hopefully this helps someone. I am not sure what map you are looking for, but here is the URL for the world map
```
world_topo="https://raw.githubusercontent.com/wrobstory/vincent_map_data/master/world-countries.topo.json"
```
and the USA state maps
```
state_topo = "https://raw.githubusercontent.com/wrobstory/vincent_map_data/master/us_states.topo.json"
```
I got this working beautifully, hope this is helpful for someone! | 638 |
17,975,795 | I'm sure this must be simple, but I'm a python noob, so I need some help.
I have a list that looks the following way:
```
foo = [['0.125', '0', 'able'], ['', '0.75', 'unable'], ['0', '0', 'dorsal'], ['0', '0', 'ventral'], ['0', '0', 'acroscopic']]
```
Notice that every word has 1 or 2 numbers to it. I want to substract number 2 from number 1 and then come with a **dictionary** that is: word, number.
Foo would then look something like this:
```
foo = {'able','0.125'},{'unable', '-0.75'}...
```
it tried doing:
```
bar=[]
for a,b,c in foo:
d=float(a)-float(b)
bar.append((c,d))
```
But I got the error:
```
ValueError: could not convert string to float:
``` | 2013/07/31 | [
"https://Stackoverflow.com/questions/17975795",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2001008/"
`''` cannot be converted to a float.
```
bar = []
for a,b,c in foo:
d = float(a or 0) - float(b or 0)
bar.append((c,d))
```
However, that will not make a dictionary. For that you want:
```
bar = {}
for a,b,c in foo:
d = float(a or 0)-float(b or 0)
bar[c] = d
```
Or a shorter way using dictionary comprehensions:
```
bar = {sublist[2]: float(sublist[0] or 0) - float(sublist[1] or 0) for sublist in foo}
``` | Add a condition to check whether the string is empty ('') and, if so, convert it to 0. | 639
21,188,579 | I'm stuck in a exercice in python where I need to convert a DNA sequence into its corresponding amino acids. So far, I have:
```
seq1 = "AATAGGCATAACTTCCTGTTCTGAACAGTTTGA"
for i in range(0, len(seq1), 3):
    print seq1[i:i+3]
```
I need to do this without using dictionaries, and I was going for replace, but it seems it's not advisable either. How can I achieve this?
And it's supposed to give something like this, for exemple:
```
>seq1_1_+
TQSLIVHLIY
>seq1_2_+
LNRSFTDSST
>seq1_3_+
SIADRSLTHLL
```
Update 2: OK, so i had to resort to functions, and as suggested, i have gotten the output i wanted. Now, i have a series of functions, which return a series of aminoacid sequences, and i want to get an output file that looks like this, for exemple:
```
>seq1_1_+
iyyslrs-las-smrlssiv-m
>seq1_2_+
fiirydrs-ladrcgshrssk
>seq1_3_+
llfativas-lidaalidrl
>seq1_1_-
frrsmraasis-lativannkm
>seq1_2_-
lddr-ephrsas-lrs-riin
>seq1_3_-
-tidesridqlasydrse--m
```
For that, i'm using this:
```
for x in f1:
x = x.strip()
if x.count("seq"):
f2.write((x)+("_1_+\n"))
f2.write((x)+("_2_+\n"))
f2.write((x)+("_3_+\n"))
f2.write((x)+("_1_-\n"))
f2.write((x)+("_2_-\n"))
f2.write((x)+("_3_-\n"))
else:
f2.write((translate1(x))+("\n"))
f2.write((translate2(x))+("\n"))
f2.write((translate3(x))+("\n"))
f2.write((translate1neg(x))+("\n"))
f2.write((translate2neg(x))+("\n"))
f2.write((translate3neg(x))+("\n"))
```
But unlike the expected output file suggested, i get this:
```
>seq1_1_+
>seq1_2_+
>seq1_3_+
>seq1_1_-
>seq1_2_-
>seq1_3_-
iyyslrs-las-smrlssiv-m
fiirydrs-ladrcgshrssk
llfativas-lidaalidrl
frrsmraasis-lativannkm
lddr-ephrsas-lrs-riin
-tidesridqlasydrse--m
```
So he's pretty much doing all the seq's first, and all the functions afterwards, so i need to intercalate them, problem is how. | 2014/01/17 | [
"https://Stackoverflow.com/questions/21188579",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2884400/"
] | To translate you need a table of [codons](http://en.wikipedia.org/wiki/DNA_codon_table), so without dictionary or other data structure seems strange.
Maybe you can look into [biopython](http://biopython.org/DIST/docs/tutorial/Tutorial.html#sec26)? And see how they manage it.
You can also translate directly from the coding strand DNA sequence:
```
>>> from Bio.Seq import Seq
>>> from Bio.Alphabet import IUPAC
>>> coding_dna = Seq("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG", IUPAC.unambiguous_dna)
>>> coding_dna
Seq('ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG', IUPACUnambiguousDNA())
>>> coding_dna.translate()
Seq('MAIVMGR*KGAR*', HasStopCodon(IUPACProtein(), '*'))
```
You may also take a look [at the tests](https://github.com/biopython/biopython/blob/f0658115607dacb602de58e2438021d46d3c433b/Tests/test_Seq_objs.py) | You cannot practically do this without either a function or a dictionary. Part 1, converting the sequence into three-character codons, is easy enough as you have already done it.
But Part 2, to convert these into amino acids, you will need to define a mapping, either:
```
mapping = {"NNN": "X", ...}
```
or
```
def mapping(codon):
if codon in ("AGA", "AGG", "CGA", "CGC", "CGG", "CGT"):
return "R"
...
```
or
```
for codon, acid in [("CAA", "Q"), ("CAG", "Q"), ...]:
```
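A runnable toy version of the function approach, covering only a few codons just to show the shape:
```
def mapping(codon):
    # toy table -- only a handful of codons for illustration
    if codon in ("GCT", "GCC", "GCA", "GCG"):
        return "A"
    if codon in ("TGT", "TGC"):
        return "C"
    return "X"  # unknown codon

seq = "GCTTGTGCA"
protein = "".join(mapping(seq[i:i+3]) for i in range(0, len(seq), 3))
print(protein)  # ACA
```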
I would favour the second of these as it has the least duplication (and therefore potential for error). | 643 |
37,986,367 | How I can overcome an issue with conditionals in python? The issue is that it should show certain text according to certain conditional, but if the input was No, it anyway indicates the data of Yes conditional.
```
def main(y_b,c_y):
ans=input('R u Phil?')
if ans=='Yes' or 'yes':
years=y_b-c_y
print('U r',abs(years),'jahre alt')
elif ans=='No' or 'no':
print("How old r u?")
else:
print('Sorry')
main(2012,2016)
``` | 2016/06/23 | [
"https://Stackoverflow.com/questions/37986367",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6492505/"
] | `or` is inclusive. So the `yes` test will always pass because when `ans != 'Yes'` the other condition `yes` has a truthy value.
```
>>> bool('yes')
True
```
You should instead test with:
```
if ans in ('Yes', 'yeah', 'yes'):
# code
elif ans in ('No', 'Nah', 'no'):
# code
else:
# more code
``` | When you write if statements and you have multiple conditionals, you have to write both conditionals and compare them. This is wrong:
```
if ans == 'Yes' or 'yes':
```
and this is ok:
```
if ans == 'Yes' or ans == 'yes':
``` | 648 |
49,021,968 | I have a list of filenames in a directory and I'd like to keep only the latest versions. The list looks like:
`['file1-v1.csv', 'file1-v2.csv', 'file2-v1.txt', ...]`.
I'd like to only keep the newest csv files as per the version (part after `-` in the filename) and the txt files.
The output would be `[''file1-v2.csv', 'file2-v1.txt', ...]`
I have a solution that requires the use of sets but I'm looking for a easy pythonic way to do this. Potentially using `itertools` and `groupby`
**Update: Solution so far**
I've been able to do some preliminary work to get a list like
```
lst = [('file1', 'csv', 'v1','<some data>'), ('file2', 'csv', 'v2','<some data>'), ...]
```
I'd like to group by elements at index `0` and `1` but provide only the tuple with the maximum index `2`.
It may be something like the below:
```
files = list(item for key, group in itertools.groupby(files, lambda x: x[0:2]) for item in group)
# Maximum over 3rd index element in each tuple does not work
files = max(files, key=operator.itemgetter(2))
```
Also, I feel like the below should work but it does not select the maximum properly
```
[max(items, key=operator.itemgetter(2)) for key, items in itertools.groupby(files, key=operator.itemgetter(0, 1))]
``` | 2018/02/28 | [
"https://Stackoverflow.com/questions/49021968",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2771315/"
] | Do this:
```
SELECT * FROM yourTable
WHERE DATE(punch_in_utc_time)=current_date;
```
For testing:
```
SELECT DATE("2018-02-28 09:32:00")=current_date;
```
See [DEMO on SQL Fiddle](http://sqlfiddle.com/#!9/9eecb/17666). | Should be able to do that using Date function, TRUNC timestamp to date then compare with the date field.
```
SELECT DATE("2018-02-28 09:32:00") = "2018-02-28";
```
The above dml will return 1 since the date part is equal. | 651 |
56,227,936 | I am getting the following error when I try to see if my object is valid using `full_clean()`.
```sh
django.core.exceptions.ValidationError: {'schedule_date': ["'%(value)s' value has an invalid format. It must be in YYYY-MM-DD HH:MM[:ss[.uuuuuu]][TZ] format."]}
```
I have tried all the formats recommended here, but none of them work for me:
[Whats the correct format for django dateTime?](https://stackoverflow.com/questions/12255157/whats-the-correct-format-for-django-datetime)
I won't get error when I create the object like `Object.objects.create(...)`
Here is my `models.py`:
```py
from datetime import datetime, timedelta, date
from django.db import models
from django import forms
from django.utils import timezone
from django.core.exceptions import ValidationError
from userstweetsmanager.constants import LANGUAGE_CHOICES
def password_validator(value):
if len(value) < 6:
raise ValidationError(
str('is too short (minimum 6 characters)'),
code='invalid'
)
class User(models.Model):
name = models.TextField(max_length=30, unique=True)
password = models.TextField(validators=[password_validator])
twitter_api_key = models.TextField(null=True, blank=True)
twitter_api_secret_key = models.TextField(null=True, blank=True)
twitter_access_token = models.TextField(null=True, blank=True)
twitter_access_token_secret = models.TextField(null=True, blank=True)
expire_date = models.DateField(default=date.today() + timedelta(days=14))
language = models.TextField(choices=LANGUAGE_CHOICES, default='1')
def schedule_date_validator(value):
if value < timezone.now() or timezone.now() + timedelta(days=14) < value:
raise ValidationError(
str('is not within the range (within 14 days from today)'),
code='invalid'
)
def content_validator(value):
if len(value) > 140:
raise ValidationError(
str('is too long (maximum 140 characters)'),
code='invalid'
)
class Tweet(models.Model):
user = models.ForeignKey(User, on_delete=models.CASCADE)
content = models.TextField(validators=[content_validator])
schedule_date = models.DateTimeField(validators=[schedule_date_validator])
```
Here is my test code where the error occurs:
```py
def test_valid_tweet(self):
owner = User.objects.get(name="Hello")
tweet = Tweet(user=owner, content="Hello world!", schedule_date=timezone.now())
try:
tweet.full_clean() # error occurs here
pass
except ValidationError as e:
raise AssertionError("ValidationError should not been thrown")
tweet.save()
self.assertEqual(len(Tweet.objects.all()), 1)
```
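For reference, the validator as written rejects `timezone.now()` itself, because by the time `full_clean()` runs, `now()` has already advanced past the stored value; a hedged variant with a small grace period:
```py
from datetime import timedelta
from django.core.exceptions import ValidationError
from django.utils import timezone

def schedule_date_validator(value):
    now = timezone.now()
    # allow a few seconds of slack so "now" itself still validates
    if value < now - timedelta(seconds=5) or now + timedelta(days=14) < value:
        raise ValidationError(
            str('is not within the range (within 14 days from today)'),
            code='invalid'
        )
```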
As I tested in the `python manage.py shell`, creating the object directly does not cause an error, but calling `full_clean()` does. | 2019/05/20 | [
"https://Stackoverflow.com/questions/56227936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9633315/"
] | The issue was with the logic of the code. I specified a time range that won't allow even a microsecond of difference between `schedule_date` and `timezone.now()`.
After taking a look at the source code of `DateTimeField`, it seems that if my validator throws code="invalid", it will just show the above error message, which made me confused about where my code was wrong. | I solved this problem with this:
```
datetime.strptime(request.POST['date'], "%Y-%m-%dT%H:%M")
``` | 654 |
64,799,578 | I am working on a python script, where I will be passing a directory, and I need to get all log-files from it. Currently, I have a small script which watches for any changes to these files and then processes that information.
It works well, but only for a single file, with a hardcoded file path. How can I pass a directory to it and still watch all the files? My confusion is: since I am working on these files in a while loop which should always stay running, how can I do that for n number of files inside a directory?
Current code :
```
import time
f = open('/var/log/nginx/access.log', 'r')
while True:
line = ''
while len(line) == 0 or line[-1] != '\n':
tail = f.readline()
if tail == '':
time.sleep(0.1) # avoid busy waiting
continue
line += tail
print(line)
_process_line(line)
```
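A hedged sketch of extending this to every file in a directory: keep one handle per file and poll them in turn (file handling details are illustrative):
```
import os
import time

def follow_dir(path):
    handles = {}
    while True:
        # open any new files that appear in the directory
        for name in os.listdir(path):
            full = os.path.join(path, name)
            if os.path.isfile(full) and full not in handles:
                f = open(full, 'r')
                f.seek(0, os.SEEK_END)   # start at the end, like tail -f
                handles[full] = f
        got_line = False
        for full, f in handles.items():
            line = f.readline()
            if line:
                got_line = True
                print(full, line, end='')
        if not got_line:
            time.sleep(0.1)  # avoid busy waiting
```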
Question was already tagged for duplicate, but the requirement is to get changes line by line from all files inside directory. Other questions cover single file, which is already working. | 2020/11/12 | [
"https://Stackoverflow.com/questions/64799578",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1510701/"
] | You could use a generic / reusable approach based on the two-queries approach.
One SQL query to retrieve the entities' `IDs` and a second query with an `IN` predicate including the `IDs` from the second query.
Implementing a custom Spring Data JPA Executor:
```
@NoRepositoryBean
public interface AsimioJpaSpecificationExecutor<E, ID extends Serializable> extends JpaSpecificationExecutor<E> {
Page<ID> findEntityIds(Pageable pageable);
}
public class AsimioSimpleJpaRepository<E, ID extends Serializable> extends SimpleJpaRepository<E, ID>
implements AsimioJpaSpecificationExecutor<E, ID> {
private final EntityManager entityManager;
private final JpaEntityInformation<E, ID> entityInformation;
public AsimioSimpleJpaRepository(JpaEntityInformation<E, ID> entityInformation, EntityManager entityManager) {
super(entityInformation, entityManager);
this.entityManager = entityManager;
this.entityInformation = entityInformation;
}
@Override
public Page<ID> findEntityIds(Pageable pageable) {
CriteriaBuilder criteriaBuilder = this.entityManager.getCriteriaBuilder();
CriteriaQuery<ID> criteriaQuery = criteriaBuilder.createQuery(this.entityInformation.getIdType());
Root<E> root = criteriaQuery.from(this.getDomainClass());
// Get the entities ID only
criteriaQuery.select((Path<ID>) root.get(this.entityInformation.getIdAttribute()));
// Update Sorting
Sort sort = pageable.isPaged() ? pageable.getSort() : Sort.unsorted();
if (sort.isSorted()) {
criteriaQuery.orderBy(toOrders(sort, root, criteriaBuilder));
}
TypedQuery<ID> typedQuery = this.entityManager.createQuery(criteriaQuery);
// Update Pagination attributes
if (pageable.isPaged()) {
typedQuery.setFirstResult((int) pageable.getOffset());
typedQuery.setMaxResults(pageable.getPageSize());
}
return PageableExecutionUtils.getPage(typedQuery.getResultList(), pageable,
() -> executeCountQuery(this.getCountQuery(null, this.getDomainClass())));
}
protected static long executeCountQuery(TypedQuery<Long> query) {
Assert.notNull(query, "TypedQuery must not be null!");
List<Long> totals = query.getResultList();
long total = 0L;
for (Long element : totals) {
total += element == null ? 0 : element;
}
return total;
}
}
```
You can read more at <https://tech.asimio.net/2021/05/19/Fixing-Hibernate-HHH000104-firstResult-maxResults-warning-using-Spring-Data-JPA.html> | I found a workaround myself. Based upon this:
[How can I avoid the Warning "firstResult/maxResults specified with collection fetch; applying in memory!" when using Hibernate?](https://stackoverflow.com/questions/11431670/how-can-i-avoid-the-warning-firstresult-maxresults-specified-with-collection-fe/46195656#46195656)
**First: Get the Ids by pagination:**
```
@Query(value = "select distinct r.id from Reference r " +
"inner join r.persons " +
"left outer join r.categories " +
"left outer join r.keywords " +
"left outer join r.parentReferences " +
"order by r.id",
countQuery = "select count(distinct r.id) from Reference r " +
"inner join r.persons " +
"left outer join r.categories " +
"left outer join r.keywords " +
"left outer join r.parentReferences " +
"order by r.id")
Page<UUID> findsAllRelevantEntriesIds(Pageable pageable);
```
**Second: Use the Ids to do an `in` query**
```
@Query(value = "select distinct r from Reference r " +
"inner join fetch r.persons " +
"left outer join fetch r.categories " +
"left outer join fetch r.keywords " +
"left outer join fetch r.parentReferences " +
"where r.id in ?1 " +
"order by r.id",
countQuery = "select count(distinct r.id) from Reference r " +
"inner join r.persons " +
"left outer join r.categories " +
"left outer join r.keywords " +
"left outer join r.parentReferences ")
@QueryHints(value = {@QueryHint(name = "hibernate.query.passDistinctThrough", value = "false")},
forCounting = false)
List<Reference> findsAllRelevantEntriesByIds(UUID[] ids);
```
**Note:**
I get a `List<Reference>`, not a `Pageable`, so you have to build your `Pageable` on your own like so:
```
private Page<Reference> processResults(Pageable pageable, Page<UUID> result) {
List<Reference> references = referenceRepository.findsAllRelevantEntriesByIds(result.toList().toArray(new UUID[0]));
return new PageImpl<>(references, pageable, references.size());
}
```
This looks not nice and does two statements, but it queries with `limit`, so only the needed records get fetched. | 655 |
28,814,455 | I am appending a file via python based on the code that has been input by the user.
```
with open ("markbook.txt", "a") as g:
g.write(sn+","+sna+","+sg1+","+sg2+","+sg3+","+sg4)
```
`sn`, `sna`, `sg1`, `sg2`, `sg3`, `sg4` have all been entered by the user and when the program is finished a line will be added to the `'markbook.txt'` file in the format of:
```
00,SmithJE,a,b,b,b
01,JonesFJ,e,d,c,d
02,BlairJA,c,c,b,a
03,BirchFA,a,a,b,c
```
The issue is when the program is used again and the file is appended further, the new line is simply put on the end of the previous line. How do I place the appended text below the previous line? | 2015/03/02 | [
"https://Stackoverflow.com/questions/28814455",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4624147/"
] | Add a "\n" to the end of the write line.
So:
```
g.write(sn+","+sna+","+sg1+","+sg2+","+sg3+","+sg4+"\n")
``` | You're missing the new line character at the end of your string. Also, though string concatenation is completely fine in this case, you should be aware that Python has alternative options for formatting strings.
```
with open('markbook.txt', 'a') as g:
g.write('{},{},{},{},{},{}\n'
.format(sn, sna, sg1, sg2, sg3, sg4))
``` | 657 |
37,518,997 | My question is related to this earlier question - [Python subprocess usage](https://stackoverflow.com/questions/17242828/python-subprocess-and-running-a-bash-script-with-multiple-arguments)
I am trying to run this command using python
**nccopy -k 4 "<http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.2014.nc?air[408:603][2][20:34][26:40]>" foo.nc**
When I run the above command I should be able to see a file called foo.nc on my disk or a network error stating unable to access that URL or remote URL not found.
Currently the ESRL NOAA server is down - so when I run the above command I get
syntax error, unexpected $end, expecting SCAN\_ATTR or SCAN\_DATASET or SCAN\_ERROR
context: ^
NetCDF: Access failure
Location: file nccopy.c; line 1348
I should get the same error when I run the python script
This is the code I have and I am unable to figure out exactly how to proceed further -
I tried splitting up "-k 4" into two arguments and removing the quotes and I still get this error nccopy : invalid format : 4
Results of print(sys.argv) data.py
['data.py', '-k', '4', '<http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.2014.nc?air[480:603][20:34][26:40]>', 'foo.nc']
```
import numpy as np
import subprocess
import sys
url = '"http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.2014.nc?air[408:603][2][20:34][26:40]"'
outputFile = 'foo.nc'
arg1 = "-k 4"
arg3 = url
arg4 = outputFile
print (input)
subprocess.check_call(["nccopy",arg1,arg3,arg4])
``` | 2016/05/30 | [
"https://Stackoverflow.com/questions/37518997",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4033876/"
] | There's two dilemmas here.
One being that subprocess processes your arguments and tries to use `4` as a separate argument.
The other being that system calls still goes under normal shell rules, meaning that parameters and commands will be parsed for [metacharacters](http://www.tutorialspoint.com/unix/unix-quoting-mechanisms.htm) aka special characters. In this case you're wrapping `[` and `]`.
There for you need to separate each parameters and it's value into separate objects in the parameter-list, for instance `-k 4` should be `['-k', '4']` and you need to wrap parameters/values in `'...'` instead of `"..."`.
Try this, `shlex.split()` does the grunt work for you, and i swapped the encapsulation characters around the URL:
```
import numpy as np
import subprocess
import sys
import shlex
url = "'http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.2014.nc?air[408:603][2][20:34][26:40]'"
outputFile = 'foo.nc'
command_list = shlex.split('nccopy -k 4 ' + url + ' ' + outputFile)
print(command_list)
subprocess.check_call(command_list)
``` | Instead of arg1 = "-k 4", use two arguments instead.
```
import subprocess
url = 'http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.2014.nc?air[408:603][2][20:34][26:40]'
outputFile = 'foo.nc'
arg1 = "-k"
arg2 = "4"
arg3 = url
arg4 = outputFile
subprocess.check_call(["nccopy", arg1, arg2, arg3, arg4])
```
See also here [Python subprocess arguments](https://stackoverflow.com/questions/11679936/python-subprocess-arguments) | 659 |
54,677,761 | The following code generates the warning in tensorflow r1.12 python API:
```
#!/usr/bin/python3
import tensorflow as tf
M = tf.keras.models.Sequential();
M.add(tf.keras.layers.Dense(2));
```
The complete warning text is this:
```
WARNING: Logging before flag parsing goes to stderr.
W0213 15:50:07.239809 140701996246848 deprecation.py:506] From /home/matias/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/init_ops.py:1253: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
```
I have tried different approaches like initializing and calling a kernel initializer before adding Dense layer and passing it to Dense constructor, but it seems to not change anything. Is this warning inevitable? A 'yes' as an answer would be enough for me. | 2019/02/13 | [
"https://Stackoverflow.com/questions/54677761",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7906266/"
You are running TensorFlow 2.0 and it looks like `VarianceScaling.__init__` is deprecated. It might mean that Sequential will need to be more explicitly initialized in the future.
for example:
```py
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
# Adds a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another:
layers.Dense(64, activation='relu'),
# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])
``` | This is just a warning based on the [changes in Tensorflow 2.0](https://www.tensorflow.org/beta/guide/effective_tf2).
If you don't want to see these warnings, upgrade to TensorFlow 2.0. You can install the beta version via pip:
```
pip install tensorflow==2.0.0-beta1
``` | 662 |
2,641,665 | I've got a Django app that accepts uploads from [jQuery uploadify](http://www.uploadify.com/), a jQ plugin that uses flash to upload files and give a progress bar.
Files under about 150k work, but bigger files always fail and almost always at around 192k (that's 3 chunks) completed, sometimes at around 160k. The Exception I get is below.
```
exceptions.IOError
request data read error
File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 171, in _get_post
self._load_post_and_files()
File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 137, in _load_post_and_files
self._post, self._files = self.parse_file_upload(self.META, self.environ[\'wsgi.input\'])
File "/usr/lib/python2.4/site-packages/django/http/__init__.py", line 124, in parse_file_upload
return parser.parse()
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 192, in parse
for chunk in field_stream:
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
output = self._producer.next()
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 468, in next
for bytes in stream:
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
output = self._producer.next()
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 375, in next
data = self.flo.read(self.chunk_size)
File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 405, in read
return self._file.read(num_bytes)
```
When running locally on the Django development server, big files work.
I've tried setting my `FILE_UPLOAD_HANDLERS = ("django.core.files.uploadhandler.TemporaryFileUploadHandler",)` in case it was the memory upload handler, but it made no difference.
Does anyone know how to fix this? | 2010/04/14 | [
"https://Stackoverflow.com/questions/2641665",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/246265/"
] | Html.DropDownList() accepts a SelectList as a parameter which has a SelectedValue property. Specify the selected item when you create the SelectList and pass the SelectList to the Html.DropDownList(). | Here's an example that has 7 drop downs on the page, each with the same 5 options. Each drop down can have a different option selected.
In my view, I have the following code inside my form:
```
<%= Html.DropDownListFor(m => m.ValueForList1, Model.AllItems)%>
<%= Html.DropDownListFor(m => m.ValueForList2, Model.AllItems)%>
<%= Html.DropDownListFor(m => m.ValueForList3, Model.AllItems)%>
<%= Html.DropDownListFor(m => m.ValueForList4, Model.AllItems)%>
<%= Html.DropDownListFor(m => m.ValueForList5, Model.AllItems)%>
<%= Html.DropDownListFor(m => m.ValueForList6, Model.AllItems)%>
<%= Html.DropDownListFor(m => m.ValueForList7, Model.AllItems)%>
```
Then I have a viewmodel like this:
```
public class HomePageViewModel
{
public List<SelectListItem> AllItems { get; set; }
public string ValueForList1 { get; set; }
public string ValueForList2 { get; set; }
public string ValueForList3 { get; set; }
public string ValueForList4 { get; set; }
public string ValueForList5 { get; set; }
public string ValueForList6 { get; set; }
public string ValueForList7 { get; set; }
public HomePageViewModel()
{
AllItems = new List<SelectListItem>
{
new SelectListItem {Text = "First", Value = "First"},
new SelectListItem {Text = "Second", Value = "Second"},
new SelectListItem {Text = "Third", Value = "Third"},
new SelectListItem {Text = "Fourth", Value = "Fourth"},
new SelectListItem {Text = "Fifth", Value = "Fifth"},
};
}
}
```
Now in your controller method, declared like this:
```
public ActionResult Submit(HomePageViewModel viewModel)
```
The value for viewModel.ValueForList1 will be set to the selected value.
Of course, I'd suggest using some kind of enum or ids from a database as your value. | 664 |
73,662,597 | I have setup Glue Interactive sessions locally by following <https://docs.aws.amazon.com/glue/latest/dg/interactive-sessions.html>
However, I am not able to add any additional packages like HUDI to the interactive session
There are a few magic commands available, but I am not sure which one is appropriate and how to use it:
```
%additional_python_modules
%extra_jars
%extra_py_files
``` | 2022/09/09 | [
"https://Stackoverflow.com/questions/73662597",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19958017/"
] | It is a bit hard to understand what problem you are actually facing, as this is very basic SQL.
Use `EXISTS`:
```
select *
from a
where type = 'F'
and exists (select null from b where b.id = a.id and dt >= date '2022-01-01');
```
Or `IN`:
```
select *
from a
where type = 'F'
and id in (select id from b where dt >= date '2022-01-01');
```
Or, as the IDs are unique in both tables, join:
```
select a.*
from a
join b on b.id = a.id
where a.type = 'F'
and b.dt >= date '2022-01-01';
```
My favorite here is the `IN` clause, because you want to select data from table A where conditions are met. So no join needed, just a where clause, and `IN` is easier to read than `EXISTS`. | ```
SELECT *
FROM A
WHERE type='F'
AND id IN (
SELECT id
FROM B
WHERE DATE >= '2022-01-01' -- '2022' imo should be enough, need to check
);
```
I don't think joining is necessary. | 667 |
48,497,092 | I implemented multiple linear regression from scratch, but I could not find the slope and intercept; gradient descent gives me NaN values.
Here is my code; I have also included the IPython notebook file.
<https://drive.google.com/file/d/1NMUNL28czJsmoxfgeCMu3KLQUiBGiX1F/view?usp=sharing>
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
x = np.array([[ 1, 2104, 3],
[ 1, 1600, 3],
[ 1, 2400, 3],
[ 1, 1416, 2],
[ 1, 3000, 4],
[ 1, 1985, 4]])
y = np.array([399900, 329900, 369000, 232000, 539900, 299900])
def gradient_runner(x, y, altha, b, theta1, theta2):
initial_m1 = 0
initial_m2 = 0
initial_b = 0
N = len(x)
for i in range(0, len(y)):
x0 = x[i, 0]
x1 = x[i, 1]
x2 = x[i, 2]
yi = y[i]
h_theta = (theta1 * x1 + theta2 * x2 + b)
initial_b += -(1/N) * x0 * (yi - h_theta)
initial_m1 += -(1/N) * x1 * (yi - h_theta)
initial_m2 += -(1/N) * x2 * (yi - h_theta)
new_b = b - (altha * initial_b)
new_m1 = theta1 - (altha * initial_m1)
new_m2 = theta2 - (altha * initial_m2)
return new_b, new_m1, new_m2
def fit(x, y, alpha, iteration, b, m1, m2):
for i in range(0, iteration):
b, m1, m2 = gradient_runner(x, y, alpha, b, m1, m2)
return b, m1, m2
fit(x,y, 0.001, 1500, 0,0,0)
``` | 2018/01/29 | [
"https://Stackoverflow.com/questions/48497092",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5107898/"
] | This is not a programming issue, but an issue of your function. [Numpy can use different data types](https://docs.scipy.org/doc/numpy-1.10.1/user/basics.types.html). In your case it uses float64. You can check the largest number you can represent with this data format:
```
>>>sys.float_info
>>>sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308,
min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15,
mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1)
```
Unfortunately, your iteration is not convergent for `b, m1, m2`, at least not with the provided data set. In iteration 83 the values become too large to be represented as a float, which are displayed as `inf` and `-inf` for infinity. When this is fed into the next iterative step, Python returns `NaN` for "not a number".
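For reference, that overflow behaviour can be reproduced in a few interactive lines (a minimal sketch):
```
>>> import sys
>>> sys.float_info.max * 2       # exceeds the largest representable float64
inf
>>> float('inf') - float('inf')  # feeding inf back into the arithmetic
nan
```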
Though there are ways in Python to overcome limitations of float number representation in terms of precision, this is not a strategy you have to explore. The problem here is that your fit function is not convergent. Whether this is due to the function itself, its implementation by you or the chosen initial guesses, I can't decide. A common reason for non-convergent fit behaviour is also that the data set doesn't match the fit function. | Try scaling your x:
```py
def scale(x):
for j in range(x.shape[1]):
mean_x = 0
for i in range(len(x)):
mean_x += x[i,j]
mean_x = mean_x / len(x)
sum_of_sq = 0
for i in range(len(x)):
sum_of_sq += (x[i,j] - mean_x)**2
        stdev = (sum_of_sq / (x.shape[0] - 1)) ** 0.5  # square root added: this should be the standard deviation, not the variance
for i in range(len(x)):
x[i,j] = (x[i,j] - mean_x) / stdev
return x
```
or you can use a predefined standard scaler, such as scikit-learn's `StandardScaler`; a sketch follows.
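A minimal sketch of that idea (it assumes scikit-learn is installed; note the first column of ones is the intercept and should not be scaled):
```
import numpy as np
from sklearn.preprocessing import StandardScaler

x = np.array([[1, 2104, 3],
              [1, 1600, 3],
              [1, 2400, 3],
              [1, 1416, 2],
              [1, 3000, 4],
              [1, 1985, 4]], dtype=float)

scaler = StandardScaler()
x[:, 1:] = scaler.fit_transform(x[:, 1:])  # scale only the feature columns
```
 | 668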
70,964,456 | I had an issue like this on my Nano:
```
profiles = [ SERIAL_PORT_PROFILE ],
File "/usr/lib/python2.7/site-packages/bluetooth/bluez.py", line 176, in advertise_service
raise BluetoothError (str (e))
bluetooth.btcommon.BluetoothError: (2, 'No such file or directory')
```
I tried adding compatibility mode in the bluetooth.service file, reloading daemon, restarting bluetooth and then adding a serial port by doing
```
sudo sdptool add SP
```
These steps work fine on my Ubuntu 20.04 laptop, but on JetPack 4.5.1 they don't. I also checked: they don't work on the Jetson NX either.
I am really curious how to solve this issue; otherwise, any other way to use Bluetooth from Python code is welcome.
Thanks | 2022/02/03 | [
"https://Stackoverflow.com/questions/70964456",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18104741/"
] | You might want to have a look at the following article which shows how to do the connection with core Python Socket library
<https://blog.kevindoran.co/bluetooth-programming-with-python-3/>.
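For example, a minimal RFCOMM server sketch using only the built-in `socket` module (requires Linux/BlueZ and Python 3.3+; the adapter address and channel below are placeholders):
```
import socket

server = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM)
server.bind(("00:11:22:33:44:55", 4))  # placeholder adapter MAC and RFCOMM channel
server.listen(1)
client, address = server.accept()      # blocks until a device connects
data = client.recv(1024)
client.close()
server.close()
```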
The way BlueZ does this now is with the [Profile](https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/doc/profile-api.txt) API.
There is a Python example of using the Profile API at <https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/test/test-profile>
`hciattach`, `hciconfig`, `hcitool`, `hcidump`, `rfcomm`, `sdptool`, `ciptool`, and `gatttool` were [deprecated by the BlueZ](https://git.kernel.org/pub/scm/bluetooth/bluez.git/commit/?id=b1eb2c4cd057624312e0412f6c4be000f7fc3617) project in 2017. If you are following a tutorial that uses them, there is a chance that it might be out of date and that Linux systems will choose not to support them. | The solution was in the path of the bluetooth configuration file (inspired from this <https://developer.nvidia.com/embedded/learn/tutorials/connecting-bluetooth-audio>)
this answer : [bluetooth.btcommon.BluetoothError: (2, 'No such file or directory')](https://stackoverflow.com/questions/36675931/bluetooth-btcommon-bluetootherror-2-no-such-file-or-directory)
is not enough for jetson devices (jetpack). Although I didn't test if it works without changing the file mentioned in this link.
There is a `.conf` file that needs to be changed also : `/lib/systemd/system/bluetooth.service.d/nv-bluetooth-service.conf`
modify :
```
ExecStart=/usr/lib/bluetooth/bluetoothd -d --noplugin=audio,a2dp,avrcp
```
to :
```
ExecStart=/usr/lib/bluetooth/bluetoothd -C
```
after that it is necessary to do:
```
sudo systemctl daemon-reload
sudo systemctl restart bluetooth
```
Tested on Jetson Nano and NX with JetPack 4.5.1.
Thanks for the help ! | 669 |
24,879,641 | I've been looking everywhere for a step-by-step explanation for how to set up the following on an EC2 instance. For a new user I want things to be clean and correct but all of the 'guides' have different information and are really confusing.
My first thought is that I need to do the following
* Upgrade to the latest version of Python 2.7 (finding the download is easy, but installing on Linux isn't clear)
* Add Pip
* Add Easy\_Install
* Add Virtualenv
* Change default Python to be 2.7 instead of 2.x
* Install other packages(mechanize, beautifulsoup, etc in virtualenv)
Things that are unclear:
* Do I need yum? Is that there by default?
* Do I need to update .bashrc with anything?
* What is the 'preferred' method of installing additional python packages? How can I make sure I've done it right? Is `sudo pip install package_name` enough?
* What am I missing?
* when do I use sudo vs not?
* Do I need to add a site-packages directory or is that done by default? Why/why not? | 2014/07/22 | [
"https://Stackoverflow.com/questions/24879641",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3195487/"
] | I assume you may be unfamiliar with EC2, so I suggest going through this [FAQ](https://wiki.debian.org/Amazon/EC2/FAQ) before continuing with deploying an EC2 instance to run your Python2.7 application.
Anyway, now that you are somewhat more familiar with that, here's how I normally deploy a one-off instance through the EC2 web-interface in brief:
1. Log into the EC2 Dashboard with your credentials
2. Select the Launch Instance button
3. Pick a modern Linux distribution (since `sudo` is a \*nix command)
4. Select the specifications needed based on needs/costs.
5. Deploy the instance
6. Once the instance is started, log into the console as per the connect instructions for a standalone SSH client (select the running instance, then select the Connect button).
7. Once logged into the server using ssh you may administer that as a standard headless Linux server system.
My recommendation is rather than spending money (unless you are eligible for the free tier) on running an EC2 instance to learn all this, I suggest downloading VirtualBox or VMWare Player and play and learn with a locally running Linux image on your machine.
Now for your unclear bits: They are not much different than normal environments.
1. `yum` is a package management system built on top of `RPM`, or RedHat Package Manager. If you use other distributions they may have different package managers. For instance, other common server distributions like Debian and Ubuntu they will have `aptitude` or `apt-get`, ArchLinux will have `pacman`.
Also, in general you can just rely on the distro's python packages which you can just install using `[sudo] yum install python27` or `[sudo] apt-get install python-2.7`, depending on the Linux distribution that is being used.
2. `.bashrc` controls settings for your running shell, generally it won't do anything for your server processes. So no, you may safely leave that alone if you are following best practices for working with Python (which will follow).
3. Best practice generally is to have localized environments using `virtualenv` and not to install Python packages at the system level.
4. `sudo` is for tasks that require system level (root) privileges. You generally want to avoid using `sudo` unless necessary (such as installing system level packages).
5. No, `virtualenv` should take care of that for you. Since 1.4.1 it distributes its own version of `pip` and it will be installed from there.
So, what you seem to be missing is experience with running Python in a virtualenv. There are [good instructions](http://virtualenv.readthedocs.org/en/latest/) on the package's website that you might want to familiarize yourself with. | A script to build python in case the version you need is not in an available repo:
<https://gist.github.com/AvnerCohen/3e5cbe09bc40231869578ce7cbcbe9cc>
```
#!/bin/bash -e
NEW_VERSION="2.7.13"
CURRENT_VERSION="$(python -V 2>&1)"
if [[ "$CURRENT_VERSION" == "Python $NEW_VERSION" ]]; then
echo "Python $NEW_VERSION already installed, aborting."
exit 1
fi
echo "Starting upgrade from ${CURRENT_VERSION} to ${NEW_VERSION}"
if [ ! -d "python_update" ]; then
mkdir python_update
cd python_update
wget https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz
tar xfz Python-2.7.13.tgz
cd Python-2.7.13/
else
cd python_update
cd Python-2.7.13/
fi
./configure --prefix /usr/local/lib/python2.7.13 --enable-ipv6
make && make install
alternatives --install /usr/bin/python python /usr/local/lib/python2.7.13/bin/python 27130
update-alternatives --refresh python
update-alternatives --auto python
curl --silent --show-error --retry 5 https://bootstrap.pypa.io/get-pip.py | sudo python
ln -sf /usr/local/lib/python2.7.13/bin/pip /usr/bin/pip
pip install -U virtualenv
ln -sf /usr/local/lib/python2.7.13/bin/virtualenv /usr/bin/virtualenv
echo "DONE!"
``` | 670 |
40,145,127 | I'm trying to construct a URL based on what I get from an initial URL.
Example:
*URL1:*
```
http://some-url/rest/ids?configuration_path=project/Main/10-deploy
```
**Response here is** 123
*URL2:*
```
http://abc-bld/download/{RESPONSE_FROM_URL1_HERE}.latest_successful/artifacts/build-info.props
```
so my final URL will be:
```
http://tke-bld/download/123.latest_successful/artifacts/build-info.props
```
**Response here is** Some.Text.here.123
Then I'd like to grab 'Some.Text.here.123' and store it in a variable.
How can I accomplish this with python?
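For illustration, the intended flow might look like this hedged sketch (it assumes the `requests` library and that both endpoints return plain text):
```
import requests

id_value = requests.get(
    "http://some-url/rest/ids",
    params={"configuration_path": "project/Main/10-deploy"},
).text.strip()  # e.g. "123"

url2 = ("http://abc-bld/download/{0}.latest_successful"
        "/artifacts/build-info.props").format(id_value)
result = requests.get(url2).text.strip()  # e.g. "Some.Text.here.123"
```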
Any help would be much appreciated. Thanks | 2016/10/20 | [
"https://Stackoverflow.com/questions/40145127",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5622743/"
] | First, here is how you can import your variable without modifying extra.py, if you really want to.
You would have to use the `sys` module to get a reference to `foo` in the `extra` module.
```
import sys
from extra import *
print('1. Foo in globals ? {0}'.format('foo' in globals()))
setfoo()
print('2. Foo in globals ? {0}'.format('foo' in globals()))
# Check if extra has foo in it
print('2. Foo in extra ? {0}'.format(hasattr(sys.modules['extra'], 'foo')))
# Getting foo explicitly from extra module
foo = sys.modules['extra'].foo
print('3. Foo in globals ? {0}'.format('foo' in globals()))
print("Foo={0}".format(foo))
```
Output:
```
1. Foo in globals ? False
2. Foo in globals ? False
2. Foo in extra ? True
3. Foo in globals ? True
Foo=5
```
**Update for the later use case:**
Modify extra.py so that it gets the importer and updates its global variables:
```
# extra.py
import sys
def use(**kwargs):
_mod = sys.modules['__main__']
for k, v in kwargs.items():
setattr(_mod, k, v)
```
Now importing in any file remains the same:
```
#myfile.py
from extra import *
print use(x = 5, y = 8), str(x) + " times " + str(y) + " equals " + str(x*y)
```
Output:
```
None 5 times 8 equals 40
```
`None` appears because the use function returns nothing.
Note: it would be better to choose a more Pythonic solution for your use case, unless you are just trying to have a little fun with Python.
For Python scope rules, refer to:
[Short Description of the Scoping Rules?](https://stackoverflow.com/questions/291978/short-description-of-scoping-rules?answertab=active#tab-top) | Modules have namespaces which are variable names bound to objects. When you do `from extra import *`, you take the objects found in `extra`'s namespace and bind them to new variables in the new module. If `setfoo` has never been called, then `extra` doesn't have a variable called `foo` and there is nothing to bind in the new module namespace.
Had `setfoo` been called, then `from extra import *` would have found it. But things can still be funky. Suppose some assignment sets `extra.foo` to `42`. Well, the other module namespace doesn't know about that, so in the other module, `foo` would still be `5` but `extra.foo` would be `42`.
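A minimal sketch of that stale-binding behaviour (it assumes extra.py defines `foo = 5` at import time):
```
import extra
from extra import *   # binds this module's name `foo` to the object 5

extra.foo = 42        # rebinds only extra's own name
print(foo)            # still 5
print(extra.foo)      # 42
```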
Always keep in mind the difference between an object and the things that may be referencing the object at any given time. Objects have no idea which variables or containers happen to reference them (though they do keep a count of the number of references). If a variable or container is rebound to a different object, it doesn't change the binding of other variables or containers. | 671 |
25,585,785 | I'm using python 3.3. Consider this function:
```
def foo(action, log=False,*args) :
print(action)
print(log)
print(args)
print()
```
The following call works as expected:
```
foo("A",True,"C","D","E")
A
True
('C', 'D', 'E')
```
But this one doesn't.
```
foo("A",log=True,"C","D","E")
SyntaxError: non-keyword arg after keyword arg
```
Why is this the case?
Does this somehow introduce ambiguity? | 2014/08/30 | [
"https://Stackoverflow.com/questions/25585785",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/888862/"
] | Consider the following:
```
def foo(bar="baz", bat=False, *args):
...
```
Now if I call
```
foo(bat=True, "bar")
```
Where does "bar" go? Either:
* `bar = "bar", bat = True, args = ()`, or
* `bar = "baz", bat = True, args = ("bar",)`, or even
* `bar = "baz", bat = "bar", args = ()`
and there's no obvious choice (at least between the first two) as to which one it should be. We want `bat = True` to 'consume' the second argument slot, but it's not clear which order the remaining arguments should be consumed in: treating it as if `bat` doesn't exist at all and moving everything to the left, or treating it as if `bat` moved the "cursor" past itself on to the next argument. Or, if we wanted to do something truly strange, we could defend the decision to say that the second argument in the argument tuple *always* goes with the second positional argument, whether or not other keyword arguments were passed.
Regardless, we're left with something pretty confusing, and someone is going to be surprised which one we picked regardless of which one it is. Python aims to be simple and clean, and it wants to avoid making any language design choices that might be unintuitive. [There should be one-- and preferably only one --**obvious** way to do it](http://legacy.python.org/dev/peps/pep-0020/). | The function of keyword arguments is twofold:
1. To provide an interface to functions that does not rely on the order of the parameters.
2. To provide a way to reduce ambiguity when passing parameters to a function.
Providing a mixture of keyword and ordered arguments is only a problem when you provide the keyword arguments **before** the ordered arguments. Why is this?
Two reasons:
1. It is confusing to read. If you're providing ordered parameters, why would you label some of them and not others?
2. The algorithm to process the arguments would be needlessly complicated. You can provide keyword args after your 'ordered' arguments. This makes sense because it is clear that everything is **ordered** up until the point that you employ keywords. However, if you employ keywords between ordered arguments, there is no clear way to determine whether you are still ordering your arguments. (In Python 3 you can also make such options keyword-only; see the sketch below.)
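A short sketch of the keyword-only form, which resolves the original example without ambiguity:
```
def foo(action, *args, log=False):
    print(action, log, args)

foo("A", "C", "D", "E", log=True)  # prints: A True ('C', 'D', 'E')
```
 | 673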
53,965,764 | Hi, I'm learning to code in Python and thought it would be cool to automate a task I usually do for my roommates. I write out a list of names and dates for each month so that everyone knows whose turn it is to do the dishes.
Here's my code:
```
def dish_day_cycle(month, days):
print('Dish Cycle For %s:' % month)
dish_list = ['Jen', 'Zack', 'Hector', 'Arron']
days = days + 1
for day in range(1, days):
for i in dish_list:
print('%s %s : %s' % (month, day, i))
```
The problem is that it repeats everyone's name for each and every day, obviously not what I want. I need it to print only one name per day. Not this:
```
>>> dish_day_cycle(month, days)
Dish Cycle For December:
December 1 : Jen
December 1 : Zack
December 1 : Hector
December 1 : Arron
December 2 : Jen
December 2 : Zack
December 2 : Hector
December 2 : Arron
December 3 : Jen
December 3 : Zack
December 3 : Hector
December 3 : Arron
December 4 : Jen
December 4 : Zack
December 4 : Hector
December 4 : Arron
December 5 : Jen
December 5 : Zack
December 5 : Hector
December 5 : Arron
```
Please let me know how I could correct this function to work properly. | 2018/12/29 | [
"https://Stackoverflow.com/questions/53965764",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10844873/"
] | You used a nested for loop, so for every day each of the names is printed along with that day. Use only the outer loop, and calculate whose turn it is. It should be something like:
```
for day in range(1, days):
print('%s %s : %s' % (month, day, dish_list[day % len(dish_list)]))
```
assuming you and your roommates split the dishes equally. | You can loop through both lists together, repeating the shorter one with `itertools.cycle`:
```
import itertools
for day, person in zip(range(1, days), itertools.cycle(dish_list)):
print('{} {} : {}'.format(month, day, person))
```
Update:
`zip` pairs elements of the two iterables, the `range` object of days and `dish_list`, to create tuple pairs from them. However, `zip` only produces pairs up to the shortest iterable. `itertools.cycle` circumvents this problem, so `zip` cycles back through `dish_list`. The for loop will now step through these two together, rather than in the nested fashion of your original code.
Documentation will probably explain better than I just did: [`zip`](https://docs.python.org/3/library/functions.html#zip), [`itertools.cycle`](https://docs.python.org/3/library/itertools.html#itertools.cycle) | 674 |
44,861,989 | I have an xlsx file with columns in various colors.
I want to read only the white columns of this Excel file in Python using pandas, but I have no clue how to do this.
I am able to read the full Excel file into a dataframe, but then I lose the information about the coloring of the columns and I don't know which columns to remove and which to keep. | 2017/07/01 | [
"https://Stackoverflow.com/questions/44861989",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4402942/"
] | **(Disclosure: I'm one of the authors of the library I'm going to suggest)**
With [StyleFrame](https://github.com/DeepSpace2/StyleFrame) (which wraps pandas) you can read an excel file into a dataframe without losing the style data.
Consider the following sheet:
[![enter image description here](https://i.stack.imgur.com/SQ96I.png)](https://i.stack.imgur.com/SQ96I.png)
And the following code:
```
from styleframe import StyleFrame, utils
# from StyleFrame import StyleFrame, utils (if using version < 3.X)
sf = StyleFrame.read_excel('test.xlsx', read_style=True)
print(sf)
# b p y
# 0 nan 3 1000.0
# 1 3.0 4 2.0
# 2 4.0 5 42902.72396767148
sf = sf[[col for col in sf.columns
if col.style.fill.fgColor.rgb in ('FFFFFFFF', utils.colors.white)]]
# "white" can be represented as 'FFFFFFFF' or
# '00FFFFFF' (which is what utils.colors.white is set to)
print(sf)
# b
# 0 nan
# 1 3.0
# 2 4.0
``` | This cannot be done in pandas. You will need to use another library to read the xlsx file and determine which columns are white. I'd suggest using the `openpyxl` library.
Then your script will follow these steps (a sketch follows the list):
1. Open xlsx file
2. Read and filter the data (you can access the cell color) and save the results
3. Create pandas dataframe
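A hedged sketch of those steps (the fill checks are assumptions: an unfilled or white header cell may report a `fill_type` of `None` or an RGB of `'FFFFFFFF'`/`'00FFFFFF'`):
```
import openpyxl
import pandas as pd

ws = openpyxl.load_workbook('data.xlsx').active
white_cols = []
for col in ws.iter_cols(min_row=1, max_row=1):  # inspect the header row fills
    cell = col[0]
    if cell.fill.fill_type is None or cell.fill.start_color.rgb in ('FFFFFFFF', '00FFFFFF'):
        white_cols.append(cell.column_letter)

df = pd.read_excel('data.xlsx', usecols=','.join(white_cols))
```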
Edit: Switched `xlrd` to `openpyxl` as `xlrd` is no longer actively maintained | 676 |
51,165,672 | When I execute the code below, is there any way to keep the Python interpreter running the code without error messages popping up?
Since I don't know how to differentiate integers and strings,
when `int(result)` executes and `result` contains letters, it raises an error that stops the program.
Is there any way around this?
Here is my code:
```
result = input('Type in your number,type y when finished.\n')
int(result)
if isinstance(result,str):
print('finished')
``` | 2018/07/04 | [
"https://Stackoverflow.com/questions/51165672",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10029884/"
] | Actually, with Python and many other languages, you can differentiate types.
When you execute `int(result)`, the `int` builtin assumes the parameter value is able to be turned into an integer. If not, say the string is `abc123`, it cannot turn that string into an integer and will raise an exception.
An easy way around this is to check first with the string method `isdigit()`, before we evaluate `int(result)`.
```
# We assume result is always a string, and therefore always has the method `.isdigit`
if result.isdigit():
int(result)
else:
# Choose what happens if it is not of the correct type. Remove this statement if nothing.
pass
```
Note that `.isdigit()` only recognizes whole numbers: `10.4` will be seen as *not* an integer, but `10` will be.
I recommend this approach over `try` and `except` clauses; however, that is a valid solution too. | You can put everything that might throw an error in a try block, and have an except block that keeps the program flowing.
By the way, I think in your code it should be `isinstance(result, int)`, not `isinstance(result, str)`.
In your case,
```
result = input('Type in your number,type y when finished.\n')
try:
result = int(result)
except:
pass
if isinstance(result,int):
print('finished')
``` | 677 |
69,216,484 | Hello, I'm trying to sort my microscopy images.
I'm using Python 3.7.
The file names are like this (t0, t1, t2, ...):
```
S18_b0s17t0c0x62672-1792y6689-1024.tif
S18_b0s17t1c0x62672-1792y6689-1024.tif
S18_b0s17t2c0x62672-1792y6689-1024.tif
.
.
.
S18_b0s17t145c0x62672-1792y6689-1024
```
I tried "sorted" the list but it was like this
[![enter image description here](https://i.stack.imgur.com/SNJHw.png)](https://i.stack.imgur.com/SNJHw.png)
Can someone give me some tips on how to sort by the time sequence?
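For illustration, something along these lines is what I'm after (a hedged sketch that sorts on the number between 't' and 'c'; `file_list` is a placeholder for the list of names above):
```
import re

def t_index(name):
    match = re.search(r't(\d+)c', name)  # the frame number between 't' and 'c'
    return int(match.group(1)) if match else -1

files = sorted(file_list, key=t_index)
```
 | 2021/09/17 | [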
"https://Stackoverflow.com/questions/69216484",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16075554/"
] | **Updated answer for your updated question:**
**The simple answer to your question is that you can use [string.Split](https://learn.microsoft.com/en-us/dotnet/api/system.string.split?view=net-5.0) to separate that string at the commas.** But the fact that you have to do this is indicative of a larger problem with your database schema.
Right now I'm inferring that your table looks something like this:
**ai**
| command | properties |
| --- | --- |
| command1 | property1,property2,property3,property4 |
| command2 | property1,property2 |
You should never put comma delimited values into a database. Try something like this:
**ai**
| command | property |
| --- | --- |
| command1 | property1 |
| command1 | property2 |
| command1 | property3 |
| command1 | property4 |
| command2 | property1 |
| command2 | property2 |
Your query becomes: `SELECT property FROM ai WHERE command = @command`
I would however like to add that even this improved schema is problematic. You don't want to duplicate strings and use them as id's. It's prone to typos and problems when renaming. Instead do something like this:
**command**
| id (int) | name (varchar) |
| --- | --- |
| 1 | command1 |
| 2 | command2 |
**property**
| id (int) | name (varchar) |
| --- | --- |
| 1 | property1 |
| 2 | property2 |
| 3 | property3 |
| 4 | property4 |
**commandproperty**
| commandID (int) | propertyID (int) |
| --- | --- |
| 1 | 1 |
| 1 | 2 |
| 1 | 3 |
| 1 | 4 |
| 2 | 1 |
| 2 | 2 |
Your query roughly becomes: `SELECT command.name as command, property.name as property from commandProperty LEFT JOIN command ON command.id = commandID LEFT JOIN property ON property.id = propertyID WHERE commandID = (SELECT TOP 1 id FROM command WHERE name = @command)`
There might be typos in that query. I haven't actually executed it. Also, it would be best practice to turn these tables into a view that looks like my second example.
**My old answer:**
There seems to be something missing in the question.
Is the problem that the array can not be expanded beyond `property4`? If so try using a `List<string>`.
Is the problem that you want to associate column values with column names? In that case try using a [`Dictionary<string,object>`](https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.dictionary-2?view=net-5.0) (or `Dictionary<string,T>` where `T` is a datatype common to all the columns).
Alternatively, you can try using C#'s built in [`DataTable`](https://learn.microsoft.com/en-us/dotnet/api/system.data.datatable?view=net-5.0). I find them to be a bit verbose to use, but they will probably work for your needs. | I'm editing the answer based on the new information.
I'd still consider using my Dapper wrapper package.
<https://www.nuget.org/packages/Cworth.DapperExtensions/#>
Create a model class that matches the fields returned in your select.
```
public class MyModel
{
public string Command { get; set; }
public string Properties { get; set; }
}
```
Use the NuGet package manager to install the package referenced above.
Update your data access class to add the using statement:
`using Cworth.DapperExtensions;`
Update your method
```
public static async Task<string[]> SelectData(string data)
{
var sqlRepo = new SqlRepo(_connectionString);
var results = await sqlRepo.GetList<MyModel>("MyStoredProc", new { command = data });
return results.Select(r => r.Properties).ToArray();
}
```
Note the above assumes you have created a stored procedure in SQL named "MyStoredProc" that matches your select with parameter "command". | 680
59,126,742 | I am playing with wxPython and trying to set the position of a frame:
```
import wx
app = wx.App()
p = wx.Point(200, 200)
frame = wx.Frame(None, title = 'test position', pos = p)
frame.Show(True)
print('frame position: ', frame.GetPosition())
app.MainLoop()
```
even though `print('frame position: ', frame.GetPosition())` shows the correct position, the frame is shown in the top-left corner of the screen.
Alternatively I tried
```
frame.SetPosition(p)
frame.Move(p)
```
without success.
My environment: Arch Linux 5.3.13, Python 3.8.0, wxPython 4.0.7, Openbox 3.6.1.
On Cinnamon the code works as expected. How can I solve this on Openbox?
Edit 07.12.2019:
I could set the position of a dialog in the Openbox config `~/.config/openbox/rc.xml`:
```
<application name="fahrplan.py"
class="Fahrplan.py"
groupname="fahrplan.py"
groupclass="Fahrplan.py"
title="Fahrplan *"
type="dialog">
<position force="no">
<x>760</x>
<y>415</y>
</position>
</application>
```
I got the name, class, etc. from obxprop. x and y are calculated to center a dialog of 400 x 250 px on a screen of 1920 x 1080 px.
This static solution is not suitable for me. I want to place dynamically generated popups. | 2019/12/01 | [
"https://Stackoverflow.com/questions/59126742",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3455890/"
] | I had the same problem under Windows and played around with the style flags. With the wxICONIZE style set active, the window finally used the positioning information. | The position is provided to the window manager as a "hint". It is totally up to the window manager whether it will actually honor the hint or not. Check the openbox settings or preferences and see if there is anything relevant that can be changed. | 681
56,451,482 | Within my main window I have a table of class QTreeView. The second column contains subjects of mails. With a click of a push button I want to search for a specific character, let's say "Y". Now I want the table to jump to the first found subject beginning with the letter "Y".
See the following example.
[![enter image description here](https://i.stack.imgur.com/rUtxL.png)](https://i.stack.imgur.com/rUtxL.png)
When you pick any cell in the second column ("subject") and start typing "y", this works: the table highlights the first occurrence (see the underlined item "Your Phone Bill"). It would even scroll to that cell if it were out of sight.
[![enter image description here](https://i.stack.imgur.com/BVVLB.png)](https://i.stack.imgur.com/BVVLB.png)
I want exactly this - but implemented on a push button, see "Search Subj 'Y'", signal "on\_pbSearch\_Y\_clicked()".
Full functional code (so far):
```
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import sys
from PyQt5.QtGui import *
from PyQt5.QtCore import *
from PyQt5.QtGui import *
from PyQt5.QtWidgets import *
class App(QWidget):
MAIL_RANGE = 4
ID, FROM, SUBJECT, DATE = range(MAIL_RANGE)
def __init__(self):
super().__init__()
self.left = 10
self.top = 10
self.width = 640
self.height = 240
self.initUI()
self.dataView.setSelectionMode(QAbstractItemView.ExtendedSelection) # <- enable selection of rows in tree
self.dataView.setEditTriggers(QAbstractItemView.NoEditTriggers) # <- disable editing items in tree
for i in range(0, 2):
self.dataView.resizeColumnToContents(i)
self.pbSearch_Y = QPushButton(self)
self.pbSearch_Y.setText("Search Subj 'Y'")
self.pbSearch_Y.move(500,0)
self.pbSearch_Y.show()
# connect handlers
self.pbSearch_Y.clicked.connect(self.on_pbSearch_Y_clicked)
def on_pbSearch_Y_clicked(self):
pass
def initUI(self):
self.setGeometry(self.left, self.top, self.width, self.height)
self.dataGroupBox = QGroupBox("Inbox")
self.dataView = QTreeView()
self.dataView.setRootIsDecorated(False)
self.dataView.setAlternatingRowColors(True)
dataLayout = QHBoxLayout()
dataLayout.addWidget(self.dataView)
self.dataGroupBox.setLayout(dataLayout)
model = self.createMailModel(self)
self.dataView.setModel(model)
self.addMail(model, 1, 'service@github.com', 'Your Github Donation','03/25/2017 02:05 PM')
self.addMail(model, 2, 'support@github.com', 'Github Projects','02/02/2017 03:05 PM')
self.addMail(model, 3, 'service@phone.com', 'Your Phone Bill','01/01/2017 04:05 PM')
self.addMail(model, 4, 'service@abc.com', 'aaaYour Github Donation','03/25/2017 02:05 PM')
self.addMail(model, 5, 'support@def.com', 'bbbGithub Projects','02/02/2017 03:05 PM')
self.addMail(model, 6, 'service@xyz.com', 'cccYour Phone Bill','01/01/2017 04:05 PM')
self.dataView.setColumnHidden(0, True)
mainLayout = QVBoxLayout()
mainLayout.addWidget(self.dataGroupBox)
self.setLayout(mainLayout)
self.show()
def createMailModel(self,parent):
model = QStandardItemModel(0, self.MAIL_RANGE, parent)
model.setHeaderData(self.ID, Qt.Horizontal, "ID")
model.setHeaderData(self.FROM, Qt.Horizontal, "From")
model.setHeaderData(self.SUBJECT, Qt.Horizontal, "Subject")
model.setHeaderData(self.DATE, Qt.Horizontal, "Date")
return model
def addMail(self, model, mailID, mailFrom, subject, date):
model.insertRow(0)
model.setData(model.index(0, self.ID), mailID)
model.setData(model.index(0, self.FROM), mailFrom)
model.setData(model.index(0, self.SUBJECT), subject)
model.setData(model.index(0, self.DATE), date)
if __name__ == '__main__':
app = QApplication(sys.argv)
ex = App()
sys.exit(app.exec_())
```
How can I achieve this? | 2019/06/04 | [
"https://Stackoverflow.com/questions/56451482",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10598535/"
] | You have to do the following:
* Use the [`match()`](https://doc.qt.io/qt-5/qabstractitemmodel.html#match) method of the view's model to find the QModelIndex given the text.
* Use the [`scrollTo()`](https://doc.qt.io/qt-5/qabstractitemview.html#scrollTo) method of the view to scroll to the QModelIndex.
* Use the [`select()`](https://doc.qt.io/qt-5/qitemselectionmodel.html#select-2) method of the view's [`selectionModel()`](https://doc.qt.io/qt-5/qabstractitemview.html#selectionModel) to select the row.
```py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import sys
from PyQt5 import QtCore, QtGui, QtWidgets
class App(QtWidgets.QWidget):
MAIL_RANGE = 4
ID, FROM, SUBJECT, DATE = range(MAIL_RANGE)
def __init__(self):
super().__init__()
self.initUI()
def initUI(self):
self.setGeometry(10, 10, 640, 240)
self.dataGroupBox = QtWidgets.QGroupBox("Inbox")
self.dataView = QtWidgets.QTreeView(
rootIsDecorated=False,
alternatingRowColors=True,
selectionMode=QtWidgets.QAbstractItemView.ExtendedSelection,
editTriggers=QtWidgets.QAbstractItemView.NoEditTriggers,
selectionBehavior=QtWidgets.QAbstractItemView.SelectRows,
)
dataLayout = QtWidgets.QHBoxLayout()
dataLayout.addWidget(self.dataView)
self.dataGroupBox.setLayout(dataLayout)
model = App.createMailModel(self)
self.dataView.setModel(model)
for i in range(0, 2):
self.dataView.resizeColumnToContents(i)
self.addMail(model, 1, 'service@github.com', 'Your Github Donation','03/25/2017 02:05 PM')
self.addMail(model, 2, 'support@github.com', 'Github Projects','02/02/2017 03:05 PM')
self.addMail(model, 3, 'service@phone.com', 'Your Phone Bill','01/01/2017 04:05 PM')
self.addMail(model, 4, 'service@abc.com', 'aaaYour Github Donation','03/25/2017 02:05 PM')
self.addMail(model, 5, 'support@def.com', 'bbbGithub Projects','02/02/2017 03:05 PM')
self.addMail(model, 6, 'service@xyz.com', 'cccYour Phone Bill','01/01/2017 04:05 PM')
self.dataView.setColumnHidden(0, True)
self.leSearch = QtWidgets.QLineEdit()
self.pbSearch = QtWidgets.QPushButton(
"Search", clicked=self.on_pbSearch_clicked
)
hlay = QtWidgets.QHBoxLayout()
hlay.addWidget(self.leSearch)
hlay.addWidget(self.pbSearch)
mainLayout = QtWidgets.QVBoxLayout(self)
mainLayout.addLayout(hlay)
mainLayout.addWidget(self.dataGroupBox)
@staticmethod
def createMailModel(parent):
model = QtGui.QStandardItemModel(0, App.MAIL_RANGE, parent)
for c, text in zip(
(App.ID, App.FROM, App.SUBJECT, App.DATE),
("ID", "From", "Subject", "Date"),
):
model.setHeaderData(c, QtCore.Qt.Horizontal, text)
return model
def addMail(self, model, mailID, mailFrom, subject, date):
model.insertRow(0)
for c, text in zip(
(App.ID, App.FROM, App.SUBJECT, App.DATE),
(mailID, mailFrom, subject, date),
):
model.setData(model.index(0, c), text)
@QtCore.pyqtSlot()
def on_pbSearch_clicked(self):
text = self.leSearch.text()
self.leSearch.clear()
if text:
# find index
start = self.dataView.model().index(0, 2)
ixs = self.dataView.model().match(
start,
QtCore.Qt.DisplayRole,
text,
hits=1,
flags=QtCore.Qt.MatchStartsWith,
)
if ixs:
ix = ixs[0]
# scroll to index
self.dataView.scrollTo(ix)
# select row
ix_from = ix.sibling(ix.row(), 0)
ix_to = ix.sibling(
ix.row(), self.dataView.model().columnCount() - 1
)
self.dataView.selectionModel().select(
QtCore.QItemSelection(ix_from, ix_to),
QtCore.QItemSelectionModel.SelectCurrent,
)
else:
self.dataView.clearSelection()
if __name__ == "__main__":
app = QtWidgets.QApplication(sys.argv)
ex = App()
ex.show()
sys.exit(app.exec_())
``` | I'll be honest, I don't use GUIs with Python, but here is how you could do it, replacing my arbitrary placeholder functions with the needed PyQt ones:
```py
mostWantedChar = 'Y'
foundElements = []
for element in dataView.listElements():
if element[0] == mostWantedChar:
        foundElements.append(element + '@' + element.Line()) # keep the line too, in case you need its content later (just do a split('@'))
        element.Line().Highlight()  # placeholder: highlight the matching row
waitClickFromPushButton()
return foundElements
``` | 682 |
4,089,843 | I'm looking to implement a SOAP web service in python on top of IIS. Is there a recommended library that would take a given Python class and expose its functions as web methods? It would be great if said library would also auto-generate a WSDL file based on the interface. | 2010/11/03 | [
"https://Stackoverflow.com/questions/4089843",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11208/"
] | There is an article by Doug Hellmann that evaluates various SOAP Tools
* <http://doughellmann.com/2009/09/01/evaluating-tools-for-developing-with-soap-in-python.html>
Other ref:
* <http://wiki.python.org/moin/WebServices>
* <http://pywebsvcs.sourceforge.net/> | Take a look at SOAPpy (<http://pywebsvcs.sourceforge.net/>). It allows you to expose your functions as web methods, but you have to add a line of code (manually) to register your function with the exposed web service. It is fairly easy to do. Also, it doesn't auto generate wsdl for you.
Here's an example of how to create your web service, and expose a function:
```
server = SOAPpy.SOAPServer(("", 8080))
server.registerFunction(self.hello)
``` | 683 |
11,387,575 | The [python sample source code](https://developers.google.com/drive/examples/python#complete_source_code) goes through the details of authentication, etc. I am looking for a simple upload to a Google Drive folder that has public writable permissions. (I plan to implement authorization at a later point.)
I want to replace the code below to upload the file to a Google Drive folder instead.
```
f = open('output.txt', 'w')  # opened for writing; without 'w' the write() calls below would fail
for line in allLines:
    f.write(line)
f.close()
```
(If it makes any difference, I plan to run this through Google App Engine).
Thanks. | 2012/07/08 | [
"https://Stackoverflow.com/questions/11387575",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1055761/"
] | You can't. All requests to the Drive API need authentication (source: <http://developers.google.com/drive/about_auth>) | As Wooble said, you cannot do this without authentication. You can use this service + file-upload widget to let your website visitors upload files to your Google Drive folder: <https://github.com/cloudwok/file-upload-embed/> | 685 |
32,341,972 | I'm creating a small python program that iterates through a folder structure and performs a task on every audio file that it finds.
I need to identify which files are audio and which are 'other' (e.g. jpegs of the album cover) that I want the process to ignore and just move onto the next file.
From searching on StackOverflow/Google/etc the sndhdr module appears at the top of most lists - I can't seem to get the sndhdr.what() method to return anything but 'None' though, no matter how many \*.mp3 files I throw at it. My outline implementation is below, can anyone tell me what I'm doing wrong?
```
def import_folder(folder_path):
''' Imports all audio files found in a folder structure
:param folder_path: The absolute path of the folder
:return: True/False depending on whether the process was successful
'''
# Remove any spaces to ensure the folder is located correctly
folder_path = folder_path.strip()
for subdir, dirs, files in os.walk(folder_path):
for file in files:
audio_file = os.path.join(subdir, file)
print sndhdr.what(audio_file)
# The 'real' method will perform the task here
```
For example:
```
rootdir = '/home/user/FolderFullOfmp3Files'
import_folder(rootdir)
>>> None
>>> None
>>> None
...etc
``` | 2015/09/01 | [
"https://Stackoverflow.com/questions/32341972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1701514/"
] | This is likely happening because you are drawing your screenshot in your `Activity#onCreate()`. At this point, your View has not measured its dimensions, so `View#getDrawingCache()` will return null because width and height of the view will be 0.
You can move your screenshot code away from `onCreate()` or you could use a `ViewTreeObserver.OnGlobalLayoutListener` to listen for when the view is about to be drawn.
Only after `View#getWidth()` returns a non-zero integer can you get your screenshot. | got the solution from @ugo's suggestion
Put this in your onCreate function:
```
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate( savedInstanceState );
setContentView( R.layout.activity_share );
//
...
///
myLayout.getViewTreeObserver().addOnGlobalLayoutListener(new ViewTreeObserver.OnGlobalLayoutListener() {
@Override
public void onGlobalLayout() {
//take a screenshot
screenShot();
        }
    });
}
``` | 686 |
60,410,173 | I have a pip requirements file that includes specific cpu-only versions of torch and torchvision. I can use the following pip command to successfully install my requirements.
```bash
pip install --requirement azure-pipelines-requirements.txt --find-links https://download.pytorch.org/whl/torch_stable.html
```
My requirements file looks like this
```none
coverage
dataclasses
joblib
matplotlib
mypy
numpy
pandas
param
pylint
pyro-ppl==1.2.1
pyyaml
scikit-learn
scipy
seaborn
torch==1.4.0+cpu
torchvision==0.5.0+cpu
visdom
```
This works from bash, but how do I invoke pip with the `find-links` option from inside a conda environment yaml file? My current attempt looks like this
```yaml
name: build
dependencies:
- python=3.6
- pip
- pip:
- --requirement azure-pipelines-requirements.txt --find-links https://download.pytorch.org/whl/torch_stable.html
```
But when I invoke
```bash
conda env create --file azure-pipeline-environment.yml
```
I get this error.
>
> Pip subprocess error:
>
> ERROR: Could not find a version that satisfies the requirement torch==1.4.0+cpu (from -r E:\Users\tim\Source\Talia\azure-pipelines-requirements.txt (line 25)) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
>
> ERROR: No matching distribution found for torch==1.4.0+cpu (from -r E:\Users\tim\Source\Talia\azure-pipelines-requirements.txt (line 25))
>
>
> CondaEnvException: Pip failed
>
>
>
How do I specify the `find-links` option when invoking pip from a conda environment yaml file? | 2020/02/26 | [
"https://Stackoverflow.com/questions/60410173",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/575530/"
] | [This example](https://github.com/conda/conda/blob/54e4a91d0da4d659a67e3097040764d3a2f6aa16/tests/conda_env/support/advanced-pip/environment.yml) shows how to specify options for pip
Specify the global pip option first:
```
name: build
dependencies:
- python=3.6
- pip
- pip:
- --find-links https://download.pytorch.org/whl/torch_stable.html
- --requirement azure-pipelines-requirements.txt
``` | Found the answer in the pip documentation [here](https://pip.pypa.io/en/stable/reference/pip_install/#requirement-specifiers). I can add the `find-links` option to my requirements file, so my conda environment yaml file becomes
```yaml
name: build
dependencies:
- python=3.6
- pip
- pip:
- --requirement azure-pipelines-requirements.txt
```
and my pip requirements file becomes
```none
--find-links https://download.pytorch.org/whl/torch_stable.html
coverage
dataclasses
joblib
matplotlib
mypy
numpy
pandas
param
pylint
pyro-ppl==1.2.1
pyyaml
scikit-learn
scipy
seaborn
torch==1.4.0+cpu
torchvision==0.5.0+cpu
visdom
``` | 687 |
45,894,208 | I'm using Spyder to do some small projects with Keras, and every now and then (I haven't pinned down what it is in the code that makes it appear) I get this message:
```
File "~/.local/lib/python3.5/site-packages/google/protobuf/descriptor_pb2.py", line 1771, in <module>
__module__ = 'google.protobuf.descriptor_pb2'
TypeError: A Message class can only inherit from Message
```
Weirdly, this exception is not raised if I execute the program outside of Spyder, using the terminal. I've looked around and I have found no one who has encountered this error while using Keras.
Restarting Spyder makes it go away, but it's frustrating. What could be causing it? | 2017/08/26 | [
"https://Stackoverflow.com/questions/45894208",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1718331/"
] | Ok, I found the cause: interrupting the execution before Keras fully loads.
As said before, restarting Spyder (or just the console) solves it. | I had the same problem with Spyder, which happened when it was trying to reload modules that were already loaded. I solved it by disabling the UMR (User Module Reloader) option in "preferences -> python interpreter". | 688
65,266,224 | I'm new to Python, so please kindly help; I don't know much.
I'm working on a project which asks for a command; if the command is equal to "help", it should say how to use the program. I can't seem to do this: every time I use the if statement, it still prints the help section whether the command exists or not.
**Example: someone enters a command that doesn't exist in the script, and it still prints the help section.**
```
print("welcome, to use this, please input the options below")
print ("help | exit")
option = input("what option would you like to use? ")
if help:
print("this is a test, there will be an actual help section soon.")
else:
print("no such command")
``` | 2020/12/12 | [
"https://Stackoverflow.com/questions/65266224",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14759499/"
] | If you are using gnu-efi, use `uefi_call_wrapper()` to call UEFI functions.
```c
RT->GetTime(time, NULL); // Program hangs
uefi_call_wrapper(RT->GetTime, 2, time, NULL); // Okay
```
The reason is the different calling convention between UEFI (which uses Microsoft x64 calling convention) and Linux (which uses System V amd64 ABI). By default, gcc will generate the code in Linux format, so we need to explicitly tell it to generate it in UEFI format.
You can see the difference by performing an `objdump`. | I think you forgot to initialize RT.
```
RT = SystemTable->RuntimeServices;
```
Your code is very similar to one of the examples (the one at section 4.7.1) of the Unified Extensible Firmware Interface Specification 2.6. I doubt you haven't read it, but just in case:
<https://www.uefi.org/sites/default/files/resources/UEFI%20Spec%202_6.pdf> | 691 |
25,310,746 | I have a large set of images. I want to change their background to a specific color, let's say green. All of the images have a transparent background. Is there a way to perform this action using Python-Fu scripting in GIMP, or some other tool available to do this specific task in an automated fashion?
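(For illustration, a hedged Pillow-based sketch of this kind of batch job, outside GIMP; the folder names are placeholders and the output folder is assumed to exist:)
```
from pathlib import Path
from PIL import Image

for path in Path('images').glob('*.png'):                  # hypothetical input folder
    img = Image.open(path).convert('RGBA')
    green = Image.new('RGBA', img.size, (0, 255, 0, 255))  # opaque green backdrop
    Image.alpha_composite(green, img).convert('RGB').save(Path('out') / path.name)
```
 | 2014/08/14 | [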
"https://Stackoverflow.com/questions/25310746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/811502/"
] | The fact is that when you query a model (via a QuerySet method, or indirectly via a ForeignKey) **you get non-polymorphic instances** - in contrast to SQLAlchemy, where you get polymorphic instances.
This is because the fetched data corresponds only to the data you're accessing (and its ancestors, since they are known beforehand). By default, Django does not do any kind of `select_related` to get the children, so you're stuck with the base (i.e. current) class model of the foreign key or query set.
This means:
```
Vehicle.objects.get(pk=1).__class__ == Vehicle
```
will be always True, and:
```
Surprise.objects.get(pk=1).items.all()[0].__class__ == Vehicle
```
will be always True as well.
(**assume** for these examples that vehicle with pk=1 exists, surprise with pk=1 exists, and has at least one item)
There's no clean solution for this EXCEPT by knowing your child classes. As you said: accessing variables like .car or .truck (considering classes Car and Truck exist) is the way. **However**, if you hit the wrong child class (e.g. you hit `vehicle.car` when `vehicle` should actually be a `Truck` instance) you will get an `ObjectDoesNotExist` error. **Disclaimer**: I don't know what would happen if you have two child classes with the same name in different modules.
If you want to have **polymorphic** behavior, which can abstract you from testing every possible subclass, an application exists (haven't actually used it): <https://django-polymorphic.readthedocs.org/en/latest/> | According to Django Documentation:
`If you have a Place that is also a Restaurant, you can get from the Place object to the Restaurant object by using the lower-case version of the model name:`
```
p = Place.objects.get(id=12)
p.restaurant
```
Further to that:
>
> **However, if p in the above example was not a Restaurant (it had been created directly as a Place object or was the parent of some other class), referring to p.restaurant would raise a Restaurant.DoesNotExist exception.**
>
>
>
So you answered the question on your own, you need to check the car attr, because that is what is pointing to the model you are looking for, if there is no car attr then the object was not created by the Car class. | 692 |
5,627,954 | A simple program for reading a CSV file inside a ZIP archive:
```py
import csv, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
for row in csv.DictReader(items_file):
pass
```
works in Python 2.7:
```none
$ python2.7 test_zip_file_py3k.py ~/data.zip
$
```
but not in Python 3.2:
```none
$ python3.2 test_zip_file_py3k.py ~/data.zip
Traceback (most recent call last):
File "test_zip_file_py3k.py", line 8, in <module>
for row in csv.DictReader(items_file):
File "/somedir/python3.2/csv.py", line 109, in __next__
self.fieldnames
File "/somedir/python3.2/csv.py", line 96, in fieldnames
self._fieldnames = next(self.reader)
_csv.Error: iterator should return strings, not bytes (did you open the file
in text mode?)
```
The `csv` module in Python 3 wants to see a text file, but `zipfile.ZipFile.open` returns a `zipfile.ZipExtFile` that is always treated as binary data.
How does one make this work in Python 3? | 2011/04/11 | [
"https://Stackoverflow.com/questions/5627954",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/638434/"
] | You can wrap it in an [io.TextIOWrapper](http://docs.python.org/library/io.html#io.TextIOWrapper).
```
items_file = io.TextIOWrapper(items_file, encoding='your-encoding', newline='')
```
Should work. | [Lennart's answer](https://stackoverflow.com/questions/5627954/py3k-how-do-you-read-a-file-inside-a-zip-file-as-text-not-bytes/5631786#5631786) is on the right track (Thanks, Lennart, I voted up your answer) and it **almost** works:
```
$ cat test_zip_file_py3k.py
import csv, io, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
items_file = io.TextIOWrapper(items_file, encoding='iso-8859-1', newline='')
for idx, row in enumerate(csv.DictReader(items_file)):
print('Processing row {0}'.format(idx))
$ python3.1 test_zip_file_py3k.py ~/data.zip
Traceback (most recent call last):
File "test_zip_file_py3k.py", line 7, in <module>
items_file = io.TextIOWrapper(items_file,
encoding='iso-8859-1',
newline='')
AttributeError: readable
```
The problem appears to be that [io.TextIOWrapper](http://docs.python.org/library/io.html#io.TextIOWrapper)'s first required parameter is a **buffer**, not a file object.
This appears to work:
```
items_file = io.TextIOWrapper(io.BytesIO(items_file.read()))
```
This seems a little complex and also it seems annoying to have to read in a whole (perhaps huge) zip file into memory. Any better way?
Here it is in action:
```
$ cat test_zip_file_py3k.py
import csv, io, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
items_file = io.TextIOWrapper(io.BytesIO(items_file.read()))
for idx, row in enumerate(csv.DictReader(items_file)):
print('Processing row {0}'.format(idx))
$ python3.1 test_zip_file_py3k.py ~/data.zip
Processing row 0
Processing row 1
Processing row 2
...
Processing row 250
``` | 694 |
55,633,118 | I would like to create an application that runs from the CLI in Windows, like the awscli program.
It should be built as a Python script, and when run it should perform some action like
```
samplepgm login -u akhil -p raju
```
Like this. Could you guide me in creating this kind of CLI application on Windows with Python? | 2019/04/11 | [
"https://Stackoverflow.com/questions/55633118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1556933/"
] | Check out `argparse` for something basic:
<https://docs.python.org/3/library/argparse.html>
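A minimal `argparse` sketch of the `samplepgm login` command from the question (only the program and option names come from the question; the rest is assumed):
```
import argparse

parser = argparse.ArgumentParser(prog='samplepgm')
subparsers = parser.add_subparsers(dest='command', required=True)

login = subparsers.add_parser('login')
login.add_argument('-u', '--user', required=True)
login.add_argument('-p', '--password', required=True)

args = parser.parse_args()
if args.command == 'login':
    print('logging in as', args.user)
```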
For a better library check out `Click`:
<https://click.palletsprojects.com/en/7.x/>
Others:
* <https://pypi.org/project/argh/>
* <http://docopt.org/> | I have implemented this using [Pyinstaller](https://pyinstaller.readthedocs.io/en/stable/) which will build an exe file of the python files. Will work with all versions of Python
First you need to create your python cli script for the task, then build the exe using
`pyinstaller --onefile -c -F -n Cli-latest action.py`
If you need to get inputs from users then you can do an infinite loop which will prompt with `input`.
(Try `input("My Dev:-$ ")`.) | 703
54,435,024 | I have 50 years of data. I need to choose a combination of 30 years out of it such that the values corresponding to them reach a particular threshold, but the possible number of combinations for `50C30` comes out to be `47129212243960`.
How to calculate it efficiently?
```
Prs_100
Yrs
2012 425.189729
2013 256.382494
2014 363.309507
2015 578.728535
2016 309.311562
2017 476.388839
2018 441.479570
2019 342.267756
2020 388.133403
2021 405.007245
2022 316.108551
2023 392.193322
2024 296.545395
2025 467.388190
2026 644.588971
2027 301.086631
2028 478.492618
2029 435.868944
2030 467.464995
2031 323.465049
2032 391.201598
2033 548.911349
2034 381.252838
2035 451.175339
2036 281.921215
2037 403.840004
2038 460.514250
2039 409.134409
2040 312.182576
2041 320.246886
2042 290.163454
2043 381.432168
2044 259.228592
2045 393.841815
2046 342.999972
2047 337.491898
2048 486.139010
2049 318.278012
2050 385.919542
2051 309.472316
2052 307.756455
2053 338.596315
2054 322.508536
2055 385.428138
2056 339.379743
2057 420.428529
2058 417.143175
2059 361.643381
2060 459.861622
2061 374.359335
```
I need only a 30-year combination whose `Prs_100` mean value reaches a certain threshold; I can then stop calculating further outcomes. On searching SO, I found a particular approach using an `apriori` algorithm but couldn't really figure out the values of support in it.
I have used Python's `itertools.combinations`:
```
list(combinations(dftest.index,30))
```
but it was not working in this case. (A lazy variant that avoids building the full list is sketched below.)
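A lazy variant of the attempt above, which never materializes the full list and stops at the first qualifying set (a hedged sketch; 460 is the example threshold):
```
from itertools import combinations

threshold = 460
for years in combinations(dftest.index, 30):
    if dftest.loc[list(years), 'Prs_100'].mean() > threshold:
        print(years)   # first qualifying 30-year set
        break
```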
Expected outcome:
Let's say I found a 30-year set whose `Prs_100` mean value is more than 460; then I'll save that 30-year output as a result, and it will be my desired outcome.
How can I do it? | 2019/01/30 | [
"https://Stackoverflow.com/questions/54435024",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5617580/"
] | This `vendor.js` is working fine:
```
require('datatables.net');
require('datatables.net-bs4');
window.JSZip = require('jszip');
require('datatables.net-buttons');
require('datatables.net-buttons/js/buttons.flash.js');
require('datatables.net-buttons/js/buttons.html5.js');
``` | If you are not using Bootstrap, you should use this:
```
var table = $('#example').DataTable( {
buttons: [
'copy', 'excel', 'pdf'
] } );
table.buttons().container()
.appendTo( $('<#elementWhereYouNeedToShowThem>', table.table().container() ) );
``` | 704 |
53,529,807 | I am trying to balance my dataset, but I am struggling to find the right way to do it. Let me set up the problem. I have a multiclass dataset with the following class weights:
```
class weight
2.0 0.700578
4.0 0.163401
3.0 0.126727
1.0 0.009294
```
As you can see, the dataset is pretty unbalanced. What I would like to do is obtain a balanced dataset in which each class is represented with the same weight.
There are a lot of questions regarding this, but:
* [Scikit-learn balanced subsampling](https://stackoverflow.com/questions/23455728/scikit-learn-balanced-subsampling): this subsamples can be overlapping, which for my approach is wrong. Moreover, I would like to get that using sklearn or packages that are well tested.
* [How to perform undersampling (the right way) with python scikit-learn?](https://stackoverflow.com/questions/34831676/how-to-perform-undersampling-the-right-way-with-python-scikit-learn): here they suggest to use an unbalanced dataset with a balance class weight vector, however, I need to have this balance dataset, is not a matter of which model and which weights.
* <https://github.com/scikit-learn-contrib/imbalanced-learn>: a lot of questions refer to this package. Below is an example of how I am trying to use it.
Here is the example:
```
from imblearn.ensemble import EasyEnsembleClassifier
eec = EasyEnsembleClassifier(random_state=42, sampling_strategy='not minority', n_estimators=2)
eec.fit(data_for, label_all.loc[data_for.index,'LABEL_O_majority'])
new_data = eec.estimators_samples_
```
However, the returned indexes are all the indexes of the initial data and they are repeated `n_estimators` times.
Here is the result:
```
[array([ 0, 1, 2, ..., 1196, 1197, 1198]),
array([ 0, 1, 2, ..., 1196, 1197, 1198])]
```
Finally, a lot of techniques use oversampling, but I would like to avoid them. Only for class `1` can I tolerate oversampling, as it is very predictable.
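The closest thing I have found so far is `RandomUnderSampler` from the same contrib package; here is a minimal sketch (note it undersamples every class down to the minority size, so it ignores my tolerance for oversampling class `1`):

```
from imblearn.under_sampling import RandomUnderSampler

rus = RandomUnderSampler(random_state=42)
X_balanced, y_balanced = rus.fit_resample(
    data_for, label_all.loc[data_for.index, 'LABEL_O_majority'])
# y_balanced now contains the same number of samples per class
```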
I am wondering whether sklearn or this contrib package really has no function that does this. | 2018/11/28 | [
"https://Stackoverflow.com/questions/53529807",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6394941/"
] | Try something like this…
```
### Exporting SQL Server table to JSON
Clear-Host
#--Establishing connection to SQL Server --#
$InstanceName = "."
$connectionString = "Server=$InstanceName;Database=msdb;Integrated Security=True;"
#--Main Query --#
$query = "SELECT * FROM sysjobs"
$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = $connectionString
$connection.Open()
$command = $connection.CreateCommand()
$command.CommandText = $query
$result = $command.ExecuteReader()
$table = new-object "System.Data.DataTable"
$table.Load($result)
#--Exporting data to the screen --#
$table | select $table.Columns.ColumnName | ConvertTo-Json
$connection.Close()
# Results
{
"job_id": "5126aca3-1003-481c-ab36-60b45a7ee757",
"originating_server_id": 0,
"name": "syspolicy_purge_history",
"enabled": 1,
"description": "No description available.",
"start_step_id": 1,
"category_id": 0,
"owner_sid": [
1
],
"notify_level_eventlog": 0,
"notify_level_email": 0,
"notify_level_netsend": 0,
"notify_level_page": 0,
"notify_email_operator_id": 0,
"notify_netsend_operator_id": 0,
"notify_page_operator_id": 0,
"delete_level": 0,
"date_created": "\/Date(1542859767703)\/",
"date_modified": "\/Date(1542859767870)\/",
"version_number": 5
}
``` | The "rub" here is that the SQL command `FOR JSON AUTO`, even with ExecuteScalar, will truncate the JSON output, and outputting to a variable of `VARCHAR(max)` will still truncate. This is using the SQL 2016 LocalDB bundled with Visual Studio, if that matters. | 705 |
62,537,194 | I am trying to solve a problem on HackerRank and I am stuck on this.
Help me write Python code for this question.
Mr. Vincent works in a door mat manufacturing company. One day, he designed a new door mat with the following specifications:
Mat size must be N X M. (N is an odd natural number, and M is 3 times N.)
The design should have 'WELCOME' written in the center.
The design pattern should only use |, . and - characters.
Sample Designs
```
Size: 7 x 21
---------.|.---------
------.|..|..|.------
---.|..|..|..|..|.---
-------WELCOME-------
---.|..|..|..|..|.---
------.|..|..|.------
---------.|.---------
``` | 2020/06/23 | [
"https://Stackoverflow.com/questions/62537194",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13766802/"
] | Just a simplified version:
```
n,m = input().split()
n = int(n)
m = int(m)
#printing first half
for i in range(n//2):
t = int((2*i)+1)
print(('.|.'*t).center(m, '-'))
#printing middle line
print('WELCOME'.center(m,'-'))
#printing last half
for i in reversed(range(n//2)):
t = int((2*i)+1)
print(('.|.'*t).center(m, '-'))
``` | I was curious to see whether there was a better solution to this HackerRank problem than mine, so I landed up here.
### My solution with a single `for` loop which successfully passed all the test cases:
```
# Enter your code here. Read input from STDIN. Print output to STDOUT
if __name__ == "__main__":
row_num, column_num = map(int, input().split())
num_list = []
# take user input N and M as stated in the problem: https://www.hackerrank.com/challenges/designer-door-mat
# where N is the number of rows and M is the number of columns
# For this explanation we assume N(row_num)=7, M(column_num)=21 for simplicity,
# the code below works on all the test cases provided by hackerrank and has cleared the submission
# this for loop is for iterating through the number of rows: 0th row to (n-1)th row
for i in range(0, row_num):
# steps to be done at each row
if i < row_num//2:
# BEFORE REACHING THE MIDDLE OF THE DOOR MAT
# we need to generate a pattern of ".|." aligned with '-' in the following manner:
# steps below will generate ".|." pattern 1, 3, 5 times
# On i=0, times = 2*(0+1)-1 = 1 => ---------.|.---------
# On i=1, times = 2*(1+1)-1 = 3 => ------.|..|..|.------
# On i=2, times = 2*(2+1)-1 = 5 => ---.|..|..|..|..|.---
times = 2*(i+1)-1
# record these numbers since we need to do the reverse when we reach the middle of the "door mat"
num_list.append(times)
# recall the assignment on Text Alignment: https://www.hackerrank.com/challenges/text-alignment/problem
# since that idea will be used below - at least this is how I look at it.
# Essentially, this part of the code eliminates the need for a for loop iterating through each column to generate '-'
# which would otherwise have to be used to print '-' surrounding the pattern .|. in each row
# instead we can use the column number "M" from the user input to determine how much alignment is to be done
print(('.|.'*times).center(column_num, '-'))
elif i == (row_num//2):
# UPON REACHING EXACTLY IN THE MIDDLE OF THE DOOR MAT
# once middle of the row is reached, all we need to print is "WELCOME" aligned with '-' : -------WELCOME-------
print('WELCOME'.center(column_num, '-'))
else:
# AFTER CROSSING THE MIDDLE OF THE DOOR MAT
# as soon as we cross the middle of the row, we need to print the same pattern as done above but this time in reverse order
# thankfully we have already stored the generated numbers in a list which is num_list = [1, 3, 5]
# all we need to do is just fetch them one by one in reverse order
# which is what is done by: num_list[(row_num-1)-i]
# row_num = 7, i = 4, num_list[(7-1)-4] --> num_list[2] --> 5 => ---.|..|..|..|..|.---
# row_num = 7, i = 5, num_list[(7-1)-5] --> num_list[1] --> 3 => ------.|..|..|.------
# row_num = 7, i = 6, num_list[(7-1)-6] --> num_list[0] --> 1 => ---------.|.---------
print(('.|.'*num_list[(row_num-1)-i]).center(column_num, '-'))
# DONE!
``` | 707 |
73,726,556 | Is there a way to launch a script running on Python 3 from a Python 2 script?
To explain briefly, I need to start the Python 3 script when starting the Python 2 script.
The Python 3 script is a video stream server (using Flask) and it has to run simultaneously with the Python 2 script (not the Python 3 script first and then the Python 2 script).
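For illustration, a minimal sketch of the kind of launcher I mean, runnable from Python 2 (it assumes `python3` is on the PATH and `script_python3.py` sits next to this script):

```
import subprocess

def launch_stream_server():
    # Start the Flask stream server as a separate process so that
    # this Python 2 script keeps running alongside it.
    return subprocess.Popen(['python3', 'script_python3.py'])

server = launch_stream_server()
# ... the rest of the Python 2 logic runs here, concurrently ...
```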
The ideal would be a function in the Python 2 script which "opens a cmd window and writes" into it: python3 script\_python3.py | 2022/09/15 | [
"https://Stackoverflow.com/questions/73726556",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19869727/"
] | Simply, you can use:
```
const requestedTime=document.querySelector(".entry-date")?.value;
```
"." is used for class names; if you have an "example" class, you should write it as ".example".
"?" is called optional chaining; it means that if there is no such object, it returns "undefined" instead of an error.
".value" is used for getting the value (if more than one object has the same class, it will return an error). | There is no `getElementByClassName` method, only `getElementsByClassName`. So you would have to change your code to `.getElementsByClassName(...)[0]`.
```js
var children=document.getElementsByClassName("entry-date published")[0].textContent
console.log(children);
```
```html
<time class="updated" datetime="2022-09-14T00:54:04+05:30" itemprop="dateModified">14th September 2022</time>
<time class="entry-date published" datetime="2022-02-09T18:35:52+05:30" itemprop="datePublished">9th February 2022</time></span> <span class="byline">
``` | 717 |
17,846,964 | I used Qt Designer to generate my code. I want my 5 text boxes to pass 5 arguments to a Python function (the function is not in this code) when the Run button is released. I'm not really sure how to do this; I'm very new to PyQt.
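Conceptually, the sketch below is the kind of wiring I am after; the handler name `run_all` is made up, and I am not sure it is the right PyQt idiom (the connection would live inside `setupUi`, after the line edits exist):

```
def run_all(script, hosts, change, user, password):
    # placeholder for the real function that should receive the 5 values
    print(script, hosts, change, user, password)

# inside setupUi(), after the widgets are created:
self.Run.released.connect(lambda: run_all(
    self.ScriptLine.text(), self.HostLine.text(), self.ChangeLine.text(),
    self.lineEdit.text(), self.lineEdit_2.text()))
```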
```
from PyQt4 import QtCore, QtGui
try:
_fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
def _fromUtf8(s):
return s
try:
_encoding = QtGui.QApplication.UnicodeUTF8
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig, _encoding)
except AttributeError:
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig)
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
self.runText = ""
self.scriptText = ""
self.changeText = ""
MainWindow.setObjectName(_fromUtf8("MainWindow"))
MainWindow.resize(580, 200)
self.centralwidget = QtGui.QWidget(MainWindow)
self.centralwidget.setObjectName(_fromUtf8("centralwidget"))
self.Run = QtGui.QPushButton(self.centralwidget)
self.Run.setGeometry(QtCore.QRect(250, 150, 75, 23))
self.Run.setObjectName(_fromUtf8("Run"))
self.Script = QtGui.QLabel(self.centralwidget)
self.Script.setGeometry(QtCore.QRect(70, 10, 46, 13))
self.Script.setObjectName(_fromUtf8("Script"))
self.Hosts = QtGui.QLabel(self.centralwidget)
self.Hosts.setGeometry(QtCore.QRect(270, 10, 46, 13))
self.Hosts.setObjectName(_fromUtf8("Hosts"))
self.CHange = QtGui.QLabel(self.centralwidget)
self.CHange.setGeometry(QtCore.QRect(470, 10, 46, 13))
self.CHange.setObjectName(_fromUtf8("CHange"))
self.ScriptLine = QtGui.QLineEdit(self.centralwidget)
self.ScriptLine.setGeometry(QtCore.QRect(30, 30, 113, 20))
self.ScriptLine.setObjectName(_fromUtf8("ScriptLine"))
self.HostLine = QtGui.QLineEdit(self.centralwidget)
self.HostLine.setGeometry(QtCore.QRect(230, 30, 113, 20))
self.HostLine.setObjectName(_fromUtf8("HostLine"))
self.ChangeLine = QtGui.QLineEdit(self.centralwidget)
self.ChangeLine.setGeometry(QtCore.QRect(430, 30, 113, 20))
self.ChangeLine.setText(_fromUtf8(""))
self.ChangeLine.setObjectName(_fromUtf8("ChangeLine"))
self.Cla = QtGui.QLabel(self.centralwidget)
self.Cla.setGeometry(QtCore.QRect(260, 80, 211, 16))
self.Cla.setText(_fromUtf8(""))
self.Cla.setObjectName(_fromUtf8("Cla"))
self.Sla = QtGui.QLabel(self.centralwidget)
self.Sla.setGeometry(QtCore.QRect(260, 100, 211, 16))
self.Sla.setText(_fromUtf8(""))
self.Sla.setObjectName(_fromUtf8("Sla"))
self.Hla = QtGui.QLabel(self.centralwidget)
self.Hla.setGeometry(QtCore.QRect(260, 120, 201, 16))
self.Hla.setText(_fromUtf8(""))
self.Hla.setObjectName(_fromUtf8("Hla"))
self.Cla_2 = QtGui.QLabel(self.centralwidget)
self.Cla_2.setGeometry(QtCore.QRect(250, 60, 111, 16))
self.Cla_2.setObjectName(_fromUtf8("Cla_2"))
self.label = QtGui.QLabel(self.centralwidget)
self.label.setGeometry(QtCore.QRect(210, 100, 46, 13))
self.label.setObjectName(_fromUtf8("label"))
self.label_2 = QtGui.QLabel(self.centralwidget)
self.label_2.setGeometry(QtCore.QRect(210, 120, 46, 13))
self.label_2.setObjectName(_fromUtf8("label_2"))
self.label_3 = QtGui.QLabel(self.centralwidget)
self.label_3.setGeometry(QtCore.QRect(200, 80, 46, 13))
self.label_3.setObjectName(_fromUtf8("label_3"))
self.lineEdit = QtGui.QLineEdit(self.centralwidget)
self.lineEdit.setGeometry(QtCore.QRect(30, 80, 113, 20))
self.lineEdit.setObjectName(_fromUtf8("lineEdit"))
self.lineEdit_2 = QtGui.QLineEdit(self.centralwidget)
self.lineEdit_2.setGeometry(QtCore.QRect(430, 80, 113, 20))
font = QtGui.QFont()
font.setFamily(_fromUtf8("Wingdings 2"))
font.setPointSize(1)
self.lineEdit_2.setFont(font)
self.lineEdit_2.setAutoFillBackground(False)
self.lineEdit_2.setObjectName(_fromUtf8("lineEdit_2"))
self.label_4 = QtGui.QLabel(self.centralwidget)
self.label_4.setGeometry(QtCore.QRect(60, 60, 81, 16))
self.label_4.setObjectName(_fromUtf8("label_4"))
self.label_5 = QtGui.QLabel(self.centralwidget)
self.label_5.setGeometry(QtCore.QRect(460, 60, 46, 13))
self.label_5.setObjectName(_fromUtf8("label_5"))
MainWindow.setCentralWidget(self.centralwidget)
self.retranslateUi(MainWindow)
QtCore.QObject.connect(self.ScriptLine, QtCore.SIGNAL(_fromUtf8("textChanged(QString)")), self.Sla.setText)
QtCore.QObject.connect(self.HostLine, QtCore.SIGNAL(_fromUtf8("textChanged(QString)")), self.Hla.setText)
QtCore.QObject.connect(self.ChangeLine, QtCore.SIGNAL(_fromUtf8("textChanged(QString)")), self.Cla.setText)
QtCore.QObject.connect(self.Run, QtCore.SIGNAL(_fromUtf8("released()")), self.ScriptLine.clear)
QtCore.QObject.connect(self.Run, QtCore.SIGNAL(_fromUtf8("released()")), self.HostLine.clear)
QtCore.QObject.connect(self.Run, QtCore.SIGNAL(_fromUtf8("released()")), self.ChangeLine.clear)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow", None))
self.Run.setText(_translate("MainWindow", "Run", None))
self.Script.setText(_translate("MainWindow", "Script", None))
self.Hosts.setText(_translate("MainWindow", "Hosts", None))
self.CHange.setText(_translate("MainWindow", "Change", None))
self.ScriptLine.setPlaceholderText(_translate("MainWindow", "Enter script file name", None))
self.HostLine.setPlaceholderText(_translate("MainWindow", "Enter Host file name", None))
self.ChangeLine.setPlaceholderText(_translate("MainWindow", "Enter Change file name", None))
self.Cla_2.setText(_translate("MainWindow", "Files to be used:", None))
self.label.setText(_translate("MainWindow", "Script:", None))
self.label_2.setText(_translate("MainWindow", "Hosts:", None))
self.label_3.setText(_translate("MainWindow", "Change:", None))
self.label_4.setText(_translate("MainWindow", "User Name", None))
self.label_5.setText(_translate("MainWindow", "Password", None))
if __name__ == "__main__":
import sys
app = QtGui.QApplication(sys.argv)
MainWindow = QtGui.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
``` | 2013/07/25 | [
"https://Stackoverflow.com/questions/17846964",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2616664/"
] | It's because you're doing [array dereferencing](http://schlueters.de/blog/archives/138-Features-in-PHP-trunk-Array-dereferencing.html) which is only available in PHP as of version 5.4. You have it locally but your webhost does not. That's why you should always make sure your development environment matches your production environment. | It's because you're using something called array dereferencing, which basically means that you can access a value from an array returned by a function directly. This is only supported in PHP >= 5.4.
To solve your issue, do something like this:
```
function pem2der($pem_data) {
$exploded = explode('-----', $pem_data);
$retStr = base64_decode(trim($exploded[2]));
return $retStr;
}
``` | 720 |
21,495,524 | How can I read only the first line of ping results using Python? Reading ping results with Python returns multiple lines, so I would like to know how to read and save just the first line of output. The code should not only work for ping but also for tools like "ifstat", which again return multi-line results. | 2014/02/01 | [
"https://Stackoverflow.com/questions/21495524",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3259921/"
] | Run the command using subprocess.check\_output, and return the first element of splitlines():
```
import subprocess
subprocess.check_output(['ping', '-c1', '192.168.0.1']).splitlines()[0]
```
| You can use [`subprocess.check_output`](http://docs.python.org/2/library/subprocess.html#subprocess.check_output) and [`str.splitlines`](http://docs.python.org/2/library/stdtypes.html#str.splitlines). Here [`subprocess.check_output`](http://docs.python.org/2/library/subprocess.html#subprocess.check_output) runs the command and returns the output in a string, and then you can get the first line using `str.splitlines()[0]`.
```
>>> import subprocess
>>> out = subprocess.check_output('ping google.com -c 1', shell=True)
>>> out.splitlines()[0]
'PING google.com (173.194.36.78) 56(84) bytes of data.'
```
Note that running an untrusted command with `shell=True` can be dangerous. So, a better way would be:
```
>>> import shlex
>>> command = shlex.split('ping google.com -c 1')
>>> out = subprocess.check_output(command)
>>> out.splitlines()[0]
'PING google.com (173.194.36.65) 56(84) bytes of data.'
``` | 721 |
52,745,705 | So I'm trying to create an AI just for fun, but I've run into a problem. Currently, when you say `Hi` it will say `Hi` back. If you say something it doesn't know, like `Hello`, it will ask you to define it, and then add it to a dictionary variable `knowledge`. Then whenever you say `Hello`, it translates it into `Hi` and will say `Hi`.
But I want it to loop through what you've defined as `hi` and say a random thing that means `hi`. So if you tell it that `Hello`, `What's Up`, and `Greetings` all mean hi, saying any of them will work and it will return `Hi`. But how would I make it say either `Hi`, `Hello`, `What's Up`, or `Greetings` once it knows them? (just examples)
I have tried this:
```
def sayHello():
for index, item in enumerate(knowledge):
if knowledge[item] == 'HI':
print(knowledge[index] + "! I'm Orion!")
```
However, I get this error:
```
Traceback (most recent call last):
File "python", line 28, in <module>
File "python", line 12, in sayHello
KeyError: 0
``` | 2018/10/10 | [
"https://Stackoverflow.com/questions/52745705",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10481614/"
] | First thing you need is your initial dictionary with only `hi`. Then we say something to our friend. We check all the values; if our phrase is not in there, we ask for the phrase to be defined. We create a new key with that definition along with a default empty list. We then append the phrase to that list. Otherwise, we search which value list our phrase lies in and select a random word from that list.
```
from random import choice
knowledge = {'hi': ['hi']}
while True:
new = input('Say something to HAL: ')
check = [word for words in knowledge.values() for word in words]  # flatten every known phrase across all keys
if new.lower() not in check:
key = input('Define {} for me: '.format(new))
knowledge.setdefault(key.lower(), [])
knowledge[key.lower()].append(new.lower())
else:
for k, v in knowledge.items():
if new.lower() in v:
print(choice(v).title())
```
>
>
> ```
> Say something to HAL: hi
> Hi
> Say something to HAL: hey
> Define hey for me: hi
> Say something to HAL: hey
> Hey
> Say something to HAL: hey
> Hi
>
> ```
>
> | You could do something like this:
```
knowledge = {"hi": ["hi"]}
```
And when your AI learns that `new_word` means the same as `"hi"`:
```
knowledge["hi"].append(new_word)
```
So now, if you say hi to your AI (this uses the `random` module):
```
import random

print(random.choice(knowledge["hi"]))
``` | 723 |
57,655,112 | I want to remove some unwanted tags/images from various repositories of an Azure Container Registry, and I want to do all of this programmatically. For example, what I need is:
* Authenticate with ACR
* List all repositories
* List all tags of each repository
* Remove unwanted images with particular tags.
Normally these operations can be done using Azure CLI and `az acr` commands. Maybe I can create a PowerShell script with `az acr` commands to accomplish this.
But can I do this with Python? Is there something like the Graph API for these operations?
I found this API for ACR, but it only allows deleting the entire registry; it doesn't allow repository-specific operations:
<https://learn.microsoft.com/en-us/rest/api/containerregistry/>
I tried with the Docker Registry API:
<https://docs.docker.com/registry/spec/api/>
```
#!/bin/bash
export registry="myregistry.azurecr.io"
export user="myusername"
export password="mypassword"
export operation="/v2/_catalog"
export credentials=$(echo -n "$user:$password" | base64 -w 0)
export catalog=$(curl -s -H "Authorization: Basic $credentials" https://$registry$operation)
echo "Catalog"
echo $catalog
```
But an error is returned all the time:
```
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":[{"Type":"registry","Name":"catalog","Action":"*"}]}]}
```
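For reference, the equivalent request from Python with `requests` (a sketch; it assumes the registry's admin account is enabled and it exercises the same endpoint with the same basic auth):

```
import requests

registry = "myregistry.azurecr.io"
resp = requests.get(
    "https://{}/v2/_catalog".format(registry),
    auth=("myusername", "mypassword"))  # ACR admin credentials
print(resp.status_code, resp.json())
```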
How can I properly authenticate with ACR before using the Docker Registry API? | 2019/08/26 | [
"https://Stackoverflow.com/questions/57655112",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10259516/"
] | Would this work?
```py
>>> for header in soup.find_all('h3'):
... if header.get_text() == '64-bit deb for Ubuntu/Debian':
... header.find_next_sibling()
...
<table align="center" border="1" width="600">
:
</table>
``` | With bs4 4.7.1+ you can use `:contains` with the adjacent sibling (`+`) combinator. No need for a loop.
```
from bs4 import BeautifulSoup as bs
html = '''<h3>Windows 64-bit</h3>
<table width="600" border="1" align="center">
:
</table>
:
<h3>64-bit deb for Ubuntu/Debian</h3>
<table width="600" border="1" align="center">
:'''
soup = bs(html, 'lxml')
table = soup.select_one('h3:contains("64-bit deb for Ubuntu/Debian") + table')
``` | 724 |