qid int64 | question string | date string | metadata sequence | response_j string | response_k string | __index_level_0__ int64
---|---|---|---|---|---|---
41,850,558 | I have a model called "document-detail-sample", and when you call it with a GET, something like **GET** `https://url/document-detail-sample/`, then you get every "document-detail-sample".
Inside the model is the id. So, if you want every Id, you could just "iterate" on the list and ask for the id. Easy.
But... the front-end Developers don't want to do it :D they say it's too much work...
So, I gotta return the id list. :D
I was thinking something like **GET** `https://url/document-detail-sample/id-list`
But I don't know how to return just a list. I read [this post](https://stackoverflow.com/questions/27647871/django-python-how-to-get-a-list-of-ids-from-a-list-of-objects) and I know how to get the id\_list in the backend. But I don't know what I should implement to just return a list at that URL...
The view that I have is pretty simple:
```
class DocumentDetailSampleViewSet(viewsets.ModelViewSet):
    queryset = DocumentDetailSample.objects.all()
    serializer_class = DocumentDetailSampleSerializer
```
and the url is so:
```
router.register(r'document-detail-sample', DocumentDetailSampleViewSet)
```
so:
**1**- is it a good idea to do it with a URL like `.../document-detail-sample/id-list`?
**2**- if yes, how can I do it?
**3**- if not, what should I do then? | 2017/01/25 | [
"https://Stackoverflow.com/questions/41850558",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4050960/"
] | You could use the `@list_route` decorator (note: in DRF 3.9+ this was deprecated in favor of `@action(detail=False)`)
```
from rest_framework.decorators import detail_route, list_route
from rest_framework.response import Response
class DocumentDetailSampleViewSet(viewsets.ModelViewSet):
    queryset = DocumentDetailSample.objects.all()
    serializer_class = DocumentDetailSampleSerializer

    @list_route()
    def id_list(self, request):
        q = self.get_queryset().values('id')
        return Response(list(q))
```
This decorator allows you to provide an additional endpoint with the same name as the method: `/document-detail-sample/id_list/`
[reference to docs about extra actions in a viewset](http://www.django-rest-framework.org/api-guide/viewsets/#marking-extra-actions-for-routing) | Assuming you don't need pagination, just override the `list` method like so
```
from rest_framework.response import Response

class DocumentDetailSampleViewSet(viewsets.ModelViewSet):
    queryset = DocumentDetailSample.objects.all()
    serializer_class = DocumentDetailSampleSerializer

    def list(self, request):
        return Response(self.get_queryset().values_list("id", flat=True))
``` | 0 |
14,585,722 | Suppose you have a python function, as so:
```
def foo(spam, eggs, ham):
    pass
```
You could call it using the positional arguments only (`foo(1, 2, 3)`), but you could also be explicit and say `foo(spam=1, eggs=2, ham=3)`, or mix the two (`foo(1, 2, ham=3)`).
Is it possible to get the same kind of functionality with argparse? I have a couple of positional arguments with keywords, and I don't want to define all of them when using just one. | 2013/01/29 | [
"https://Stackoverflow.com/questions/14585722",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/731881/"
] | You can do something like this:
```
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('foo',nargs='?',default=argparse.SUPPRESS)
parser.add_argument('--foo',dest='foo',default=None)
parser.add_argument('bar',nargs='?',default=argparse.SUPPRESS)
parser.add_argument('--bar',dest='bar',default=None)
parser.add_argument('baz',nargs='?',default=argparse.SUPPRESS)
parser.add_argument('--baz',dest='baz',default=None)
print parser.parse_args()
```
which works mostly as you describe:
```
temp $ python test.py 1 2 --baz=3
Namespace(bar='2', baz='3', foo='1')
temp $ python test.py --baz=3
Namespace(bar=None, baz='3', foo=None)
temp $ python test.py --foo=2 --baz=3
Namespace(bar=None, baz='3', foo='2')
temp $ python test.py 1 2 3
Namespace(bar='2', baz='3', foo='1')
```
python would give you an error for the next one in the function call analogy, but argparse will allow it:
```
temp $ python test.py 1 2 3 --foo=27.5
Namespace(bar='2', baz='3', foo='27.5')
```
You could probably work around that by using [mutually exclusive groupings](http://docs.python.org/2.7/library/argparse.html#mutual-exclusion) | I believe this is what you are looking for [Argparse defaults](http://docs.python.org/dev/library/argparse.html#default) | 1 |
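A Python 3 sketch of the paired positional/optional trick from the first answer, reduced to a single hypothetical argument name (`spam`) so the mechanics are easy to see. The positional is made optional with `nargs='?'` and suppressed when absent, so the flag's value or default is what survives in the namespace:

```python
import argparse

parser = argparse.ArgumentParser()
# The positional is optional (nargs='?'); SUPPRESS means it writes nothing
# to the namespace when absent, so the --spam flag's value/default is kept.
parser.add_argument("spam", nargs="?", default=argparse.SUPPRESS)
parser.add_argument("--spam", dest="spam", default=None)

print(parser.parse_args(["1"]))            # positional form
print(parser.parse_args(["--spam", "2"]))  # keyword form
print(parser.parse_args([]))               # neither given: falls back to None
```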
72,950,868 | I would like to add a closing parenthesis to strings that have an open parenthesis but are missing a closing parenthesis.
For instance, I would like to modify "The dog walked (ABC in the park" to be "The dog walked (ABC) in the park".
I found a similar question and solution but it is in Python ([How to add a missing closing parenthesis to a string in Python?](https://stackoverflow.com/questions/67400960/how-to-add-a-missing-closing-parenthesis-to-a-string-in-python)). I have tried to modify the code to be used in R but to no avail. Can someone help me with this please?
I have tried modifying the original Python solution (R doesn't recognise the "r" prefix, and "\" has to be replaced by "\\"), but this solution doesn't work properly and does not capture the string preceding the bracket I would like to close:
```
text = "The dog walked (ABC in the park"
str_replace_all(text, '\\([A-Z]+(?!\\))\\b', '\\)')
text
```
The python solution that works is as follows:
```
import re

text = "The dog walked (ABC in the park"
text = re.sub(r'(\([A-Z]+(?!\))\b)', r"\1)", text)
print(text)
``` | 2022/07/12 | [
"https://Stackoverflow.com/questions/72950868",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19533566/"
] | Try this
```
stringr::str_replace_all(text, '\\([A-Z]+(?!\\))\\b', '\\0\\)')
```
* output
```
"The dog walked (ABC) in the park"
``` | Not a one liner, but it does the trick and is (hopefully!) intuitive.
```
library(stringr)
add_brackets = function(text) {
  brackets = str_extract(text, "\\([:alpha:]+") # finds the open bracket and any following letters
  brackets_new = paste0(brackets, ")") # adds in the closing brackets
  str_replace(text, paste0("\\", brackets), brackets_new) # replaces the unclosed string with the closed one
}
```
```
> add_brackets(text)
[1] "The dog walked (ABC) in the park"
``` | 4 |
67,609,973 | I chose to use Python 3.8.1 Azure ML in Azure Machine Learning studio, but when I run the command
`!python train.py`, it uses Anaconda Python 3.6.9. When I downloaded Python 3.8 and ran the command `!python38 train.py` in the same dir as before, the response was `python3.8: can't open file`.
Any idea?
Also, Python 3 in Azure is always busy, without anything running from my side.
Thank you. | 2021/05/19 | [
"https://Stackoverflow.com/questions/67609973",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14915505/"
] | You should try adding a new Python 3.8 Kernel. Here and instructions how to add a new Kernel: <https://learn.microsoft.com/en-us/azure/machine-learning/how-to-access-terminal#add-new-kernels> | Yeah I understand your pain point, and I agree that calling bash commands in a notebook cell should execute in the same conda environment as the one associated with the selected kernel of the notebook. I think this is bug, I'll flag it to the notebook feature team, but I encourage you to open a priority support ticket if you want to ensure that your problem is addressed! | 7 |
58,483,706 | I am new to Python and trying my hand at certain problems. I have a situation where I have 2 dataframes which I want to combine to achieve my desired dataframe.
I have tried .merge and .join, neither of which was able to produce my desired outcome.
Let us suppose I have the below scenario:
```
lt = list(['a','b','c','d','a','b','a','b'])
df = pd.DataFrame(columns = lt)
data = [[10,11,12,12], [15,14,12,10]]
df1 = pd.DataFrame(data, columns = ['a','b','c','d'])
```
I want df and df1 to be combined and get desired dataframe as df2 as:
```
a b c d a b a b
0 10 11 12 12 10 11 10 11
1 15 14 12 10 15 14 15 14
``` | 2019/10/21 | [
"https://Stackoverflow.com/questions/58483706",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11378087/"
] | If you don't mind the order of the columns changing, this is just a right join. The only caveat is that those are performed on rows rather than columns, so you need to transpose first:
```py
In [44]: df.T.join(df1.T, how='right').T
Out[44]:
a a a b b b c d
0 10 10 10 11 11 11 12 12
1 15 15 15 14 14 14 12 10
``` | Use [`concat()`](https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html)
```py
pd.concat([df, df1], axis=0, join='inner', sort=False)
a b c d a b a b
0 10 11 12 12 10 11 10 11
1 15 14 12 10 15 14 15 14
``` | 8 |
14,187,973 | Similar question (related to Python 2: [Python: check if method is static](https://stackoverflow.com/questions/8727059/python-check-if-method-is-static))
Let's consider the following class definition:
```
class A:
    def f(self):
        return 'this is f'

    @staticmethod
    def g():
        return 'this is g'
```
In Python 3 there is no `instancemethod` anymore; everything is a function, so the answer related to Python 2 will not work anymore.
As I said, everything is a function, so we can call `A.f(0)`, but of course we cannot call `A.f()` (argument mismatch). But if we make an instance `a=A()` and call `a.f()`, Python passes `self` to the function `A.f` as the first argument. Calling `a.g()` does not send it, so there has to be a way to test whether a method is a staticmethod or not.
So can we check in Python 3 if a method was declared as `static` or not?
"https://Stackoverflow.com/questions/14187973",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/889902/"
] | ```
class A:
    def f(self):
        return 'this is f'

    @staticmethod
    def g():
        return 'this is g'

print(type(A.__dict__['g']))  # <class 'staticmethod'>
print(type(A.g))              # <class 'function'>
``` | I needed this solution and wrote the following based on the answer from @root
```
def is_method_static(cls, method_name):
    # http://stackoverflow.com/questions/14187973/python3-check-if-method-is-static
    for c in cls.mro():
        if method_name in c.__dict__:
            return isinstance(c.__dict__[method_name], staticmethod)
    raise RuntimeError("Unable to find %s in %s" % (method_name, cls.__name__))
``` | 10 |
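On Python 3.2+, the standard library can do the MRO walk from the answer above for you: `inspect.getattr_static` retrieves the attribute without triggering the descriptor protocol, so the `staticmethod` wrapper survives. A minimal sketch:

```python
import inspect

class A:
    def f(self):
        return 'this is f'

    @staticmethod
    def g():
        return 'this is g'

def is_method_static(cls, method_name):
    # getattr_static bypasses the descriptor protocol and handles MRO lookup,
    # so the raw staticmethod object is returned rather than a plain function
    return isinstance(inspect.getattr_static(cls, method_name), staticmethod)

print(is_method_static(A, 'g'))  # True
print(is_method_static(A, 'f'))  # False
```

Because `getattr_static` searches the MRO itself, the check also works on subclasses that inherit the staticmethod.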
46,132,431 | I have written code to generate numbers from 0500000000 to 0500000100:
```
def generator(nums):
    count = 0
    while count < 100:
        gg = print('05', count, sep='')
        count += 1

g = generator(10)
```
As I use Linux, I thought I might be able to use this command: `python pythonfilename.py >> file.txt`
Yet, I get an error.
So, before `g = generator(10)` I added:
```
with open('file.txt', 'w') as f:
    f.write(gg)
    f.close()
```
but I got an error:
>
> TypeError: write() argument must be str, not None
>
>
>
Any solution? | 2017/09/09 | [
"https://Stackoverflow.com/questions/46132431",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5548783/"
] | Here I've assumed we're laying out two general images, rather than plots. If your images are actually plots you've created, then you can lay them out as a single image for display using `gridExtra::grid.arrange` for grid graphics or `par(mfrow=c(1,2))` for base graphics and thereby avoid the complications of laying out two separate images.
I'm not sure if there's a "natural" way to left justify the left-hand image and right-justify the right-hand image. As a hack, you could add a blank "spacer" image to separate the two "real" images and set the widths of each image to match paper-width minus 2\*margin-width.
Here's an example where the paper is assumed to be 8.5" wide and the right and left margins are each 1":
```
---
output: pdf_document
geometry: margin=1in
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = FALSE)
library(ggplot2)
library(knitr)
# Create a blank image to use for spacing
spacer = ggplot() + theme_void() + ggsave("spacer.png")
```
```{r, out.width=c('2.75in','1in','2.75in')}
include_graphics(c("Rplot59.png","spacer.png", "Rplot60.png"))
```
```
And here's what the document looks like:
[![enter image description here](https://i.stack.imgur.com/jiqHx.png)](https://i.stack.imgur.com/jiqHx.png) | Put them in the same code chunk and do not use align. Let them use html.
THis has worked for me.
```
````{r echo=FALSE, fig.height=3.0, fig.width=3.0}
#type your code here
ggplot(anscombe, aes(x=x1 , y=y1)) + geom_point() + geom_smooth(method="lm") +
  ggtitle("Results for x1 and y1 ")
ggplot(anscombe, aes(x=x2 , y=y2)) + geom_point() +geom_smooth(method="lm") +
ggtitle("Results for x2 and y2 ")
ggplot(anscombe, aes(x=x3 , y=y3)) + geom_point() +geom_smooth(method="lm") +
ggtitle("Results for x3 and y3 ")
ggplot(anscombe, aes(x=x4 , y=y4)) + geom_point() +geom_smooth(method="lm") +
ggtitle("Results for x4 and y4 ")
````
``` | 13 |
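Returning to the Python question at the top of this row: the `TypeError` happens because `print()` returns `None`, so `gg` is never a string. A minimal sketch of building the strings instead and writing them out (the temp-file path is my choice for illustration):

```python
import os
import tempfile

def generator():
    # collect the formatted numbers instead of printing them
    # (print() returns None, which is why f.write(gg) failed)
    return ['05{}'.format(count) for count in range(100)]

path = os.path.join(tempfile.gettempdir(), 'file.txt')
with open(path, 'w') as f:
    f.write('\n'.join(generator()) + '\n')

with open(path) as f:
    print(f.readline().strip())  # 050
```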
54,007,542 | input is like:
```
text="""Hi Team from the following Server :
<table border="0" cellpadding="0" cellspacing="0" style="width:203pt">
<tbody>
<tr>
<td style="height:15.0pt; width:203pt">ratsuite.sby.ibm.com</td>
</tr>
</tbody>
</table>
<p> </p>
<p>Please archive the following Project Areas :</p>
<table border="0" cellpadding="0" cellspacing="0" style="width:1436pt">
<tbody>
<tr>
<td style="height:15.0pt; width:505pt">UNIT TEST - IBM OPAL 3.3 RC3</td>
<td style="width:328pt">https://ratsuite.sby.ibm.com:9460/ccm</td>
<td style="width:603pt">https://ratsuite.sby.ibm.com:9460/ccm/process/project-areas/_ckR-QJiUEeOXmZKjKhPE4Q</td>
</tr>
</tbody>
</table>"""
```
In the output I want these 2 lines only; I want to remove the table tags and their data in Python:
Hi Team from the following Server :
Please archive the following Project Areas : | 2019/01/02 | [
"https://Stackoverflow.com/questions/54007542",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9901523/"
] | Use `BeautifulSoup` to parse HTML
**Ex:**
```
from bs4 import BeautifulSoup
text="""<p>Hi Team from the following Server :</p>
<table border="0" cellpadding="0" cellspacing="0" style="width:203pt">
<tbody>
<tr>
<td style="height:15.0pt; width:203pt">ratsuite.sby.ibm.com</td>
</tr>
</tbody>
</table>
<p> </p>
<p>Please archive the following Project Areas :</p>
<table border="0" cellpadding="0" cellspacing="0" style="width:1436pt">
<tbody>
<tr>
<td style="height:15.0pt; width:505pt">UNIT TEST - IBM OPAL 3.3 RC3</td>
<td style="width:328pt">https://ratsuite.sby.ibm.com:9460/ccm</td>
<td style="width:603pt">https://ratsuite.sby.ibm.com:9460/ccm/process/project-areas/_ckR-QJiUEeOXmZKjKhPE4Q</td>
</tr>
</tbody>
</table>"""
soup = BeautifulSoup(text, "html.parser")
for p in soup.find_all("p"):
    print(p.text)
```
**Output:**
```
Hi Team from the following Server :
Please archive the following Project Areas :
``` | You can use `HTMLParser` as demonstrated below:
```
from HTMLParser import HTMLParser
s = \
"""
<html>
<p>Hi Team from the following Server :</p>
<table border="0" cellpadding="0" cellspacing="0" style="width:203pt">
<tbody>
<tr>
<td style="height:15.0pt; width:203pt">ratsuite.sby.ibm.com</td>
</tr>
</tbody>
</table>
<p> </p>
<p>Please archive the following Project Areas :</p>
<table border="0" cellpadding="0" cellspacing="0" style="width:1436pt">
<tbody>
<tr>
<td style="height:15.0pt; width:505pt">UNIT TEST - IBM OPAL 3.3 RC3</td>
<td style="width:328pt">https://ratsuite.sby.ibm.com:9460/ccm</td>
<td style="width:603pt">https://ratsuite.sby.ibm.com:9460/ccm/process/project-areas/_ckR-QJiUEeOXmZKjKhPE4Q</td>
</tr>
</tbody>
</table>
</html>
"""
# create a subclass and override the handler methods
class MyHTMLParser(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self._last_tag = ''

    def handle_starttag(self, tag, attrs):
        #print "Encountered a start tag:", tag
        self._last_tag = tag

    def handle_endtag(self, tag):
        #print "Encountered an end tag :", tag
        self._last_tag = ''

    def handle_data(self, data):
        #print "Encountered some data :", data
        if self._last_tag == 'p':
            print("<%s> tag data: %s" % (self._last_tag, data))
# instantiate the parser and feed it some HTML
parser = MyHTMLParser()
parser.feed(s)
```
Output:
```
<p> tag data: Hi Team from the following Server :
<p> tag data: Please archive the following Project Areas :
``` | 14 |
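In Python 3 the module from the second answer lives at `html.parser`. A self-contained sketch of the same tag-tracking idea (the class and attribute names here are mine, not from the original answer):

```python
from html.parser import HTMLParser  # Python 3 location of the old HTMLParser module

class ParagraphExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._last_tag = ''
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        self._last_tag = tag

    def handle_endtag(self, tag):
        self._last_tag = ''

    def handle_data(self, data):
        # keep only non-blank text that sits directly inside a <p> tag
        if self._last_tag == 'p' and data.strip():
            self.paragraphs.append(data.strip())

parser = ParagraphExtractor()
parser.feed("<p>Hi Team from the following Server :</p>"
            "<table><tbody><tr><td>ratsuite.sby.ibm.com</td></tr></tbody></table>"
            "<p>Please archive the following Project Areas :</p>")
print(parser.paragraphs)
```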
38,776,104 | I would like to redirect the standard error and standard output of a Python script to the same output file. From the terminal I could use
```
$ python myfile.py &> out.txt
```
to do the same task that I want, but I need to do it from the Python script itself.
I looked into the questions [Redirect subprocess stderr to stdout](https://stackoverflow.com/questions/11495783/redirect-subprocess-stderr-to-stdout), [How to redirect stderr in Python?](https://stackoverflow.com/questions/1956142/how-to-redirect-stderr-in-python), and Example 10.10 from [here](http://www.diveintopython.net/scripts_and_streams/stdin_stdout_stderr.html), and then I tried the following:
```
import sys
fsock = open('out.txt', 'w')
sys.stdout = sys.stderr = fsock
print "a"
```
which rightly prints the letter "a" in the file out.txt; however, when I try the following:
```
import sys
fsock = open('out.txt', 'w')
sys.stdout = sys.stderr = fsock
print "a # missing end quote, will give error
```
I get the error message "SyntaxError ..." on the terminal, but not in the file out.txt. What do I need to do to send the SyntaxError to the file out.txt? I do not want to write an Exception, because in that case I have to write too many Exceptions in the script. I am using Python 2.7.
Update: As pointed out in the answers and comments below, that SyntaxError will always output to screen, I replaced the line
```
print "a # missing end quote, will give error
```
by
```
print 1/0 # Zero division error
```
The ZeroDivisionError is output to file, as I wanted to have it in my question. | 2016/08/04 | [
"https://Stackoverflow.com/questions/38776104",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1461999/"
] | This works
```
sys.stdout = open('out.log', 'w')
sys.stderr = sys.stdout
``` | A SyntaxError in a Python file like the above is raised before your program even begins to run:
Python files are compiled just like in any other compiled language - if the parser or compiler can't find sense in your Python file, no executable bytecode is generated, therefore the program does not run.
The correct way to have an exception generated on purpose in your code - from simple test cases like yours, up to implementing complex flow control patterns - is to use the Python statement `raise`.
Just leave your print there, and a line like this at the end:
```
raise Exception
```
Then you can see that your trick will work.
Your program could fail at runtime in many other ways without an explicit raise, like, if you force a division by 0, or simply try to use an unassigned (and therefore "undeclared") variable - but a deliberate SyntaxError will have the effect that the program never runs to start with - not even the first few lines. | 17 |
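A self-contained sketch of the redirection, using an `io.StringIO` in place of `out.txt` so the captured text is easy to inspect, and a `ZeroDivisionError` (a runtime error) rather than a `SyntaxError` (which is raised at compile time and can never be captured this way):

```python
import io
import sys
import traceback

fsock = io.StringIO()              # stand-in for open('out.txt', 'w')
old_stdout, old_stderr = sys.stdout, sys.stderr
sys.stdout = sys.stderr = fsock
try:
    print("a")
    1 / 0                          # runtime error: happens after redirection
except ZeroDivisionError:
    traceback.print_exc()          # traceback goes to the redirected stream
finally:
    sys.stdout, sys.stderr = old_stdout, old_stderr

captured = fsock.getvalue()
print("a" in captured, "ZeroDivisionError" in captured)  # True True
```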
57,843,695 | I haven't changed my system configuration, But I'm spotting this error for the first time today.
I've reported it here: <https://github.com/jupyter/notebook/issues/4871>
```
> jupyter notebook
[I 10:44:20.102 NotebookApp] JupyterLab extension loaded from /usr/local/anaconda3/lib/python3.7/site-packages/jupyterlab
[I 10:44:20.102 NotebookApp] JupyterLab application directory is /usr/local/anaconda3/share/jupyter/lab
[I 10:44:20.104 NotebookApp] Serving notebooks from local directory: /Users/pi
[I 10:44:20.104 NotebookApp] The Jupyter Notebook is running at:
[I 10:44:20.104 NotebookApp] http://localhost:8888/?token=586797fb9049c0faea24f2583c4de32c08d45c89051fb07d
[I 10:44:20.104 NotebookApp] or http://127.0.0.1:8888/?token=586797fb9049c0faea24f2583c4de32c08d45c89051fb07d
[I 10:44:20.104 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 10:44:20.110 NotebookApp]
To access the notebook, open this file in a browser:
file:///Users/pi/Library/Jupyter/runtime/nbserver-65385-open.html
Or copy and paste one of these URLs:
http://localhost:8888/?token=586797fb9049c0faea24f2583c4de32c08d45c89051fb07d
or http://127.0.0.1:8888/?token=586797fb9049c0faea24f2583c4de32c08d45c89051fb07d
[E 10:44:21.457 NotebookApp] Could not open static file ''
[W 10:44:21.512 NotebookApp] 404 GET /static/components/react/react-dom.production.min.js (::1) 9.02ms referer=http://localhost:8888/tree?token=BLA
[W 10:44:21.548 NotebookApp] 404 GET /static/components/react/react-dom.production.min.js (::1) 0.99ms referer=http://localhost:8888/tree?token=BLA
Set
```
Looks like this issue was fixed in `Jupyter 6.0.1`
So the question becomes: can I force-install `jupyter 6.0.1`?
As the initial question has now provoked a second question, I now ask this new question here: [How to force `conda` to install the latest version of `jupyter`?](https://stackoverflow.com/questions/57843733/how-to-force-conda-to-install-the-latest-version-of-jupyter)
Alternatively I can manually provide the missing file, but I'm not sure *where*. I've asked here: [Where does Jupyter install site-packages on macOS?](https://stackoverflow.com/questions/57843888/where-does-jupyter-install-site-packages-on-macos)
Research:
=========
<https://github.com/jupyter/notebook/pull/4772> *"add missing react-dom js to package data #4772"* on 6 Aug 2019
>
> minrk added this to the 6.0.1 milestone on 18 Jul
>
>
>
Ok, so can I get Jupyter Notebook 6.0.1?
`brew cask install anaconda` downloads `~/Library/Caches/Homebrew/downloads/{LONG HEX}--Anaconda3-2019.07-MacOSX-x86_64` which is July, and `conda --version` reports `conda 4.7.10`. But this is for `Anaconda` which is the Package *Manager*.
```
> conda list | grep jupy
jupyter 1.0.0 py37_7
jupyter_client 5.3.1 py_0
jupyter_console 6.0.0 py37_0
jupyter_core 4.5.0 py_0
jupyterlab 1.0.2 py37hf63ae98_0
jupyterlab_server 1.0.0 py_0
```
So that's a bit confusing. No `jupyter notebook` here.
```
> which jupyter
/usr/local/anaconda3/bin/jupyter
> jupyter --version
jupyter core : 4.5.0
jupyter-notebook : 6.0.0
qtconsole : 4.5.1
ipython : 7.6.1
ipykernel : 5.1.1
jupyter client : 5.3.1
jupyter lab : 1.0.2
nbconvert : 5.5.0
ipywidgets : 7.5.0
nbformat : 4.4.0
traitlets : 4.3.2
```
Ok, so it appears `jupyter-notebook` is in `jupyter` which is maintained by Anaconda.
Can we update this?
<https://jupyter.readthedocs.io/en/latest/projects/upgrade-notebook.html>
```
> conda update jupyter
:
```
Alas `jupyter --version` is still `6.0.0` | 2019/09/08 | [
"https://Stackoverflow.com/questions/57843695",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/435129/"
I fixed this by upgrading jupyter with both pip and pip3 (just to be safe), using both
>
> `pip install --upgrade jupyter`
>
>
>
and
>
> `pip3 install --upgrade jupyter --no-cache-dir`
>
>
>
I believe you can do this in the terminal as well as in conda's terminal (since conda envs also have pip) | As per [Where does Jupyter install site-packages on macOS?](https://stackoverflow.com/questions/57843888/where-does-jupyter-install-site-packages-on-macos), I locate where on my system `jupyter` is searching for this missing file:
```
> find / -path '*/static/components' 2>/dev/null
/usr/local/anaconda3/pkgs/notebook-6.0.0-py37_0/lib/python3.7/site-packages/notebook/static/components
/usr/local/anaconda3/lib/python3.7/site-packages/notebook/static/components
```
And as per <https://github.com/jupyter/notebook/pull/4772#issuecomment-515794823>, if I download that file and deposit it in the second location, i.e. creating:
```
/usr/local/anaconda3/lib/python3.7/site-packages/notebook/static/components/react/react-dom.production.min.js
```
... now `jupyter notebook` launches without errors.
(*NOTE: Being cautious I have also copied it into the first location. But that doesn't seem to have any effect.*) | 18 |
44,175,800 | Simple question: given a string
```
string = "Word1 Word2 Word3 ... WordN"
```
is there a pythonic way to do this?
```
firstWord = string.split(" ")[0]
otherWords = string.split(" ")[1:]
```
Like an unpacking or something?
Thank you | 2017/05/25 | [
"https://Stackoverflow.com/questions/44175800",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2131783/"
] | Since Python 3 and [PEP 3132](https://www.python.org/dev/peps/pep-3132/), you can use extended unpacking.
This way, you can unpack arbitrary string containing any number of words. The first will be stored into the variable `first`, and the others will belong to the list (possibly empty) `others`.
```
first, *others = string.split()
```
Also, note that default delimiter for `.split()` is a space, so you do not need to specify it explicitly. | From [Extended Iterable Unpacking](https://www.python.org/dev/peps/pep-3132/).
Many algorithms require splitting a sequence in a "first, rest" pair. If you're using Python 2.x, you need to do this:
```
seq = string.split()
first, rest = seq[0], seq[1:]
```
which is replaced by the cleaner, and probably more efficient, version in `Python3.x`:
```
first, *rest = seq
```
For more complex unpacking patterns, the new syntax looks even cleaner, and the clumsy index handling is not necessary anymore. | 19 |
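Both answers can be checked in a few lines; note that the starred name always receives a (possibly empty) list, so a single-word string does not raise:

```python
s = "Word1 Word2 Word3 WordN"

first_word, *other_words = s.split()
print(first_word)   # Word1
print(other_words)  # ['Word2', 'Word3', 'WordN']

# a one-word string leaves the starred target empty instead of raising
first_word, *other_words = "Word1".split()
print(other_words)  # []
```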
28,717,067 | I am trying to place a condition after the for loop. It should print the word "Available" if the number of retrieved rows is not zero; however, if I enter a value which is not stored in my database, it should return a message. My problem here is that, if I input a value that isn't stored in my database, it does not go to the else statement. I'm new to this. What would be my mistake in this function?
```
def search(title):
    query = "SELECT * FROM books WHERE title = %s"
    entry = (title,)
    try:
        conn = mysql.connector.connect(user='root', password='', database='python_mysql') # connect to the database server
        cursor = conn.cursor()
        cursor.execute(query, entry)
        rows = cursor.fetchall()
        for row in rows:
            if row != 0:
                print('Available')
            else:
                print('No available copies of the said book in the library')
    except Error as e:
        print(e)
    finally:
        cursor.close()
        conn.close()

def main():
    title = input("Enter book title: ")
    search(title)

if __name__ == '__main__':
    main()
``` | 2015/02/25 | [
"https://Stackoverflow.com/questions/28717067",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4529171/"
] | Quite apart from the 0/NULL confusion, your logic is wrong. If there are no matching rows, you won't get a 0 as the value of a row; in fact you won't get any rows at all, and you will never even get into the for loop.
A much better way to do this would be simply run a COUNT query, get the single result with `fetchone()`, and check that directly.
```
query = "SELECT COUNT(*) FROM books WHERE title = %s"
entry = (title,)
try:
    conn = mysql.connector.connect(user='root', password='', database='python_mysql') # connect to the database server
    cursor = conn.cursor()
    cursor.execute(query, entry)
    result = cursor.fetchone()
    if result[0] != 0:  # fetchone() returns a tuple, so check its first column
        print('Available')
    else:
        print('No available copies of the said book in the library')
``` | In Python you should check for `None`, not `NULL`. In your code you can just check the object: if it is not None, control goes inside the `if`, otherwise the `else` is executed
```
for row in rows:
    if row:
        print('Available')
    else:
        print('No available copies of the said book in the library')
```
`UPDATE after the author edited the question:`
Now in the for loop you should check the column value, not the whole `row`. If your column name is, say, `quantity`, then the `if` statement should be like this
```
if row["quantity"] != 0:
``` | 20 |
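The COUNT-based logic from the accepted answer can be exercised without a MySQL server by swapping in the standard library's `sqlite3` (the connection setup here is a stand-in; the shape of the check is the same, including the fact that `fetchone()` returns a one-element tuple):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT)")
conn.execute("INSERT INTO books VALUES ('Dune')")

def search(conn, title):
    cursor = conn.execute("SELECT COUNT(*) FROM books WHERE title = ?", (title,))
    (count,) = cursor.fetchone()  # fetchone() returns a tuple, not a bare int
    if count != 0:
        return 'Available'
    return 'No available copies of the said book in the library'

print(search(conn, 'Dune'))     # Available
print(search(conn, 'Missing'))  # No available copies of the said book in the library
```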
65,995,857 | I'm quite new to coding and I'm working on a math problem in python.
To solve it, I would like to extract the first 7 digits from each of one hundred 50-digit numbers in a string (take the first 7 digits, skip 43 digits, and then take the first 7 again). The numbers aren't separated in any way (just one long string).
Then I want to sum up those fifty seven-digit numbers which I have extracted.
How can I do this?
(I have written this code, but it only takes the first digit, I don't know any stepping/slicing methods to make it seven)
```py
number = """37107287533902102798797998220837590246510135740250463769376774900071264812489697007805041701826053874324986199524741059474233309513058123726617309629919422133635741615725224305633018110724061549082502306758820753934617117198031042104751377806324667689261670696623633820136378418383684178734361726757281128798128499794080654819315926216912758898327384427422891743252032192358942287679648767027218931847451445736001306439091167216856844588711603153276703864861058430254399396198289175936656867579349516217645714185656062950215722319658675507932419333164906352462741904929101432445813822663347944758178925758677183372176619637515905792397282455988384075820356532535939900840263356894883018945862822782880181199384826282014278194139940567587151170094390353986643728271126538299872407844730531901042935868651550600629586486153207527337195919142051725582971693888707715466499115593487603532921714970056938543700705768266846246214956500764717872944383776045328265410875682844319119063469403785521777929514536123272525000296071075082563815656710885258350721458765761724109764473391106072182652368772236360451742370690585186066044820762120981328786073396941281142660418086830619328460811191061556940512689692519343254517283886419180470492932150586425630494836246722164843507620172791803994469300473295634069115732444386908125794514089057706229429197107928209550376875256787730918625407449698445083303936821261833638482533015468619612434876768129753437594651580386287592878490201521685554828717201219257766954781828337579931036147403568564490955270978647975811672632010043689784255353992093183744149780686098448403098129077791799088218795327364475675590848030870869875513927118545170785441618524243206931503325995940689575653678210707492696653767632623544721069793950679652694742597709739166693763042633987085410526847082990852113994273657341161827603150012716537860736150108085700914993951255702819874600437535829035317434717326932123578154982629742552737307949537597651053059469660676831565743771
67401875275889028025717332296191766687138199318110487701902712526768027607800301367868099252546340106163286652636270218540497705585629946580636237993140746255962240744869082311749777923654662572469233228109171419143028819710328859780666976089293863828502533340334413065578016127815921815005561868836468420090470230530811728164304876237919698424872550366387845831148769693215490281042402013833512446218144177347063783299490636259666498587618221225225512486764533677201869716985443124195724099139590089523100588229554825530026352078153229679624948164195386821877476085327132285723110424803456124867697064507995236377742425354112916842768655389262050249103265729672370191327572567528565324825826546309220705859652229798860272258331913126375147341994889534765745501184957014548792889848568277260777137214037988797153829820378303147352772158034814451349137322665138134829543829199918180278916522431027392251122869539409579530664052326325380441000596549391598795936352974615218550237130764225512118369380358038858490341698116222072977186158236678424689157993532961922624679571944012690438771072750481023908955235974572318970677254791506150550495392297953090112996751986188088225875314529584099251203829009407770775672113067397083047244838165338735023408456470580773088295917476714036319800818712901187549131054712658197623331044818386269515456334926366572897563400500428462801835170705278318394258821455212272512503275512160354698120058176216521282765275169129689778932238195734329339946437501907836945765883352399886755061649651847751807381688378610915273579297013376217784275219262340194239963916804498399317331273132924185707147349566916674687634660915035914677504995186714302352196288948901024233251169136196266227326746080059154747183079839286853520694694454072476841822524674417161514036427982273348055556214818971426179103425986472045168939894221798260880768528778364618279934631376775430780936333301898264209010848802521674670883215120185883543223812876952786713296124747824645386369930090493103636
197638780396218407357239979422340623539380833965132740801111666627891981488087797941876876144230030984490851411606618262936828367647447792391803351109890697907148578694408955299065364044742557608365997664579509666024396409905389607120198219976047599490197230297649139826800329731560371200413779037855660850892521673093931987275027546890690370753941304265231501194809377245048795150954100921645863754710598436791786391670211874924319957006419179697775990283006991536871371193661495281130587638027841075444973307840789923115535562561142322423255033685442488917353448899115014406480203690680639606723221932041495354150312888033953605329934036800697771065056663195481234880673210146739058568557934581403627822703280826165707739483275922328459417065250945123252306082291880205877731971983945018088807242966198081119777158542502016545090413245809786882778948721859617721078384350691861554356628840622574736922845095162084960398013400172393067166682355524525280460972253503534226472524250874054075591789781264330331690"""
first_digits = list(number[::50])
first_digits_int = list(map(int, first_digits))
result = 0
for n in first_digits_int:
    result += n
print(result)
``` | 2021/02/01 | [
"https://Stackoverflow.com/questions/65995857",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15117090/"
] | Python allows you to iterate over a range with custom step sizes. So that should be allow you to do something like:
```py
your_list = []
for idx in range(0, len(string), 50):       # Indexes 0, 50, 100, so on
    first_seven_digits = string[idx:idx+7]  # Say, "1234567"
    str_to_int = int(first_seven_digits)    # Converts to the number 1234567
    your_list.append(str_to_int)            # Add the number to the list
your_sum = sum(your_list)                   # Find the sum
```
You store the numbers made up of those first 7 digits in a list, and finally, sum them up. | first of all your number string is 4999 characters long so you'll have to add one. secondly if you want to use numpy you could make a 100 by 50 array by reshaping the original 5000 long array. like this
```
arr = np.array(list(number)).reshape(100, 50)
```
than you can slice the arr in a way that the first 7 elements the arrays second axis and all of the first. like this
```
nums = arr[:, :7]
```
than you can just construct your result list by iterating over every element of nums and joining all the chars to a list like so and sum there integers together
```
res = sum([int("".join(n)) for n in nums])
```
so if we putt all that together we get
```
import numpy as np
number = """37107287533902102798797998228083759024651013574025046376937677490007126481248969700780504170182605387432498619952474105947423330951305812372661730962991942213363574161572522430563301811072406154908250230675882075393461711719803104210475137780632466768926167069662363382013637841838368417873436172675728112879812849979408065481931592621691275889832738442742289174325203219235894228767964876702721893184745144573600130643909116721685684458871160315327670386486105843025439939619828917593665686757934951621764571418565606295021572231965867550793241933316490635246274190492910143244581382266334794475817892575867718337217661963751590579239728245598838407582035653253593990084026335689488301894586282278288018119938482628201427819413994056758715117009439035398664372827112653829987240784473053190104293586865155060062958648615320752733719591914205172558297169388870771546649911559348760353292171497005693854370070576826684624621495650076471787294438377604532826541087568284431911906346940378552177792951453612327252500029607107508256381565671088525835072145876576172410976447339110607218265236877223636045174237069058518606604482076212098132878607339694128114266041808683061932846081119106155694051268969251934325451728388641918047049293215058642563049483624672216484350762017279180399446930047329563406911573244438690812579451408905770622942919710792820955037687525678773091862540744969844508330393682126183363848253301546861961243487676812975343759465158038628759287849020152168555482871720121925776695478182833757993103614740356856449095527097864797581167263201004368978425535399209318374414978068609844840309812907779179908821879532736447567559084803087086987551392711854517078544161852424320693150332599594068957565367821070749269665376763262354472106979395067965269474259770973916669376304263398708541052684708299085211399427365734116182760315001271653786073615010808570091499395125570281987460043753582903531743471732693212357815498262974255273730794953759765105305946966067683156574377
16740187527588902802571733229619176668713819931811048770190271252676802760780030136786809925254634010616328665263627021854049770558562994658063623799314074625596224074486908231174977792365466257246923322810917141914302881971032885978066697608929386382850253334033441306557801612781592181500556186883646842009047023053081172816430487623791969842487255036638784583114876969321549028104240201383351244621814417734706378329949063625966649858761822122522551248676453367720186971698544312419572409913959008952310058822955482553002635207815322967962494816419538682187747608532713228572311042480345612486769706450799523637774242535411291684276865538926205024910326572967237019132757256752856532482582654630922070585965222979886027225833191312637514734199488953476574550118495701454879288984856827726077713721403798879715382982037830314735277215803481445134913732266513813482954382919991818027891652243102739225112286953940957953066405232632538044100059654939159879593635297461521855023713076422551211836938035803885849034169811622207297718615823667842468915799353296192262467957194401269043877107275048102390895523597457231897067725479150615055049539229795309011299675198618808822587531452958409925120382900940777077567211306739708304724483816533873502340845647058077308829591747671403631980081871290118754913105471265819762333104481838626951545633492636657289756340050042846280183517070527831839425882145521227251250327551216035469812005817621652128276527516912968977893223819573432933994643750190783694576588335239988675506164965184775180738168837861091527357929701337621778427521926234019423996391680449839931733127313292418570714734956691667468763466091503591467750499518671430235219628894890102423325116913619626622732674608005915474718307983928685352069469445407247684182252467441716151403642798227334805555621481897142617910342598647204516893989422179826088076852877836461827993463137677543078093633330189826420901084880252167467088321512018588354322381287695278671329612474782464538636993009049310363
6197638780396218407357239979422340623539380833965132740801111666627891981488087797941876876144230030984490851411606618262936828367647447792391803351109890697907148578694408955299065364044742557608365997664579509666024396409905389607120198219976047599490197230297649139826800329731560371200413779037855660850892521673093931987275027546890690370753941304265231501194809377245048795150954100921645863754710598436791786391670211874924319957006419179697775990283006991536871371193661495281130587638027841075444973307840789923115535562561142322423255033685442488917353448899115014406480203690680639606723221932041495354150312888033953605329934036800697771065056663195481234880673210146739058568557934581403627822703280826165707739483275922328459417065250945123252306082291880205877731971983945018088807242966198081119777158542502016545090413245809786882778948721859617721078384350691861554356628840622574736922845095162084960398013400172393067166682355524525280460972253503534226472524250874054075591789781264330331690"""
arr = np.array(list(number)).reshape(100, 50)
nums = arr[:, :7]
res = sum([int("".join(n)) for n in nums])
print(res)
``` | 22 |
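Both answers above follow the same pattern: walk the long string in fixed-width chunks and sum an integer cut from the front of each chunk. The pattern can be sanity-checked on a tiny made-up string (hypothetical 10-digit chunks, first 2 digits of each; not part of either answer):

```python
number = "1234567890" "4200000000" "0987654321"  # three made-up 10-digit chunks
total = 0
for idx in range(0, len(number), 10):  # chunk starts: 0, 10, 20
    total += int(number[idx:idx + 2])  # first two digits: 12, 42, 09
print(total)  # 12 + 42 + 9 = 63
```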
21,307,128 | Since I have to mock a static method, I am using **Power Mock** to test my application.
My application uses *Camel 2.12*.
I define routes in *XML* that is read by the *camel-spring* context.
There were no issues when `Junit` alone was used for testing.
While using power mock, I get the error listed at the end of the post.
I have also listed the XML used.
*Camel* is unable to recognize any of its tags when power mock is used.
I wonder whether the byte-level manipulation done by PowerMock to mock static methods interferes with the Camel engine in some way. Let me know what could possibly be wrong.
PS:
The problem disappears if I do not use power mock.
+++++++++++++++++++++++++ Error +++++++++++++++++++++++++++++++++++++++++++++++++
```
[ main] CamelNamespaceHandler DEBUG Using org.apache.camel.spring.CamelContextFactoryBean as CamelContextBeanDefinitionParser
org.springframework.beans.factory.BeanDefinitionStoreException: Failed to parse JAXB element; nested exception is javax.xml.bind.UnmarshalException: unexpected element (uri:"http://camel.apache.org/schema/spring", local:"camelContext"). Expected elements are <{}aggregate>,<{}aop>,<{}avro>,<{}base64>,<{}batchResequencerConfig>,<{}bean>,<{}beanPostProcessor>,<{}beanio>,<{}bindy>,<{}camelContext>,<{}castor>,<{}choice>,<{}constant>,<{}consumerTemplate>,<{}contextScan>,<{}convertBodyTo>,<{}crypto>,<{}csv>,<{}customDataFormat>,<{}customLoadBalancer>,<{}dataFormats>,<{}delay>,<{}description>,<{}doCatch>,<{}doFinally>,<{}doTry>,<{}dynamicRouter>,<{}el>,<{}endpoint>,<{}enrich>,<{}errorHandler>,<{}export>,<{}expression>,<{}expressionDefinition>,<{}failover>,<{}filter>,<{}flatpack>,<{}from>,<{}groovy>,<{}gzip>,<{}header>,<{}hl7>,<{}idempotentConsumer>,<{}inOnly>,<{}inOut>,<{}intercept>,<{}interceptFrom>,<{}interceptToEndpoint>,<{}javaScript>,<{}jaxb>,<{}jibx>,<{}jmxAgent>,<{}json>,<{}jxpath>,<{}keyStoreParameters>,<{}language>,<{}loadBalance>,<{}log>,<{}loop>,<{}marshal>,<{}method>,<{}multicast>,<{}mvel>,<{}ognl>,<{}onCompletion>,<{}onException>,<{}optimisticLockRetryPolicy>,<{}otherwise>,<{}packageScan>,<{}pgp>,<{}php>,<{}pipeline>,<{}policy>,<{}pollEnrich>,<{}process>,<{}properties>,<{}property>,<{}propertyPlaceholder>,<{}protobuf>,<{}proxy>,<{}python>,<{}random>,<{}recipientList>,<{}redeliveryPolicy>,<{}redeliveryPolicyProfile>,<{}ref>,<{}removeHeader>,<{}removeHeaders>,<{}removeProperty>,<{}resequence>,<{}rollback>,<{}roundRobin>,<{}route>,<{}routeBuilder>,<{}routeContext>,<{}routeContextRef>,<{}routes>,<{}routingSlip>,<{}rss>,<{}ruby>,<{}sample>,<{}secureRandomParameters>,<{}secureXML>,<{}serialization>,<{}setBody>,<{}setExchangePattern>,<{}setFaultBody>,<{}setHeader>,<{}setOutHeader>,<{}setProperty>,<{}simple>,<{}soapjaxb>,<{}sort>,<{}spel>,<{}split>,<{}sql>,<{}sslContextParameters>,<{}sticky>,<{}stop>,<{}streamCaching>,<{}streamResequencerConfig>,<{}string>,<{}syslog>,<
{}template>,<{}threadPool>,<{}threadPoolProfile>,<{}threads>,<{}throttle>,<{}throwException>,<{}tidyMarkup>,<{}to>,<{}tokenize>,<{}topic>,<{}transacted>,<{}transform>,<{}unmarshal>,<{}validate>,<{}vtdxml>,<{}weighted>,<{}when>,<{}wireTap>,<{}xmlBeans>,<{}xmljson>,<{}xmlrpc>,<{}xpath>,<{}xquery>,<{}xstream>,<{}zip>,<{}zipFile> at org.apache.camel.spring.handler.CamelNamespaceHandler.parseUsingJaxb(CamelNamespaceHandler.java:169)
at org.apache.camel.spring.handler.CamelNamespaceHandler$CamelContextBeanDefinitionParser.doParse(CamelNamespaceHandler.java:307)
at org.springframework.beans.factory.xml.AbstractSingleBeanDefinitionParser.parseInternal(AbstractSingleBeanDefinitionParser.java:85)
at org.springframework.beans.factory.xml.AbstractBeanDefinitionParser.parse(AbstractBeanDefinitionParser.java:59)
at org.springframework.beans.factory.xml.NamespaceHandlerSupport.parse(NamespaceHandlerSupport.java:73)
at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1438)
at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1428)
at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.parseBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:185)
at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.doRegisterBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:139)
at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.registerBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:108)
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registerBeanDefinitions(XmlBeanDefinitionReader.java:493)
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:390)
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:334)
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:302)
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:174)
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:209)
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:180)
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:243)
at org.springframework.context.support.AbstractXmlApplicationContext.loadBeanDefinitions(AbstractXmlApplicationContext.java:127)
at org.springframework.context.support.AbstractXmlApplicationContext.loadBeanDefinitions(AbstractXmlApplicationContext.java:93)
at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:130)
at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:537)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:451)
at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:139)
at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:83)
at org.apache.camel.spring.SpringCamelContext.springCamelContext(SpringCamelContext.java:100)
at com.ericsson.bss.edm.integrationFramework.Context.<init>(Context.java:50)
at com.ericsson.bss.edm.integrationFramework.RouteEngine.main(RouteEngine.java:55)
at com.ericsson.bss.edm.integrationFramework.RouteEngineTest.testMultiRouteCondition(RouteEngineTest.java:174)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:66)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:312)
at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:86)
at org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:94)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:296)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:112)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:73)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:284)
at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:84)
at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:49)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:209)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:148)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:122)
at org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34)
at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:120)
at org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.run(JUnit4TestSuiteChunkerImpl.java:102)
at org.powermock.modules.junit4.common.internal.impl.AbstractCommonPowerMockRunner.run(AbstractCommonPowerMockRunner.java:53)
at org.powermock.modules.junit4.PowerMockRunner.run(PowerMockRunner.java:42)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:24)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.junit.runner.JUnitCore.run(JUnitCore.java:157)
at org.junit.runner.JUnitCore.run(JUnitCore.java:136)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:62)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:139)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
Caused by: javax.xml.bind.UnmarshalException: unexpected element (uri:"http://camel.apache.org/schema/spring", local:"camelContext"). Expected elements are <{}aggregate>,<{}aop>,<{}avro>,<{}base64>,<{}batchResequencerConfig>,<{}bean>,<{}beanPostProcessor>,<{}beanio>,<{}bindy>,<{}camelContext>,<{}castor>,<{}choice>,<{}constant>,<{}consumerTemplate>,<{}contextScan>,<{}convertBodyTo>,<{}crypto>,<{}csv>,<{}customDataFormat>,<{}customLoadBalancer>,<{}dataFormats>,<{}delay>,<{}description>,<{}doCatch>,<{}doFinally>,<{}doTry>,<{}dynamicRouter>,<{}el>,<{}endpoint>,<{}enrich>,<{}errorHandler>,<{}export>,<{}expression>,<{}expressionDefinition>,<{}failover>,<{}filter>,<{}flatpack>,<{}from>,<{}groovy>,<{}gzip>,<{}header>,<{}hl7>,<{}idempotentConsumer>,<{}inOnly>,<{}inOut>,<{}intercept>,<{}interceptFrom>,<{}interceptToEndpoint>,<{}javaScript>,<{}jaxb>,<{}jibx>,<{}jmxAgent>,<{}json>,<{}jxpath>,<{}keyStoreParameters>,<{}language>,<{}loadBalance>,<{}log>,<{}loop>,<{}marshal>,<{}method>,<{}multicast>,<{}mvel>,<{}ognl>,<{}onCompletion>,<{}onException>,<{}optimisticLockRetryPolicy>,<{}otherwise>,<{}packageScan>,<{}pgp>,<{}php>,<{}pipeline>,<{}policy>,<{}pollEnrich>,<{}process>,<{}properties>,<{}property>,<{}propertyPlaceholder>,<{}protobuf>,<{}proxy>,<{}python>,<{}random>,<{}recipientList>,<{}redeliveryPolicy>,<{}redeliveryPolicyProfile>,<{}ref>,<{}removeHeader>,<{}removeHeaders>,<{}removeProperty>,<{}resequence>,<{}rollback>,<{}roundRobin>,<{}route>,<{}routeBuilder>,<{}routeContext>,<{}routeContextRef>,<{}routes>,<{}routingSlip>,<{}rss>,<{}ruby>,<{}sample>,<{}secureRandomParameters>,<{}secureXML>,<{}serialization>,<{}setBody>,<{}setExchangePattern>,<{}setFaultBody>,<{}setHeader>,<{}setOutHeader>,<{}setProperty>,<{}simple>,<{}soapjaxb>,<{}sort>,<{}spel>,<{}split>,<{}sql>,<{}sslContextParameters>,<{}sticky>,<{}stop>,<{}streamCaching>,<{}streamResequencerConfig>,<{}string>,<{}syslog>,<{}template>,<{}threadPool>,<{}threadPoolProfile>,<{}threads>,<{}throttle>,<{}throwException>,<{}tidyMar
kup>,<{}to>,<{}tokenize>,<{}topic>,<{}transacted>,<{}transform>,<{}unmarshal>,<{}validate>,<{}vtdxml>,<{}weighted>,<{}when>,<{}wireTap>,<{}xmlBeans>,<{}xmljson>,<{}xmlrpc>,<{}xpath>,<{}xquery>,<{}xstream>,<{}zip>,<{}zipFile>
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.handleEvent(UnmarshallingContext.java:647)
at com.sun.xml.bind.v2.runtime.unmarshaller.Loader.reportError(Loader.java:258)
at com.sun.xml.bind.v2.runtime.unmarshaller.Loader.reportError(Loader.java:253)
at com.sun.xml.bind.v2.runtime.unmarshaller.Loader.reportUnexpectedChildElement(Loader.java:120)
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext$DefaultRootLoader.childElement(UnmarshallingContext.java:1052)
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext._startElement(UnmarshallingContext.java:483)
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.startElement(UnmarshallingContext.java:464)
at com.sun.xml.bind.v2.runtime.unmarshaller.InterningXmlVisitor.startElement(InterningXmlVisitor.java:75)
at com.sun.xml.bind.v2.runtime.unmarshaller.SAXConnector.startElement(SAXConnector.java:152)
at com.sun.xml.bind.unmarshaller.DOMScanner.visit(DOMScanner.java:244)
at com.sun.xml.bind.unmarshaller.DOMScanner.scan(DOMScanner.java:127)
at com.sun.xml.bind.unmarshaller.DOMScanner.scan(DOMScanner.java:105)
at com.sun.xml.bind.v2.runtime.BinderImpl.associativeUnmarshal(BinderImpl.java:161)
at com.sun.xml.bind.v2.runtime.BinderImpl.unmarshal(BinderImpl.java:132)
at org.apache.camel.spring.handler.CamelNamespaceHandler.parseUsingJaxb(CamelNamespaceHandler.java:167)
... 72 more
```
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++ Route.xml +++++++++++++++++++++++++++++++++++++++++++++
```
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://camel.apache.org/schema/spring
http://camel.apache.org/schema/spring/camel-spring.xsd">
  <camelContext xmlns="http://camel.apache.org/schema/spring">
    <route id="simpleroute">
      <from uri="ftp://admin@x.y.z.a:2121/?password=admin&noop=true&maximumReconnectAttempts=3&download=false&delay=2000&throwExceptionOnConnectFailed=true;"/>
      <to uri="file:/home/emeensa/NetBeansProjects/CamelFileCopier/output" />
    </route>
  </camelContext>
</beans>
```
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ | 2014/01/23 | [
"https://Stackoverflow.com/questions/21307128",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2345966/"
] | This error message usually means that your specified truststore can not be read. What I would check:
* Is the path correct? (I'm sure you checked this...)
* Does the user who started the JVM have enough access privileges to read the truststore?
* When do you set the system properties? Are they already set when the webservice is invoked?
* Perhaps another component has overridden the values. Are the system properties still set when the webservice is invoked?
* Does the truststore contain the Salesforce certificate, and is the file not corrupt (e.g. check with `keytool -list`)?
**Edit:**
* Don't use `System.setProperty` but set the options when starting the Java process with `-Djavax.net.ssl.XXX`. The reason for this advice is as follows: The IBM security framework may read the options **before** you set the property (e.g. in a `static` block of a class). Of course this is framework specific and may change from version to version. | ```
Caused by: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
```
>
> * In my case, I have 2 duplicate Java installations (OpenJDK and
> JDK-17).
> * I installed JDK-17 after configuring the environment variable for OpenJDK and before uninstalling OpenJDK.
> * So, maybe that is the problem.
>
>
>
This is how I SOLVED it **in my case:**
* First, I have completely removed openJDK and JDK-17 from my computer (including JDK-17/lib/security/cacerts).
* Then, I deleted the java environment variable and restarted the computer.
* Next, I thoroughly checked that there aren't any JDKs on the computer anymore.
* Finally, I just reinstalled JDK-17 (JDK-17/lib/security/cacerts is default). And it worked fine for me.
**Note:** kill any Java runtime tasks before uninstalling them. | 23 |
I am looking for a simple way to constantly monitor a log file, and send me an email notification every time this log file has changed (new lines have been added to it).
The system runs on a Raspberry Pi 2 (OS Raspbian/Debian Stretch) and the log monitors a GPIO Python script running as a daemon.
I need something very simple and lightweight; I don't even care to have the text of the new log entry, because I know what it says: it is always the same 24 lines of text at the end.
Also, the log.txt file gets recreated every day at midnight, so that might represent another issue.
I already have a working python script to send me a simple email via gmail (called it sendmail.py)
What I tried so far was creating and running the following bash script:
monitorlog.sh
```
#!/bin/bash
tail -F log.txt | python ./sendmail.py
```
The problem is that it just sends an email every time I execute it, but when the log actually changes, it just quits.
I am really new to linux so apologies if I missed something.
Cheers | 2018/03/01 | [
"https://Stackoverflow.com/questions/49059660",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9431262/"
] | You asked for simple:
```
#!/bin/bash
cur_line_count="$(wc -l myfile.txt)"
while true
do
    new_line_count="$(wc -l myfile.txt)"
    if [ "$cur_line_count" != "$new_line_count" ]
    then
        python ./sendmail.py
    fi
    cur_line_count="$new_line_count"
    sleep 5
done
``` | I've done this a bunch of different ways. If you run a cron job every minute that counts the number of lines (wc -l), compares that to a stored count (e.g. in /tmp/myfilecounter), and sends the emails when the numbers are different.
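The count-and-compare idea also translates directly to Python, which may be convenient here since the mail sending is already a Python script. A minimal sketch under that assumption (hypothetical helper names; `on_change` stands in for calling the poster's `sendmail.py` logic):

```python
import os
import tempfile

def line_count(path):
    # Count lines without loading the whole file into memory
    with open(path, "rb") as f:
        return sum(1 for _ in f)

def check_once(path, last_count, on_change):
    # One polling step: fire on_change when the line count moves.
    # A change in count also covers the midnight recreation of log.txt,
    # because the count drops back down afterwards.
    current = line_count(path)
    if current != last_count:
        on_change()
    return current

# A real watcher would wrap check_once in a loop:
#   last = line_count("log.txt")
#   while True:
#       time.sleep(5)
#       last = check_once("log.txt", last, send_mail)

# Tiny demonstration against a throwaway file:
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("line 1\nline 2\n")
    path = f.name
fired = []
last = check_once(path, 0, lambda: fired.append(True))
print(last, bool(fired))  # 2 True
os.unlink(path)
```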
If you have inotify, there are more direct ways to get "woken up" when the file changes, e.g. <https://serverfault.com/a/780522/97447> or <https://serverfault.com/search?q=inotifywait>.
If you don't mind adding a package to the system, incron is a very convenient way to run a script whenever a file or directory is modified, and it looks like it's supported on raspbian (internally it uses inotify). <https://www.linux.com/learn/how-use-incron-monitor-important-files-and-folders>. Looks like it's as simple as:
```
sudo apt-get install incron
sudo vi /etc/incron.allow # Add your userid to this file (or just rm /etc/incron.allow to let everyone use incron)
incrontab -e # Add the following line to your incron table
/path/to/log.txt IN_MODIFY python ./sendmail.py
```
And you'd be done! | 24 |
56,794,886 | guys! So I recently started learning about python classes and objects.
For instance, I have the following list of strings:
```
alist = ["Four", "Three", "Five", "One", "Two"]
```
Which is comparable to a class of Numbers I have:
```
class Numbers(object):
    One=1
    Two=2
    Three=3
    Four=4
    Five=5
```
How could I convert `alist` into
```
alist = [4, 3, 5, 1, 2]
```
based on the class above?
My initial thought was to create a new (empty) list and use a `for loop` that adds the corresponding object value (e.g. `Numbers.One`) to the empty list as it goes through `alist`. But I'm unsure whether that'd be the most efficient solution.
Therefore, I was wondering if there was a simpler way of completing this task using Python Classes / Inheritance.
I hope someone can help me and explain to me what way would work better and why!
Thank you!! | 2019/06/27 | [
"https://Stackoverflow.com/questions/56794886",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10713538/"
] | If you are set on using the class, one way would be to use [`__getattribute__()`](https://docs.python.org/3/reference/datamodel.html#object.__getattribute__)
```
print([Numbers().__getattribute__(a) for a in alist])
#[4, 3, 5, 1, 2]
```
But a much better (and more pythonic IMO) way would be to use a `dict`:
```
NumbersDict = dict(
    One=1,
    Two=2,
    Three=3,
    Four=4,
    Five=5
)
print([NumbersDict[a] for a in alist])
#[4, 3, 5, 1, 2]
``` | **EDIT:** I suppose that the words and numbers are just a trivial example, a dictionary is the right way to do it if that's not the case as written in the comments.
Your assumptions are correct - either create an empty list and populate it using for loop, or use list comprehension with a for loop to create a new list with the required elements.
Empty list with for loop
========================
```py
#... Numbers class defined above
alist = ["Four", "Three", "Five", "One", "Two"]
nlist = []
numbers = Numbers()
for anumber in alist:
    nlist.append(getattr(numbers, anumber))
print(nlist)
[4, 3, 5, 1, 2]
```
List comprehension with for loop
================================
```py
#... Numbers class defined above
alist = ["Four", "Three", "Five", "One", "Two"]
numbers = Numbers()
nlist = [getattr(numbers, anumber) for anumber in alist]
print(nlist)
[4, 3, 5, 1, 2]
``` | 25 |
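A small variation on the `getattr` approach from the answers above: read the class namespace directly with `vars()`, so no `Numbers()` instance is needed (a sketch, not taken from either answer):

```python
class Numbers(object):
    One, Two, Three, Four, Five = 1, 2, 3, 4, 5

alist = ["Four", "Three", "Five", "One", "Two"]
# vars(Numbers) is the class attribute dictionary, so a plain lookup works
nlist = [vars(Numbers)[name] for name in alist]
print(nlist)  # [4, 3, 5, 1, 2]
```

Like the dict answer, this is a single lookup per name, but it keeps the existing class definition untouched.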
36,108,377 | I want to count the number of times a word is being repeated in the review string
I am reading the csv file and storing it in a python dataframe using the below line
```
reviews = pd.read_csv("amazon_baby.csv")
```
The code in the below lines work when I apply it to a single review.
```
print reviews["review"][1]
a = reviews["review"][1].split("disappointed")
print a
b = len(a)
print b
```
The output for the above lines was
```
it came early and was not disappointed. i love planet wise bags and now my wipe holder. it keps my osocozy wipes moist and does not leak. highly recommend it.
['it came early and was not ', '. i love planet wise bags and now my wipe holder. it keps my osocozy wipes moist and does not leak. highly recommend it.']
2
```
When I apply the same logic to the entire dataframe using the line below, I receive an error message
```
reviews['disappointed'] = len(reviews["review"].split("disappointed"))-1
```
Error message:
```
Traceback (most recent call last):
File "C:/Users/gouta/PycharmProjects/MLCourse1/Classifier.py", line 12, in <module>
reviews['disappointed'] = len(reviews["review"].split("disappointed"))-1
File "C:\Users\gouta\Anaconda2\lib\site-packages\pandas\core\generic.py", line 2360, in __getattr__
(type(self).__name__, name))
AttributeError: 'Series' object has no attribute 'split'
``` | 2016/03/19 | [
"https://Stackoverflow.com/questions/36108377",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2861976/"
] | You're trying to split the entire review column of the data frame (which is the Series mentioned in the error message). What you want to do is apply a function to each row of the data frame, which you can do by calling [apply](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html) on the data frame:
```
f = lambda x: len(x["review"].split("disappointed")) -1
reviews["disappointed"] = reviews.apply(f, axis=1)
``` | Well, the problem is with:
```
reviews["review"]
```
The above is a Series. In your first snippet, you are doing this:
```
reviews["review"][1].split("disappointed")
```
That is, you are using an index to select a single review. You could try looping over all rows of the column and performing your desired action. For example:
```
for index, row in reviews.iterrows():
print len(row['review'].split("disappointed"))
``` | 28 |
72,329,252 | Let's say we have the following list. This list contains the response times of a REST server in a traffic run.
[1, 2, 3, 3, 4, 5, 6, 7, 9, 1]
I need the following output
Percentage of the requests served within a certain time (ms)
50% 3
60% 4
70% 5
80% 6
90% 7
100% 9
How can we get it done in python? This is apache-bench-style output. So basically, let's say at 50%, we need to find the point in the list below which 50% of the list elements are present, and so on. | 2022/05/21 | [
"https://Stackoverflow.com/questions/72329252",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4137009/"
] | You can try something like this:
```
responseTimes = [1, 2, 3, 3, 4, 5, 6, 7, 9, 1]
for time in range(3,10):
percentage = len([x for x in responseTimes if x <= time])/(len(responseTimes))
print(f'{percentage*100}%')
```
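If you want the exact apache-bench-style table from the question, you can also just index into the sorted list. A small sketch; the ceiling expression picks the last request that falls inside each percentage:

```python
responseTimes = [1, 2, 3, 3, 4, 5, 6, 7, 9, 1]
times = sorted(responseTimes)
n = len(times)
served_within = {}
for pct in (50, 60, 70, 80, 90, 100):
    idx = -(-pct * n // 100) - 1   # ceil(pct * n / 100) - 1
    served_within[pct] = times[idx]
    print(f'{pct}% {served_within[pct]}')
```

For the sample list this prints the same table as in the question (50% 3 up through 100% 9).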
>
> *"So basically lets say at 50%, we need to find point in list below which 50% of the list elements are present and so on"*
>
>
>
```
responseTimes = [1, 2, 3, 3, 4, 5, 6, 7, 9, 1]
percentage = 0
time = 0
while(percentage <= 0.5):
percentage = len([x for x in responseTimes if x <= time])/(len(responseTimes))
time+=1
    print(f'Every time under {time}(ms) occurs lower than 50% of the time')
``` | You basically need to compute the cumulative ratio of the sorted response times.
```py
from collections import Counter
values = [1, 2, 3, 3, 4, 5, 6, 7, 9, 1]
frequency = Counter(values) # {1: 2, 2: 1, 3: 2, ...}
total = 0
n = len(values)
for time in sorted(frequency):
total += frequency[time]
print(time, f'{100*total/n}%')
```
This will print all times with the corresponding ratios.
```py
1 20.0%
2 30.0%
3 50.0%
4 60.0%
5 70.0%
6 80.0%
7 90.0%
9 100.0%
``` | 33 |
50,239,640 | In python I have three one-dimensional arrays of different shapes (like the ones given below)
```
a0 = np.array([5,6,7,8,9])
a1 = np.array([1,2,3,4])
a2 = np.array([11,12])
```
I am assuming that the array `a0` corresponds to an index `i=0`, `a1` corresponds to index `i=1` and `a2` corresponds to `i=2`. With these assumptions I want to construct a new two dimensional array where the rows would correspond to indices of the arrays (`i=0,1,2`) and the columns would be entries of the arrays `a0, a1, a2`.
In the example that I have given here, I would like the two-dimensional array to look like
```
result = np.array([ [0,5], [0,6], [0,7], [0,8], [0,9], [1,1], [1,2],\
[1,3], [1,4], [2,11], [2,12] ])
```
I would very much appreciate an answer as to how I can achieve this. In the actual problem that I am working with, I am dealing with more than three one-dimensional arrays, so it would be very nice if the answer takes this into consideration. | 2018/05/08 | [
"https://Stackoverflow.com/questions/50239640",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3761166/"
] | You can use `numpy` stack functions to speed up:
```
aa = [a0, a1, a2]
np.hstack(tuple(np.vstack((np.full(ai.shape, i), ai)) for i, ai in enumerate(aa))).T
``` | One way to do this would be a simple list comprehension:
```
result = np.array([[i, arr_v] for i, arr in enumerate([a0, a1, a2])
for arr_v in arr])
>>> result
array([[ 0, 5],
[ 0, 6],
[ 0, 7],
[ 0, 8],
[ 0, 9],
[ 1, 1],
[ 1, 2],
[ 1, 3],
[ 1, 4],
[ 2, 11],
[ 2, 12]])
```
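For many or large input arrays, a fully vectorized sketch (assuming the arrays are collected in a plain list) avoids the Python-level loop entirely:

```python
import numpy as np

arrays = [np.array([5, 6, 7, 8, 9]), np.array([1, 2, 3, 4]), np.array([11, 12])]
# repeat each list index by the length of its array, then pair with the values
idx = np.repeat(np.arange(len(arrays)), [len(a) for a in arrays])
result = np.column_stack((idx, np.concatenate(arrays)))
```

`result` is the same (11, 2) array as above: each row is `[list_index, value]`.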
Addressing your concern about scaling this to more arrays, you can easily add as many arrays as you wish by simply creating a list of your array names and using that list as the argument to `enumerate`:
```
.... for i, arr in enumerate(my_list_of_arrays) ...
``` | 34 |
45,939,564 | I am accessing a Google Sheet via Python.
The Google Sheet looks like the following:
[![enter image description here](https://i.stack.imgur.com/eIW7v.png)](https://i.stack.imgur.com/eIW7v.png)
But when I access it via:
```
self.probe=[]
self.scope = ['https://spreadsheets.google.com/feeds']
self.creds = ServiceAccountCredentials.from_json_keyfile_name('client_secret.json', self.scope)
self.client = gspread.authorize(self.creds)
self.sheet = self.client.open('Beziehende').sheet1
self.probe = self.sheet.get_all_records()
print(self.probe)
```
it results in [![enter image description here](https://i.stack.imgur.com/2tHia.png)](https://i.stack.imgur.com/2tHia.png)
How can I get the results in the same order as they are written in the Google Sheet?
Thank you for your help.
**Edit** Sorry, here is some more information. My program has two functions:
1.) It can check if a name / address etc. is already in the database. If the name is in the database, it prints all the information about that person.
2.) It lets me add people's information to the database.
**The Problem**: I am loading the whole database into the list and later writing it all back. But when writing it back, the order gets messed up, as the get\_all\_records stored it in a random order. (This is the very first program I have ever written by myself, so please forgive the bad coding).
I wanted to know if there is a possibility to get the data in order, but if not, then I just have to find a way to write only the newest entry online (which is probably more efficient anyway, I guess...)
```
def create_window(self):
self.t = Toplevel(self)
self.t.geometry("250x150")
Message(self.t, text="Name", width=100, anchor=W).grid(row=1, column=1)
self.name_entry = Entry(self.t)
self.name_entry.grid(row=1, column=2)
Message(self.t, text="Adresse", width=100, anchor=W).grid(row=2, column=1)
self.adr_entry = Entry(self.t)
self.adr_entry.grid(row=2, column=2)
Message(self.t, text="Organisation", width=100, anchor=W).grid(row=3, column=1)
self.org_entry = Entry(self.t)
self.org_entry.grid(row=3, column=2)
Message(self.t, text="Datum", width=100, anchor=W).grid(row=4, column=1)
self.date_entry = Entry(self.t)
self.date_entry.grid(row=4, column=2)
self.t.button = Button(self.t, text="Speichern", command=self.verify).grid(row=5, column=2)
#name
#window = Toplevel(self.insert_window)
def verify(self):
self.ver = Toplevel(self)
self.ver.geometry("300x150")
self.ver.grid_columnconfigure(1, minsize=100)
Message(self.ver, text=self.name_entry.get(), width=100).grid(row=1, column=1)
Message(self.ver, text=self.adr_entry.get(), width=100).grid(row=2, column=1)
Message(self.ver, text=self.org_entry.get(), width=100).grid(row=3, column=1)
Message(self.ver, text=self.date_entry.get(), width=100).grid(row=4, column=1)
confirm_button=Button(self.ver, text='Bestätigen', command=self.data_insert).grid(row=4, column=1)
cancle_button=Button(self.ver, text='Abbrechen', command=self.ver.destroy).grid(row=4, column=2)
def data_insert(self):
new_dict = collections.OrderedDict()
new_dict['name'] = self.name_entry.get()
new_dict['adresse'] = self.adr_entry.get()
new_dict['organisation'] = self.org_entry.get()
new_dict['datum'] = self.date_entry.get()
print(new_dict)
self.probe.append(new_dict)
#self.sheet.update_acell('A4',new_dict['name'])
self.update_gsheet()
self.ver.destroy()
self.t.destroy()
def update_gsheet(self):
i = 2
for dic_object in self.probe:
j = 1
for category in dic_object:
self.sheet.update_cell(i,j,dic_object[category])
j += 1
i += 1
def search(self):
print(self.probe)
self.result = []
self.var = self.entry.get() #starting index better
self.search_algo()
self.outputtext.delete('1.0', END)
for dict in self.result:
print(dict['Name'], dict['Adresse'], dict['Organisation'])
self.outputtext.insert(END, dict['Name'] + '\n')
self.outputtext.insert(END, dict['Adresse']+ '\n')
self.outputtext.insert(END, dict['Organisation']+ '\n')
self.outputtext.insert(END, 'Erhalten am '+dict['Datum']+'\n'+'\n')
if not self.result:
self.outputtext.insert(END, 'Name not found')
return FALSE
return TRUE
def search_algo(self):
category = self.v.get()
print(category)
for dict_object in self.probe:
if dict_object[category] == self.var:
self.result.append(dict_object)
``` | 2017/08/29 | [
"https://Stackoverflow.com/questions/45939564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3554329/"
] | I'm not familiar with gspread, which appears to be a third-party client for the Google Sheets API, but it looks like you should be using [`get_all_values`](https://github.com/burnash/gspread#getting-all-values-from-a-worksheet-as-a-list-of-lists) rather than `get_all_records`. That will give you a list of lists, rather than a list of dicts. | Python dictionaries are unordered. There is the [OrderedDict](https://docs.python.org/3.6/library/collections.html#collections.OrderedDict) in collections, but hard to say more about what the best course of action should be without more insight into why you need this dictionary ordered... | 36 |
55,508,830 | In a virtual env with Python 3.7.2, I am trying to run django's `python manage.py startapp myapp` and I get this error:
```
raise ImproperlyConfigured('SQLite 3.8.3 or later is required (found %s).' % Database.sqlite_version)
django.core.exceptions.ImproperlyConfigured: SQLite 3.8.3 or later is required (found 3.8.2).
```
I'm running Ubuntu Trusty 14.04 Server.
How do I upgrade or update my sqlite version to >=3.8.3?
*I ran*
`$ apt list --installed | grep sqlite`
```
libaprutil1-dbd-sqlite3/trusty,now 1.5.3-1 amd64 [installed,automatic]
libdbd-sqlite3/trusty,now 0.9.0-2ubuntu2 amd64 [installed]
libsqlite3-0/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
libsqlite3-dev/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
python-pysqlite2/trusty,now 2.6.3-3 amd64 [installed]
python-pysqlite2-dbg/trusty,now 2.6.3-3 amd64 [installed]
sqlite3/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
```
*and*
`sudo apt install --only-upgrade libsqlite3-0`
```
Reading package lists... Done
Building dependency tree
Reading state information... Done
libsqlite3-0 is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 14 not upgraded.
```
EDIT:
the `settings.py` is stock standard:
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
``` | 2019/04/04 | [
"https://Stackoverflow.com/questions/55508830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6154769/"
] | I've just been through this. I had to install a separate newer version of SQLite, from
<https://www.sqlite.org/download.html>
That is in /usr/local/bin. Then I had to recompile Python, telling it to look there:
```
sudo LD_RUN_PATH=/usr/local/lib ./configure --enable-optimizations
sudo LD_RUN_PATH=/usr/local/lib make altinstall
```
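To confirm which SQLite library the interpreter actually links against (before and after the rebuild), you can script the check that Django is effectively performing; a quick sketch:

```python
import sqlite3

required = (3, 8, 3)   # the minimum the Django error message asks for
print(sqlite3.sqlite_version)            # version of the linked C library
ok = sqlite3.sqlite_version_info >= required
print(ok)
```

Note that `sqlite3.sqlite_version` describes the linked SQLite C library, which is what Django checks, not the version of the Python `sqlite3` module itself.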
To check which version of SQLite Python is using:
```
$ python
Python 3.7.3 (default, Apr 12 2019, 16:23:13)
>>> import sqlite3
>>> sqlite3.sqlite_version
'3.27.2'
``` | In addition to the above mentioned answers, just in case if you experience this behaviour on Travis CI, add `dist: xenial` directive to fix it. | 37 |
46,143,091 | I'm pretty new to python so it's a basic question.
I have data that I imported from a csv file. Each row reflects a person and his data. Two attributes are Sex and Pclass. I want to add a new column (predictions) that is fully depended on those two in one line. If both attributes' values are 1 it should assign 1 to the person's predictions data field, 0 otherwise.
How do I do it in one line (let's say with Pandas)? | 2017/09/10 | [
"https://Stackoverflow.com/questions/46143091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5252187/"
] | You could try adding a composite index
```
create index test on screenshot (DateTaken, id)
``` | Try running this query:
```
SELECT COUNT(*) as total
FROM screenshot
WHERE DateTaken BETWEEN '2000-05-01' AND '2000-06-10';
```
The reference to `ID` in the `SELECT` could be affecting the use of the index. | 45 |
71,568,396 | We are using a beam multi-language pipeline using python and java (ref <https://beam.apache.org/documentation/sdks/python-multi-language-pipelines/>). We are creating a cross-language pipeline using java. We have some external jar files that require a java library path. The code compiles properly and is able to create a jar file. When I run the jar file it creates a gRPC server, but when I use the python pipeline to call the external transform, it does not pick up the java library path; it picks the default java library path.
![jni_emdq required library path to overwrite](https://i.stack.imgur.com/N24DB.png)
Tried -Djava.library.path=<path\_to\_dll> while running jar file.
Tried System.setProperty(“java.library.path”, “/path/to/library”).
(Ref <https://examples.javacodegeeks.com/java-library-path-what-is-java-library-and-how-to-use/>)
Tried JvmInitializer of beam to overwrite system property. (Ref <https://examples.javacodegeeks.com/java-library-path-what-is-java-library-and-how-to-use/>)
Tried to pull the Beam open-source code and overwrite the system property before the expansion starts. It overwrites the property, but it still does not pick the correct java path when called using the python external transform. (ref <https://github.com/apache/beam/blob/master/sdks/java/expansion-service/src/main/java/org/apache/beam/sdk/expansion/service/ExpansionService.java>) | 2022/03/22 | [
"https://Stackoverflow.com/questions/71568396",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9648514/"
] | A Worksheet Change Event: Monitor Change in Column's Data
---------------------------------------------------------
* I personally would go with JvdV's suggestion in the comments.
* On each manual change of a cell, e.g. in column `A`, it will check the formula
`=SUM(A2:ALastRow)` in cell `A1` and if it is not correct it will overwrite it with the correct one.
* You can use this for multiple non-adjacent columns e.g. `"A,C:D,E"`.
* Nothing needs to be run. Just copy the code into the appropriate sheet module e.g. `Sheet1` and exit the Visual Basic Editor.
**Sheet Module e.g. `Sheet1` (not Standard Module e.g. `Module1`)**
```
Option Explicit
Private Sub Worksheet_Change(ByVal Target As Range)
UpdateFirstRowFormula Target, "A"
End Sub
Private Sub UpdateFirstRowFormula( _
ByVal Target As Range, _
ByVal ColumnList As String)
On Error GoTo ClearError
Dim ws As Worksheet: Set ws = Target.Worksheet
Dim Cols() As String: Cols = Split(ColumnList, ",")
Application.EnableEvents = False
Dim irg As Range, arg As Range, crg As Range, lCell As Range
Dim n As Long
Dim Formula As String
For n = 0 To UBound(Cols)
With ws.Columns(Cols(n))
With .Resize(.Rows.Count - 1).Offset(1)
Set irg = Intersect(.Cells, Target.EntireColumn)
End With
End With
If Not irg Is Nothing Then
For Each arg In irg.Areas
For Each crg In arg.Columns
Set lCell = crg.Find("*", , xlFormulas, , , xlPrevious)
If Not lCell Is Nothing Then
Formula = "=SUM(" & crg.Cells(1).Address(0, 0) & ":" _
& lCell.Address(0, 0) & ")"
With crg.Cells(1).Offset(-1)
If .Formula <> Formula Then .Formula = Formula
End With
End If
Next crg
Next arg
Set irg = Nothing
End If
Next n
SafeExit:
If Not Application.EnableEvents Then Application.EnableEvents = True
Exit Sub
ClearError:
Debug.Print "Run-time error '" & Err.Number & "': " & Err.Description
Resume SafeExit
End Sub
``` | Use a nested function as below:
=SUM(OFFSET(A2,,,COUNTA(A2:A26))) | 47 |
49,005,651 | This question is motivated by my another question: [How to await in cdef?](https://stackoverflow.com/questions/48989065/how-to-await-in-cdef)
There are tons of articles and blog posts on the web about `asyncio`, but they are all very superficial. I couldn't find any information about how `asyncio` is actually implemented, and what makes I/O asynchronous. I was trying to read the source code, but it's thousands of lines of not the highest grade C code, a lot of which deals with auxiliary objects, but most crucially, it is hard to connect between Python syntax and what C code it would translate into.
Asyncio's own documentation is even less helpful. There's no information there about how it works, only some guidelines about how to use it, which are also sometimes misleading / very poorly written.
I'm familiar with Go's implementation of coroutines, and was kind of hoping that Python did the same thing. If that was the case, the code I came up with in the post linked above would have worked. Since it didn't, I'm now trying to figure out why. My best guess so far is as follows; please correct me where I'm wrong:
1. Procedure definitions of the form `async def foo(): ...` are actually interpreted as methods of a class inheriting `coroutine`.
2. Perhaps, `async def` is actually split into multiple methods by `await` statements, where the object, on which these methods are called is able to keep track of the progress it made through the execution so far.
3. If the above is true, then, essentially, execution of a coroutine boils down to calling methods of coroutine object by some global manager (loop?).
4. The global manager is somehow (how?) aware of when I/O operations are performed by Python (only?) code and is able to choose one of the pending coroutine methods to execute after the current executing method relinquished control (hit on the `await` statement).
In other words, here's my attempt at "desugaring" of some `asyncio` syntax into something more understandable:
```
async def coro(name):
print('before', name)
await asyncio.sleep()
print('after', name)
asyncio.gather(coro('first'), coro('second'))
# translated from async def coro(name)
class Coro(coroutine):
def before(self, name):
print('before', name)
def after(self, name):
print('after', name)
def __init__(self, name):
self.name = name
self.parts = self.before, self.after
self.pos = 0
def __call__():
self.parts[self.pos](self.name)
self.pos += 1
def done(self):
return self.pos == len(self.parts)
# translated from asyncio.gather()
class AsyncIOManager:
def gather(*coros):
while not every(c.done() for c in coros):
coro = random.choice(coros)
coro()
```
Should my guess prove correct, then I have a problem. How does I/O actually happen in this scenario? In a separate thread? Is the whole interpreter suspended and I/O happens outside the interpreter? What exactly is meant by I/O? If my python procedure called the C `open()` procedure, and it in turn sent an interrupt to the kernel, relinquishing control to it, how does the Python interpreter know about this, and how is it able to continue running some other code while the kernel code does the actual I/O, until it wakes up the Python procedure which sent the interrupt originally? How can the Python interpreter, in principle, be aware of this happening? | 2018/02/27 | [
"https://Stackoverflow.com/questions/49005651",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5691066/"
] | It all boils down to the two main challenges that asyncio is addressing:
* How to perform multiple I/O in a single thread?
* How to implement cooperative multitasking?
The answer to the first point has been around for a long while and is called a [select loop](https://en.wikipedia.org/wiki/Asynchronous_I/O#Select(/poll)_loops). In python, it is implemented in the [selectors module](https://docs.python.org/3/library/selectors.html).
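To make the select loop concrete, here is a deliberately tiny sketch built on the `selectors` module; a local socket pair stands in for real network I/O:

```python
import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()          # two connected sockets, no network needed
sel.register(b, selectors.EVENT_READ)

a.sendall(b"ping")                  # makes b readable
events = sel.select(timeout=1)      # blocks until some registered fd is ready
data = None
for key, _ in events:
    data = key.fileobj.recv(4)      # the payload that made the fd readable

sel.unregister(b)
a.close()
b.close()
```

A real event loop is essentially this `select` call wrapped in a `while` loop, dispatching to whichever callback or coroutine was waiting on each ready descriptor.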
The second question is related to the concept of [coroutine](https://en.wikipedia.org/wiki/Coroutine), i.e. functions that can stop their execution and be restored later on. In python, coroutines are implemented using [generators](https://wiki.python.org/moin/Generators) and the [yield from](https://www.python.org/dev/peps/pep-0380/) statement. That's what is hiding behind the [async/await syntax](https://www.python.org/dev/peps/pep-0492/).
More resources in this [answer](https://stackoverflow.com/a/41208685/2846140).
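To see the coroutine half in isolation, here is a minimal sketch of generators chained with `yield from` and driven with `send`; there is no real I/O and all the names are made up:

```python
def inner():
    x = yield "need-data"        # suspend; the driver sends a value back in
    return x * 2

def outer():
    result = yield from inner()  # forwards inner's suspension transparently
    return result + 1

coro = outer()
request = coro.send(None)        # run up to the first yield
final = None
try:
    coro.send(10)                # resume with a value, as an event loop would
except StopIteration as stop:
    final = stop.value           # outer's return value
```

The driver sees `"need-data"` bubble up from the innermost generator, resumes the whole chain with `send(10)`, and receives the final result (21) via `StopIteration`.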
---
**EDIT:** Addressing your comment about goroutines:
The closest equivalent to a goroutine in asyncio is actually not a coroutine but a task (see the difference in the [documentation](https://docs.python.org/3/library/asyncio-task.html)). In python, a coroutine (or a generator) knows nothing about the concepts of event loop or I/O. It simply is a function that can stop its execution using `yield` while keeping its current state, so it can be restored later on. The `yield from` syntax allows for chaining them in a transparent way.
Now, within an asyncio task, the coroutine at the very bottom of the chain always ends up yielding a [future](https://docs.python.org/3.4/library/asyncio-task.html#asyncio.Future). This future then bubbles up to the event loop, and gets integrated into the inner machinery. When the future is set to done by some other inner callback, the event loop can restore the task by sending the future back into the coroutine chain.
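That bubbling can be sketched with a toy future; the `Future` class below is a hypothetical stand-in, not the real asyncio one:

```python
class Future:
    def __init__(self):
        self.result = None
        self.done = False

    def set_result(self, value):
        self.result = value
        self.done = True

def task(fut):
    yield fut                    # bubble the future up to the "event loop"
    return fut.result

fut = Future()
t = task(fut)
pending = t.send(None)           # the loop receives the future and registers it
pending.set_result("io done")    # later, an I/O callback completes it
outcome = None
try:
    t.send(None)                 # the loop resumes the suspended task
except StopIteration as stop:
    outcome = stop.value
```

The "loop" here is just two `send` calls, but the shape is the same: suspend on a future, complete it from the outside, resume.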
---
**EDIT:** Addressing some of the questions in your post:
>
> How does I/O actually happen in this scenario? In a separate thread? Is the whole interpreter suspended and I/O happens outside the interpreter?
>
>
>
No, nothing happens in a thread. I/O is always managed by the event loop, mostly through file descriptors. However, the registration of those file descriptors is usually hidden by high-level coroutines, which do the dirty work for you.
>
> What exactly is meant by I/O? If my python procedure called C open() procedure, and it in turn sent interrupt to kernel, relinquishing control to it, how does Python interpreter know about this and is able to continue running some other code, while kernel code does the actual I/O and until it wakes up the Python procedure which sent the interrupt originally? How can Python interpreter in principle, be aware of this happening?
>
>
>
An I/O is any blocking call. In asyncio, all the I/O operations should go through the event loop, because, as you said, the event loop has no way to be aware that a blocking call is being performed in some synchronous code. That means you're not supposed to use a synchronous `open` within the context of a coroutine. Instead, use a dedicated library such as [aiofiles](https://github.com/Tinche/aiofiles) which provides an asynchronous version of `open`. | Picture an airport control tower with many planes waiting to land on the same runway. The control tower can be seen as the event loop, and the runway as the thread. Each plane is a separate function waiting to execute. In reality, only one plane can land on the runway at a time. What asyncio basically does is let many planes share the same runway: the event loop suspends functions so that other functions can run. When you use the await syntax, it basically means that the plane (function) can be suspended to let other functions proceed. | 48 |
36,590,875 | How do I obtain an absolute path via a relative path for 'other' project files (not the Python files in the project), like in Java?
```
D:\Workspaces\ABCPythonProject\
|- src
| |-- com/abc
| |-- conf.py
| |-- abcd.py
| |-- defg.py
| |-- installation.rst
|- resources
| |-- a.txt
| |-- b.txt
| |-- c.jpg
```
For example, I would like to access 'a.txt' or 'b.txt' from Python code such as 'abcd.py' in a simple manner, with a relative path like 'resources/a.txt', just like in a Java project.
In short, I want to get '**D:\Workspaces\ABCPythonProject\resources\a.txt**' from '**resources\a.txt**', which is extremely easy to do in Java, but is seemingly extremely difficult to achieve in Python.
(If I use the built-in python methods like 'os.path.join(os.path.dirname(\_\_file\_\_), 'resources/a.txt')', os.path.dirname('resources/a.txt'), os.path.abspath('resources/a.txt'), ..., etc., the result is always "**D:\Workspaces\ABCPythonProject\com\abc\resources\a.txt**", a non-existent file path.)
How to achieve this? | 2016/04/13 | [
"https://Stackoverflow.com/questions/36590875",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1762932/"
] | For images you'll have to use:
```
<img src="url">
``` | It should be in following way,
```
foreach ($pdo->query($sql) as $row) {
echo '<tr>';
echo '<td>'. $row['u_id'] . '</td>';
echo '<td>'. $row['u_role'] . '</td>';
echo '<td>'. $row['u_name'] . '</td>';
echo '<td>'. $row['u_passw'] . '</td>';
echo '<td>'. $row['u_init'] . '</td>';
echo '<td>'. $row['c_name'] . '</td>';
echo '<td>'. $row['u_mail'] . '</td>';
echo '<td>'.'<img src="'. $row['u_pic'] . '" width=45 height=45></img>'.'</td>';
}
``` | 58 |
36,215,958 | I want to check a moment of the day using only hours and minutes.
For example, a function that return true if now is between the 9.15 and 11.20 of the day.
I tried with datetime but with the minutes is littlebit complicated.
```
#!/usr/bin/python
import datetime
n = datetime.datetime.now()
sta = datetime.time(19,18)
sto = datetime.time(20,19)
if sta.hour <= n.hour and n.hour <= sto.hour:
if sta.minute <= n.minute and sto.minute <= n.minute:
print str(n.hour) + ":" + str(n.minute)
```
What is the best way?
Regards | 2016/03/25 | [
"https://Stackoverflow.com/questions/36215958",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/341022/"
] | You can use tuple comparisons to do any subinterval comparisons pretty easily:
```
>>> def f(dt):
... return (9, 15) <= (dt.hour, dt.minute) < (11, 21)
...
>>> d = datetime.datetime.now()
>>> str(d)
'2016-03-25 09:50:51.782718'
>>> f(d)
True
>>> f(d + datetime.timedelta(hours=2))
False
```
This accepts any datetime that has time between 9:15:00.000000 and 11:20:59.999999.
---
The above method also works if you need to check for example 5 first minutes of each hour; but for the hours of day, it might be simpler to use `.time()` to get the time part of a datetime, then compare this to the limits. The following accepts any time between 9:15:00.000000 and 11:20:00.000000 (inclusive):
```
>>> def f(dt):
... return datetime.time(9, 15) <= dt.time() <= datetime.time(11, 20)
``` | You'll need to use the combine class method:
```
import datetime
def between():
now = datetime.datetime.now()
start = datetime.datetime.combine(now.date(), datetime.time(9, 15))
end = datetime.datetime.combine(now.date(), datetime.time(11, 20))
return start <= now < end
``` | 61 |
5,965,655 | I'm trying to build a web interface for some python scripts. The thing is I have to use PHP (and not CGI) and some of the scripts I execute take quite some time to finish: 5-10 minutes. Is it possible for PHP to communicate with the scripts and display some sort of progress status? This should allow the user to use the webpage as the task runs and display some status in the meantime or just a message when it's done.
Currently using exec() and on completion I process the output. The server is running on a Windows machine, so pcntl\_fork will not work.
**LATER EDIT**:
Using another php script to feed the main page information using ajax doesn't seem to work because the server kills it (it reaches max execution time, and I don't really want to increase this unless necessary)
I was thinking about socket-based communication, but I don't see how this is useful in my case (some hints, maybe?)
Thank you | 2011/05/11 | [
"https://Stackoverflow.com/questions/5965655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/748676/"
] | You want *inter-process communication*. Sockets are the first thing that comes to mind; you'd need to set up a socket to *listen* for a connection (on the same machine) in PHP and set up a socket to *connect* to the listening socket in Python and *send* it its status.
Have a look at [this socket programming overview](http://docs.python.org/howto/sockets.html) from the Python documentation and [the Python `socket` module's documentation (especially the examples at the end)](http://docs.python.org/library/socket.html). I'm sure PHP has similar resources.
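As a rough sketch of the idea in Python (both ends simulated in one process here; in your setup the PHP side would own the listening socket, and the host/port are placeholders):

```python
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))      # port 0 asks the OS for a free port
listener.listen(1)
host, port = listener.getsockname()

# the long-running script connects and pushes a progress message
sender = socket.create_connection((host, port))
sender.sendall(b"progress: 40%")
sender.close()

conn, _ = listener.accept()
status = conn.recv(1024).decode()
conn.close()
listener.close()
```

The web page would then poll the listening side (or have it push updates) to show the current status to the user.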
Once you've got a more specific idea of what you want to build and need help, feel free to ask a *new* question on StackOverflow (if it isn't already answered). | I think you would have to use a meta refresh and maybe have the Python script write the status to a file and then have the PHP read from it.
You could use AJAX as well to make it more dynamic.
Also, probably shouldn't use exec()...that opens up a world of vulnerabilities. | 62 |
31,480,921 | I can't seem to get the interactive tooltips powered by mpld3 to work with the fantastic lmplot-like scatter plots from seaborn.
I'd love any pointer on how to get this to work! Thanks!
Example Code:
```
# I'm running this in an ipython notebook.
%matplotlib inline
import matplotlib.pyplot as plt, mpld3
mpld3.enable_notebook()
import seaborn as sns
import numpy as np
import pandas as pd
N=10
data = pd.DataFrame({"x": np.random.randn(N),
"y": np.random.randn(N),
"size": np.random.randint(20,200, size=N),
"label": np.arange(N)
})
scatter_sns = sns.lmplot("x", "y",
scatter_kws={"s": data["size"]},
robust=False, # slow if true
data=data, size=8)
fig = plt.gcf()
tooltip = mpld3.plugins.PointLabelTooltip(fig, labels=list(data.label))
mpld3.plugins.connect(fig, tooltip)
mpld3.display(fig)
```
I'm getting the seaborn plot along with the following error:
```
Javascript error adding output!
TypeError: obj.elements is not a function
See your browser Javascript console for more details.
```
The console shows:
```
TypeError: obj.elements is not a function
at mpld3_TooltipPlugin.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1161:9)
at mpld3_Figure.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1400:23)
at Object.mpld3.draw_figure (https://mpld3.github.io/js/mpld3.v0.2.js:18:9)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:14:14)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:15:5)
at eval (native)
at Function.x.extend.globalEval (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231)
at x.fn.extend.domManip (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:21253)
at x.fn.extend.append (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:18822)
at OutputArea._safe_append (https://mbcomp1:9999/static/notebook/js/outputarea.js:336:26)
outputarea.js:319 Javascript error adding output! TypeError: obj.elements is not a function
at mpld3_TooltipPlugin.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1161:9)
at mpld3_Figure.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1400:23)
at Object.mpld3.draw_figure (https://mpld3.github.io/js/mpld3.v0.2.js:18:9)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:14:14)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:15:5)
at eval (native)
at Function.x.extend.globalEval (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231)
at x.fn.extend.domManip (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:21253)
at x.fn.extend.append (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:18822)
at OutputArea._safe_append (https://mbcomp1:9999/static/notebook/js/outputarea.js:336:26)
outputarea.js:338 TypeError: obj.elements is not a function
at mpld3_TooltipPlugin.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1161:9)
at mpld3_Figure.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1400:23)
at Object.mpld3.draw_figure (https://mpld3.github.io/js/mpld3.v0.2.js:18:9)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:14:14)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:15:5)
at eval (native)
at Function.x.extend.globalEval (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231)
at x.fn.extend.domManip (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:21253)
at x.fn.extend.append (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:18822)
at OutputArea._safe_append (https://mbcomp1:9999/static/notebook/js/outputarea.js:336:26)
outputarea.js:319 Javascript error adding output! TypeError: obj.elements is not a function
at mpld3_TooltipPlugin.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1161:9)
at mpld3_Figure.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1400:23)
at Object.mpld3.draw_figure (https://mpld3.github.io/js/mpld3.v0.2.js:18:9)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:14:14)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:15:5)
at eval (native)
at Function.x.extend.globalEval (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231)
at x.fn.extend.domManip (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:21253)
at x.fn.extend.append (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:18822)
at OutputArea._safe_append (https://mbcomp1:9999/static/notebook/js/outputarea.js:336:26)
``` | 2015/07/17 | [
"https://Stackoverflow.com/questions/31480921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1270151/"
] | I don't think that there is an easy way to do this currently. I can get some of the tooltips to show by replacing your `tooltip` constructor with the following:
```
ax = plt.gca()
pts = ax.get_children()[3]
tooltip = mpld3.plugins.PointLabelTooltip(pts, labels=list(data.label))
```
This only works for the points outside of the uncertainty interval, though. I think it would be possible to extend `seaborn` to make these points highest in the `zorder` and store them in the instance somewhere so that you don't need to pull them out of the axis children list. Perhaps worth a feature request. | Your code works for me on `ipython` (no notebook) when saving the figure to file with `mpld3.save_html(fig,"./out.html")`. May be an issue with `ipython` `notebook`/`mpld3` compatibility or `mpld3.display` (which causes an error for me, although I think this is related to an old version of matplotlib on my computer).
The full code that worked for me is,
```
import numpy as np
import matplotlib.pyplot as plt, mpld3
import seaborn as sns
import pandas as pd
N=10
data = pd.DataFrame({"x": np.random.randn(N),
"y": np.random.randn(N),
"size": np.random.randint(20,200, size=N),
"label": np.arange(N)
})
scatter_sns = sns.lmplot("x", "y",
scatter_kws={"s": data["size"]},
robust=False, # slow if true
data=data, size=8)
fig = plt.gcf()
tooltip = mpld3.plugins.PointLabelTooltip(fig, labels=list(data.label))
mpld3.plugins.connect(fig, tooltip)
mpld3.save_html(fig,"./out.html")
``` | 68 |
28,180,252 | I am trying to create a quiver plot from a NetCDF file in Python using this code:
```
import matplotlib.pyplot as plt
import numpy as np
import netCDF4 as Dataset
ncfile = netCDF4.Dataset('30JUNE2012_0300UTC.cdf', 'r')
dbZ = ncfile.variables['MAXDBZF']
data = dbZ[0,0]
U = ncfile.variables['UNEW'][:]
V = ncfile.variables['VNEW'][:]
x, y= np.arange(0,2*np.pi,.2), np.arange(0,2*np.pi,.2)
X,Y = np.meshgrid(x,y)
plt.quiver(X,Y,U,V)
plt.show()
```
and I am getting the following errors
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-109-b449c540a7ea> in <module>()
11 X,Y = np.meshgrid(x,y)
12
---> 13 plt.quiver(X,Y,U,V)
14
15 plt.show()
/Users/felishalawrence/anaconda/lib/python2.7/site-packages/matplotlib/pyplot.pyc in quiver(*args, **kw)
3152 ax.hold(hold)
3153 try:
-> 3154 ret = ax.quiver(*args, **kw)
3155 draw_if_interactive()
3156 finally:
/Users/felishalawrence/anaconda/lib/python2.7/site-packages/matplotlib/axes/_axes.pyc in quiver(self, *args, **kw)
4162 if not self._hold:
4163 self.cla()
-> 4164 q = mquiver.Quiver(self, *args, **kw)
4165
4166 self.add_collection(q, autolim=True)
/Users/felishalawrence/anaconda/lib/python2.7/site-packages/matplotlib/quiver.pyc in __init__(self, ax, *args, **kw)
415 """
416 self.ax = ax
--> 417 X, Y, U, V, C = _parse_args(*args)
418 self.X = X
419 self.Y = Y
/Users/felishalawrence/anaconda/lib/python2.7/site-packages/matplotlib/quiver.pyc in _parse_args(*args)
377 nr, nc = 1, U.shape[0]
378 else:
--> 379 nr, nc = U.shape
380 if len(args) == 2: # remaining after removing U,V,C
381 X, Y = [np.array(a).ravel() for a in args]
ValueError: too many values to unpack
```
What does this error mean? | 2015/01/27 | [
"https://Stackoverflow.com/questions/28180252",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4500459/"
] | `ValueError: too many values to unpack` is because the line `379` of your program is trying to assign two variables (`nr`, `nc`) from `U.shape` when there are not enough variables to assign these values to.
Look above on line `377` - that is correctly assigning two values (`1` and `U.shape[0]`) to `nr` and `nc`, but line `379` has only a `U.shape` object to assign to two variables. If there are more than 2 values in `U.shape` you will get this error. `U.shape` is a tuple, so the unpacking works as-is only when it holds exactly as many values as there are variables (in this case two). I would print out the value of `U.shape` and check that it holds the expected values and quantity of values. If `U.shape` can return three or more values then your code will need to adapt to this. For example, if you find that `U.shape` is a tuple of 3 values then you will need 3 variables to hold those values, like so:
`nr, nc, blah = U.shape`
Consider the following:
```
a,b,c = ["a","b","c"] #works
print a
print b
print c
a, b = ["a","b","c"] #will result in error because 3 values are trying to be assigned to only 2 variables
```
The results from the above code:
```
a
b
c
Traceback (most recent call last):
File "None", line 7, in <module>
ValueError: too many values to unpack
```
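Applying the same idea to the asker's arrays (a sketch; NetCDF wind variables often carry a leading time dimension, so the variable names and shape here are assumptions):

```python
import numpy as np

# stand-in for ncfile.variables['UNEW'][:], which may be shaped (time, y, x)
U = np.zeros((1, 50, 60))
print(U.shape)            # three values -> "too many values to unpack"

if U.ndim == 3:
    U = U[0]              # take the first (and here only) time slice
nr, nc = U.shape          # now unpacks cleanly
print((nr, nc))           # (50, 60)
```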
So you see it's just a matter of having enough values to assign to all of the variables that are requesting a value. | Probably more useful for solving future problems than for the author's, but still:
The problem was likely that the netcdf file had a time dimension, therefore U and V were 3-dimensional arrays - you should choose a time slice or aggregate the data across the time dimension. | 71 |
36,486,120 | I'm trying to centre and normalise a data set in python with the following code
```
mean = np.mean(train, axis=0)
std = np.std(train, axis=0)
norm_train = (train - mean) / std
```
The problem is that I get a devision by zero error. Two of the values in the data set end up having a zero std. The data set if of shape (3750, 55). My stats skills are not so strong so I'm not sure how to overcome this. Any suggestions? | 2016/04/07 | [
"https://Stackoverflow.com/questions/36486120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/531296/"
Since the [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) is calculated by taking the sum of the *squared* deviations from the mean, a zero standard deviation can only be possible when all the values of a variable are the same (all equal to the mean). In this case, those variables have no discriminative power so they can be removed from the analysis. They cannot improve any classification, clustering or regression task. Many implementations will do it for you or throw an error about a matrix calculation. | You could just replace the 0 std with 1 for that feature. This would basically mean that the scaled value would be zero for all the data points for that feature. This makes sense as this implies that the feature values do not deviate even a bit from the mean (as the value is constant, the constant is the mean.)
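A minimal sketch of that replacement (the array here is made up; the second column is deliberately constant):

```python
import numpy as np

train = np.array([[1.0, 2.0, 5.0],
                  [3.0, 2.0, 7.0],
                  [5.0, 2.0, 6.0]])

mean = np.mean(train, axis=0)
std = np.std(train, axis=0)
std[std == 0] = 1.0   # constant features: avoid division by zero

norm_train = (train - mean) / std
print(norm_train[:, 1])   # the constant column scales to [0. 0. 0.]
```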
FYI- This is what sklearn does!
<https://github.com/scikit-learn/scikit-learn/blob/7389dbac82d362f296dc2746f10e43ffa1615660/sklearn/preprocessing/data.py#L70> | 72 |
3,950,368 | >
> **Possible Duplicate:**
>
> [What do I use for a max-heap implementation in Python?](https://stackoverflow.com/questions/2501457/what-do-i-use-for-a-max-heap-implementation-in-python)
>
>
>
Python has a min heap implemented in the heapq module. However, if one would want a max heap, would one have to build from scratch? | 2010/10/16 | [
"https://Stackoverflow.com/questions/3950368",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/475790/"
] | You could multiply your numbers by -1 and use the min heap. | No need to implement a max heap from scratch. You can easily employ a bit of math to turn your min heap into a max heap!
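A minimal sketch of that negation trick with the standard `heapq` module:

```python
import heapq

nums = [3, 1, 4, 1, 5, 9, 2]
heap = [-n for n in nums]   # negate so the min-heap's smallest is our largest
heapq.heapify(heap)

largest = -heapq.heappop(heap)
print(largest)   # 9
```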
See [this](http://www.mail-archive.com/python-list@python.org/msg238926.html) and [this](http://code.activestate.com/recipes/502295/) - but really [this SO answer](https://stackoverflow.com/questions/2501457/what-do-i-use-for-a-max-heap-implementation-in-python). | 77 |
55,522,649 | I have installed numpy but when I import it, it doesn't work.
```
from numpy import *
arr=array([1,2,3,4])
print(arr)
```
Result:
```
C:\Users\YUVRAJ\PycharmProjects\mycode2\venv\Scripts\python.exe C:/Users/YUVRAJ/PycharmProjects/mycode2/numpy.py
Traceback (most recent call last):
File "C:/Users/YUVRAJ/PycharmProjects/mycode2/numpy.py", line 1, in <module>
from numpy import *
File "C:\Users\YUVRAJ\PycharmProjects\mycode2\numpy.py", line 2, in <module>
x=array([1,2,3,4])
NameError: name 'array' is not defined
Process finished with exit code 1
``` | 2019/04/04 | [
"https://Stackoverflow.com/questions/55522649",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11313285/"
] | The problem is that you named your script `numpy.py`, which conflicts with the numpy module that you need to use. Just rename your script to something else and it will be fine. | Instead of using `from numpy import *`
Try using this:
```
import numpy
from numpy import array
```
And then add your code:
```
arr=array([1,2,3,4])
print(arr)
```
---
**EDIT:** Even though this is the accepted answer, this may not work under all circumstances. If this doesn't work, see [adrtam's answer](https://stackoverflow.com/a/55522733/5721784). | 78 |
24,703,432 | I am attempting to catch messages by topic by using the message\_callback\_add() function in [this library](https://pypi.python.org/pypi/paho-mqtt#usage-and-api). Below is my entire module that I am using to deal with my mqtt subscribe and publishing needs. I have been able to test that the publish works, but I can't seem to catch any incoming messages. There are no warnings/errors of any kind and the `print("position")` statements are working for 1 and 2 only.
```
import sys
import os
import time
import Things
import paho.mqtt.client as paho
global mqttclient;
global broker;
global port;
broker = "10.64.16.199";
port = 1883;
mypid = os.getpid()
client_uniq = "pubclient_"+str(mypid)
mqttclient = paho.Client(client_uniq, False) #nocleanstart
mqttclient.connect(broker, port, 60)
mqttclient.subscribe("Commands/#")
def Pump_callback(client, userdata, message):
#print("Received message '" + str(message.payload) + "' on topic '"
# + message.topic + "' with QoS " + str(message.qos))
print("position 3")
Things.set_waterPumpSpeed(int(message.payload))
def Valve_callback(client, userdata, message):
#print("Received message '" + str(message.payload) + "' on topic '"
# + message.topic + "' with QoS " + str(message.qos))
print("position 4")
Things.set_valvePosition(int(message.payload))
mqttclient.message_callback_add("Commands/PumpSpeed", Pump_callback)
mqttclient.message_callback_add("Commands/ValvePosition", Valve_callback)
print("position 1")
mqttclient.loop_start()
print("position 2")
def pub(topic, value):
mqttclient.publish(topic, value, 0, True)
``` | 2014/07/11 | [
"https://Stackoverflow.com/questions/24703432",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2851048/"
] | I called `loop_start` in the wrong place.
I moved the call to right after the connect statement and it now works.
Here is the snippet:
```
client_uniq = "pubclient_"+str(mypid)
mqttclient = paho.Client(client_uniq, False) #nocleanstart
mqttclient.connect(broker, port, 60)
mqttclient.loop_start()
mqttclient.subscribe("FM_WaterPump/Commands/#")
```
The documentation on loop\_start alludes to calling `loop_start()` before or after connect, though it should say immediately before or after, to be clear.
Snippet of the documentation:
>
> These functions implement a threaded interface to the network loop. Calling loop\_start() once, before or after connect\*(), runs a thread in the background to call loop() automatically. This frees up the main thread for other work that may be blocking. This call also handles reconnecting to the broker. Call loop\_stop() to stop the background thread.
>
>
] | `loop_start()` will return immediately, so your program will quit before it gets a chance to do anything.
You've also called `subscribe()` before `message_callback_add()` which doesn't make sense, although in this specific example it probably doesn't matter. | 79 |
23,190,348 | Has the alsaaudio library been ported to python3? i have this working on python 2.7 but not on python 3.
is there another library for python 3 if the above cannot be used? | 2014/04/21 | [
"https://Stackoverflow.com/questions/23190348",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/612242/"
] | I have compiled alsaaudio for python3 manually.
You can install it by following the steps given below.
1. Make sure that **gcc, python3-dev, libasound2-dev** packages are installed in your machine (install them using synaptic if you are using Ubuntu).
2. Download and extract the following package
<http://sourceforge.net/projects/pyalsaaudio/files/pyalsaaudio-0.7.tar.gz/download>
3. Go to the extracted folder and execute the following commands (Execute the commands as root or use sudo)
```
python3 setup.py build
python3 setup.py install
```
HTH.. | It's now called pyalsaaudio.
For me pip install pyalsaaudio worked. | 80 |
66,929,254 | Is there a library for interpreting python code within a python program?
Sample usage might look like this..
```
code = """
def hello():
return 'hello'
hello()
"""
output = Interpreter.run(code)
print(output)
```
which then outputs
`hello` | 2021/04/03 | [
"https://Stackoverflow.com/questions/66929254",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12594122/"
] | found this example from grepper
```
the_code = '''
a = 1
b = 2
return_me = a + b
'''
loc = {}
exec(the_code, globals(), loc)
return_workaround = loc['return_me']
print(return_workaround)
```
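A sketch of wrapping that pattern into a helper like the `Interpreter.run` from the question (assumption: the executed snippet assigns its result to a known name, here `output`, instead of using `return`):

```python
def run(code):
    loc = {}
    exec(code, globals(), loc)
    return loc.get("output")   # None if the snippet never set it

result = run("""
def hello():
    return 'hello'
output = hello()
""")
print(result)   # hello
```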
apparently you can pass global and local scope into `exec`. In your use case, you would just use a named variable instead of returning. | You can use the `exec` function. You can't get the return value from the code variable. Instead you can print it there itself.
```
code = """
def hello():
print('hello')
hello()
"""
exec(code)
``` | 81 |
65,697,374 | So I am a beginner at python, and I was trying to install packages using pip. But any time I try to install I keep getting the error:
>
> ERROR: Could not install packages due to an EnvironmentError: [WinError 2] The system cannot find the file specified: 'c:\python38\Scripts\sqlformat.exe' -> 'c:\python38\Scripts\sqlformat.exe.deleteme'
>
>
>
How do I fix this? | 2021/01/13 | [
"https://Stackoverflow.com/questions/65697374",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14996295/"
] | Try running the command line as administrator. The issue looks like it's about permissions. To run as administrator, type cmd in the search bar and right-click the Command Prompt icon. There you will find a "Run as administrator" option. Click it and then try to install the package again. | Looks like a permissions error. You might try starting the installation with admin rights or install the package only for your current user with:
```
pip install --user package
``` | 82 |
59,662,028 | I am trying to retrieve app related information from Google Play store using selenium and BeautifulSoup. When I try to retrieve the information, I got webdriver exception error. I checked the chrome version and chrome driver version (both are compatible). Here is the weblink that is causing the issue, code to retrieve information, and error thrown by the code:
Link: <https://play.google.com/store/apps/details?id=com.tudasoft.android.BeMakeup&hl=en&showAllReviews=true>
Code:
```
driver = webdriver.Chrome('path')
driver.get('https://play.google.com/store/apps/details?id=com.tudasoft.android.BeMakeup&hl=en&showAllReviews=true')
soup = bs.BeautifulSoup(driver.page_source, 'html.parser')
```
I am getting error on third line. Here is the parts of the error message:
Start of the error message:
```
---------------------------------------------------------------------------
WebDriverException Traceback (most recent call last)
<ipython-input-280-4e8a1ef443f2> in <module>()
----> 1 soup = bs.BeautifulSoup(driver.page_source, 'html.parser')
~/anaconda3/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py in page_source(self)
676 driver.page_source
677 """
--> 678 return self.execute(Command.GET_PAGE_SOURCE)['value']
679
680 def close(self):
~/anaconda3/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py in execute(self, driver_command, params)
318 response = self.command_executor.execute(driver_command, params)
319 if response:
--> 320 self.error_handler.check_response(response)
321 response['value'] = self._unwrap_value(
322 response.get('value', None))
~/anaconda3/lib/python3.6/site-packages/selenium/webdriver/remote/errorhandler.py in check_response(self, response)
240 alert_text = value['alert'].get('text')
241 raise exception_class(message, screen, stacktrace, alert_text)
--> 242 raise exception_class(message, screen, stacktrace)
243
244 def _value_or_default(self, obj, key, default):
WebDriverException: Message: unknown error: bad inspector message:
```
End of the error message:
```
(Session info: chrome=79.0.3945.117)
```
Could anyone guide me how to fix the issue? | 2020/01/09 | [
"https://Stackoverflow.com/questions/59662028",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2293224/"
] | I think this is due to the chromedriver encoding problem.
See <https://bugs.chromium.org/p/chromium/issues/detail?id=723592#c9> for additional information about this bug.
Instead of selenium you can get page source using BeautifulSoup as follows.
```
import requests
from bs4 import BeautifulSoup
r = requests.get('https://play.google.com/store/apps/details?id=com.tudasoft.android.BeMakeup&hl=en&showAllReviews=true')
soup = BeautifulSoup(r.content, "lxml")
print(soup)
``` | try this
```
driver = webdriver.Chrome('path')
driver.get('https://play.google.com/store/apps/details?id=com.tudasoft.android.BeMakeup&hl=en&showAllReviews=true')
# retrieve data you want, for example
review_user_list = driver.find_elements_by_class_name("X43Kjb")
``` | 84 |
36,781,198 | I'm sending an integer from python using pySerial.
```
import serial
ser = serial.Serial('/dev/cu.usbmodem1421', 9600);
ser.write(b'5');
```
When i compile,the receiver LED on arduino blinks.However I want to cross check if the integer is received by arduino. I cannot use Serial.println() because the port is busy. I cannot run serial monitor first on arduino and then run the python script because the port is busy. How can i achieve this? | 2016/04/21 | [
"https://Stackoverflow.com/questions/36781198",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6237876/"
] | A simple way to do it using the standard library :
```
import java.util.Scanner;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.ThreadPoolExecutor;
import static java.util.concurrent.TimeUnit.MILLISECONDS;
public class Example {
private static final int POOL_SIZE = 5;
private static final ExecutorService WORKERS = new ThreadPoolExecutor(POOL_SIZE, POOL_SIZE, 1, MILLISECONDS, new LinkedBlockingDeque<>());
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
while (true) {
System.out.print("> ");
String cmd = sc.nextLine();
switch (cmd) {
case "process":
WORKERS.submit(newExpensiveTask());
break;
case "kill":
System.exit(0);
default:
System.err.println("Unrecognized command: " + cmd);
}
}
}
private static Runnable newExpensiveTask() {
return () -> {
try {
Thread.sleep(10000);
System.out.println("Done processing");
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
};
}
}
```
This code lets you run heavy tasks asynchronously while the user terminal remains available and reactive. | I would recommend reading up on specific tutorials, such as the Java Language Tutorial (available as a book - at least, it used to be - as well as on the Java website <https://docs.oracle.com/javase/tutorial/essential/concurrency/>)
However as others have cautioned, getting into threading is a challenge and requires good knowledge of the language quite apart from the aspects of multithreading and synchronization. I'd be tempted to recommend you read some of the other tutorials - working through IO and so on - first of all. | 87 |
34,685,486 | After installing my python project with `setup.py` and executing it in terminal I get the following error:
```
...
from ui.mainwindow import MainWindow
File "/usr/local/lib/python2.7/dist-packages/EpiPy-0.1-py2.7.egg/epipy/ui/mainwindow.py", line 9, in <module>
from model.sir import SIR
ImportError: No module named model.sir
```
...
We assume we have the following structure of our project `cookies`:
```
.
├── setup.py
└── src
├── a
│ ├── aa.py
│ └── __init__.py
├── b
│ ├── bb.py
│ └── __init__.py
├── __init__.py
└── main.py
```
File: `cookies/src/main.py`
```
from a import aa
def main():
print aa.get_aa()
```
File `cookies/src/a/aa.py`
```
from b import bb
def get_aa():
return bb.get_bb()
```
File: `cookies/src/b/bb.py`
```
def get_bb():
return 'bb'
```
File: `cookies/setup.py`
```
#!/usr/bin/env python
import os
import sys
try:
from setuptools import setup, find_packages
except ImportError:
raise ImportError("Install setup tools")
setup(
name = "cookies",
version = "0.1",
author = "sam",
description = ("test"),
license = "MIT",
keywords = "test",
url = "asd@ads.asd",
packages=find_packages(),
classifiers=[
"""\
Development Status :: 3 - Alpha
Operating System :: Unix
"""
],
entry_points = {'console_scripts': ['cookies = src.main:main',],},
)
```
If I install `cookies` as `root` with `$ python setup.py install` and execute `cookies` I get the following error: `ImportError: No module named b`. How can I solve the problem. | 2016/01/08 | [
"https://Stackoverflow.com/questions/34685486",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2609713/"
] | What I would do is to use absolute imports everywhere (from epipy import ...). That's what is recommended in [PEP 328](https://docs.python.org/2.5/whatsnew/pep-328.html).
Your imports won't work anymore if the project is not installed. You can add the project directory to your PYTHONPATH, install the package, or, what I do when I'm in the middle of developing packages, [install with the 'editable' option](https://pip.pypa.io/en/stable/reference/pip_install/#editable-installs) : `pip install -e`
In editable mode, instead of installing the package code in your python distribution, a pointer to your project is created. That way it is importable, but the package uses the live code in development.
Example:
I am developing a package in /home/jbchouinard/mypackage. Inside my code, I use absolute imports, e.g. `from mypackage import subpackage`.
If I install with `pip install`, the package will be installed in my distribution, let's say in /usr/lib/python2.7/dist-packages. If I make further changes to the package, I have to upgrade or uninstall/reinstall the package. This can get tedious quickly.
If I install with `pip install -e`, a pointer (a .pth file) is created in /usr/lib/python2.7/dist-packages towards /home/jbchouinard/mypackage. I can `import mypackage` as if it was installed normally, but the code used is the code at /home/jbchouinard/mypackage; any change is reflected immediately. | I had a similar issue with one of my projects.
I've been able to solve my issue by adding this line at the start of my module (before all imports besides sys & os, which are required for this insert), so that the parent folder is added to the module search path (it turns out Python doesn't include it by default):
```
import sys
import os
sys.path.insert(1, os.path.join(sys.path[0], '..'))
# all other imports go here...
```
This way, your main.py will include the parent folder (epipy).
Give that a try, hope this helps :-) | 88 |
42,968,543 | I have a file displayed as follows. I want to delete the lines start from `>rev_` until the next line with `>`, not delete the `>` line. I want a python code to realize it.
input file:
```
>name1
fgrsagrhshsjtdkj
jfsdljgagdahdrah
gsag
>rev_name1 # delete from here
jfdsfjdlsgrgagrehdsah
fsagasfd # until here
>name2
jfosajgreajljioesfg
fjsdsagjljljlj
>rev_name2 # delete from here
jflsajgljkop
ljljasffdsa # until here
>name3
.......
```
output file:
```
>name1
fgrsagrhshsjtdkj
jfsdljgagdahdrah
gsag
>name2
jfosajgreajljioesfg
fjsdsagjljljlj
>name3
.......
```
My code is as follows, but it can not work.
```
mark = {}
with open("human.fasta") as inf, open("human_norev.fasta",'w') as outf:
for line in inf:
if line[0:5] == '>rev_':
mark[line] = 1
elif line[0] == '>':
mark[line] = 0
if mark[line] == 0:
outf.write(line)
``` | 2017/03/23 | [
"https://Stackoverflow.com/questions/42968543",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4672728/"
] | I'd recommend at least trying to come up with a solution on your own before asking us on here. Ask yourself questions regarding what different ways I can work towards a solution, will parsing character by character/line by line/regex be sufficient for this problem.
But in this case since determining when to start and stop removing lines was always at the start of the line it made sense to just go line by line and check the starting few characters.
```
i = """>name1
fgrsagrhshsjtdkj
jfsdljgagdahdrah
gsag
>rev_name1 # delete from here
jfdsfjdlsgrgagrehdsah
fsagasfd # until here
>name2
jfosajgreajljioesfg
fjsdsagjljljlj
>rev_name2 # delete from here"""
final_string = ""
keep_line = True
for line in i.split('\n'):
if line[0:5] == ">rev_":
keep_line = False
elif line[0] == '>':
keep_line = True
if keep_line:
final_string += line + '\n'
print(final_string)
```
If you wanted the lines to just go directly to the console you could remove the print at the end and replace `final_string += line + '\n'` with a `print(line)`. | The code can also be as follows:
```
with open("human.fasta") as inf, open("human_norev.fasta",'w') as outf:
del_start = False
for line in inf:
if line.startswith('>rev_'):
del_start = True
elif line.startswith('>'):
del_start = False
if not del_start:
outf.write(line)
``` | 89 |
49,396,554 | Okay, so I have the following issue. I have a Mac, so the the default Python 2.7 is installed for the OS's use. However, I also have Python 3.6 installed, and I want to install a package using Pip that is only compatible with python version 3. How can I install a package with Python 3 and not 2? | 2018/03/21 | [
"https://Stackoverflow.com/questions/49396554",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9525828/"
] | To download use
```
pip3 install package
```
and to run the file
```
python3 file.py
``` | Why do you ask such a thing here?
<https://docs.python.org/3/using/mac.html>
>
> 4.3. Installing Additional Python Packages
> There are several methods to install additional Python packages:
>
>
> Packages can be installed via the standard Python distutils mode (python setup.py install).
> Many packages can also be installed via the setuptools extension or pip wrapper, see <https://pip.pypa.io/>.
>
>
>
<https://pip.pypa.io/en/stable/user_guide/#installing-packages>
>
> Installing Packages
> pip supports installing from PyPI, version control, local projects, and directly from distribution files.
>
>
> The most common scenario is to install from PyPI using Requirement Specifiers
>
>
> `$ pip install SomePackage` # latest version
> `$ pip install SomePackage==1.0.4` # specific version
> `$ pip install 'SomePackage>=1.0.4'` # minimum version
> For more information and examples, see the pip install reference.
>
>
> | 91 |
57,754,497 | So I think tensorflow.keras and the independant keras packages are in conflict and I can't load my model, which I have made with transfer learning.
Import in the CNN ipynb:
```
!pip install tensorflow-gpu==2.0.0b1
import tensorflow as tf
from tensorflow import keras
print(tf.__version__)
```
Loading this pretrained model
```
base_model = keras.applications.xception.Xception(weights="imagenet",
include_top=False)
avg = keras.layers.GlobalAveragePooling2D()(base_model.output)
output = keras.layers.Dense(n_classes, activation="softmax")(avg)
model = keras.models.Model(inputs=base_model.input, outputs=output)
```
Saving with:
```
model.save('Leavesnet Model 2.h5')
```
Then in the new ipynb for the already trained model (the imports are the same as in the CNN ipynb:
```
from keras.models import load_model
model =load_model('Leavesnet Model.h5')
```
I get the error:
```
AttributeError Traceback (most recent call last)
<ipython-input-4-77ca5a1f5f24> in <module>()
2 from keras.models import load_model
3
----> 4 model =load_model('Leavesnet Model.h5')
13 frames
/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py in placeholder(shape, ndim, dtype, sparse, name)
539 x = tf.sparse_placeholder(dtype, shape=shape, name=name)
540 else:
--> 541 x = tf.placeholder(dtype, shape=shape, name=name)
542 x._keras_shape = shape
543 x._uses_learning_phase = False
AttributeError: module 'tensorflow' has no attribute 'placeholder'
```
I think there might be a conflict between tf.keras and the independant keras, can someone help me out? | 2019/09/02 | [
"https://Stackoverflow.com/questions/57754497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10780811/"
] | Yes, there is a conflict between `tf.keras` and `keras` packages, you trained the model using `tf.keras` but then you are loading it with the `keras` package. That is not supported, you should use only one version of this package.
The specific problem is that you are using TensorFlow 2.0, but the standalone `keras` package does not support TensorFlow 2.0 yet. | Try to replace
```
from keras.models import load_model
model =load_model('Leavesnet Model.h5')
```
with
`model = tf.keras.models.load_model(model_path)`
It works for me, and I am using:
tensorflow version: 2.0.0
keras version: 2.3.1
You can check the following:
<https://www.tensorflow.org/api_docs/python/tf/keras/models/load_model?version=stable> | 93 |
66,196,791 | So take a triangle formatted as a nested list.
e.g.
```
t = [[5],[3, 6],[8, 14, 7],[4, 9, 2, 0],[9, 11, 5, 2, 9],[1, 3, 8, 5, 3, 2]]
```
and define a path to be the sum of elements from each row of the triangle,
moving 1 to the left or right as you go down rows. Or in python
the second index either stays the same or we add 1 to it.
```
a_path = [t[0][0],[t[1][1]],t[2][1],t[3][1],t[4][2],t[5][3]] = [5, 6, 14, 9, 5,5] is valid
not_a_path = [t[0][0],[t[1][0]],t[2][2],t[3][1],t[4][0],t[5][4]] = [5, 3, 7, 9, 9, 3] is not valid
```
For a triangle as small as this example this can obviously be done via brute force.
I wrote a function like that, for a 20 row triangle it takes about 1 minuite.
I need a function that can do this for a 100 row triangle.
I found this code on <https://rosettacode.org/wiki/Maximum_triangle_path_sum#zkl> and it agrees with all the results my terrible function outputs for small triangles I've tried, and using %time in the console it can do the 100 line triangle in 0 ns so relatively quick.
```
def maxPathSum(rows):
return reduce(
lambda xs, ys: [
a + max(b, c) for (a, b, c) in zip(ys, xs, xs[1:])
],
reversed(rows[:-1]), rows[-1]
)
```
So I started taking bits of this, and using print statements and the console to work out what it was doing. I get that `reversed(rows[:-1]), rows[-1]` is reversing the triangle so that we can iterate from all possible final values on the last row through the sums of their possible paths to get to that value, and that as a,b,c iterate: a is a number from the bottom row, b is the second from bottom row, c is the third from bottom row. And as they iterate I think `a + max(b,c)` seems to sum a with the greatest number on b or c, but when I try to find the max of either two lists or a nested list in the console the list returned seems completely arbitrary.
```
ys = t[-1]
xs = list(reversed(t[:-1]))
for (a, b, c) in zip(ys, xs, xs[1:]):
print(b)
print(c)
print(max(b,c))
print("")
```
prints
```
[9, 11, 5, 2, 9]
[4, 9, 2, 0]
[9, 11, 5, 2, 9]
[4, 9, 2, 0]
[8, 14, 7]
[8, 14, 7]
[8, 14, 7]
[3, 6]
[8, 14, 7]
[3, 6]
[5]
[5]
```
If max(b,c) returned the list containing max(max(b),max(c)) then b = [3, 6], c = [5] would return b, so not that. If max(b,c) returned the list with the greatest sum, max(sum(b),sum(c)), then the same example contradicts it. It doesn't return the list containing the minimum value or the one with the greatest mean, so my only guess is that the fact that I set `xs = list(reversed(t[:-1]))` is the problem and that it works fine if it's an iterator inside the lambda function but not in the console.
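In fact, `max` on two lists compares them lexicographically (element by element, first difference wins), which matches every pair printed above. Inside the real `reduce`, though, `zip` pairs up individual numbers from the rows, so `max(b, c)` there compares plain ints:

```python
# max() on lists compares them lexicographically: element by element,
# the first differing position decides which list is "greater".
print(max([3, 6], [5]))                      # [5], because 3 < 5
print(max([9, 11, 5, 2, 9], [4, 9, 2, 0]))   # [9, 11, 5, 2, 9], because 9 > 4
```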
Also trying to find `a + max (b,c)` gives me this error, which makes sense.
```
TypeError: unsupported operand type(s) for +: 'int' and 'list'
```
My best guess is again that the different definition of xs as a list is the problem. If true I would like to know how this all works in the context of being iterators in the lambda function. I think I get what reduce() and zip() are doing, so mostly just the lambda function is what's confusing me.
Thanks in advance for any help | 2021/02/14 | [
"https://Stackoverflow.com/questions/66196791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15208320/"
] | We can simplify the expression a bit by including all the rows in the second argument to reduce - there's no reason to pass the last row as third parameter (the starting value) of `reduce`.
Then, it really helps to give your variables meaningful names, which the original code badly fails to do.
So, this becomes:
```
from functools import reduce
def maxPathSum(rows):
return reduce(
lambda sums, upper_row: [cell + max(sum_left, sum_right)
for (cell, sum_left, sum_right)
in zip(upper_row, sums, sums[1:])],
reversed(rows)
)
```
On the first iteration, `sums` will be the last row, and `upper_row` the one over it.
The lambda will calculate the best possible sums by adding each value of the upper row with the largest value of `sums` to its left or right.
It zips the upper row with the sums (the last sum won't be used, as there is one too many) and with the sums shifted by one value. So, zip provides us with a triplet (value from the upper row (`cell`), sum underneath to its left (`sum_left`), sum underneath to its right (`sum_right`)). The best possible sum at this point is our current cell + the largest of these sums.
The lambda returns this new row of sums, which will be used as the first parameter of reduce (`sums`) on the next iteration, while `upper_row` becomes the next row in `reversed(rows)`.
In the end, `reduce` returns the last row of sums, which contains only one value, our best possible total:
```
[53]
``` | you can spell out the lambda function so it can print. does this help you understand?
```
t = [[5],[3, 6],[8, 14, 7],[4, 9, 2, 0],[9, 11, 5, 2, 9],[1, 3, 8, 5, 3, 2]]
def g( xs, ys):
ans=[a + max(b, c) for (a, b, c) in zip(ys, xs, xs[1:])]
print(ans)
return ans
def maxPathSum(rows):
return reduce(
g,
reversed(rows[:-1]), rows[-1]
)
maxPathSum(t)
``` | 96 |
2,291,176 | I need to arrange some kind of encryption for generating user-specific links. Users will be clicking this link, and at some other view the link with the encrypted string will be decrypted and the result returned.
For this, I need some kind of encryption function that consumes a number(or a string) that is the primary key of my selected item that is bound to the user account, also consuming some kind of seed and generating encryption code that will be decrypted at some other page.
so something like this
```
my_items_pk = 36 #primary key of an item
seed = "rsdjk324j23423j4j2" #some string for crypting
encrypted_string = encrypt(my_items_pk,seed)
#generates some crypted string such as "dsaj2j213jasas452k41k"
and at another page:
decrypt_input = encrypt(decypt,seed)
print decrypt_input
#gives 36
```
I want my "seed" to be some kind of primary variable (not some class) for this purpose (ie some number or string).
How can I achieve this under python and django ? | 2010/02/18 | [
"https://Stackoverflow.com/questions/2291176",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/151937/"
] | There are no encryption algorithms, per se, built in to Python. However, you might want to look at the [Python Cryptography Toolkit](http://www.dlitz.net/software/pycrypto/) (PyCrypt). I've only tinkered with it, but it's referenced in Python's documentation on [cryptographic services](http://docs.python.org/library/crypto.html). Here's an example of how you could encrypt a string with AES using PyCrypt:
```
from Crypto.Cipher import AES
from urllib import quote
# Note that for AES the key length must be either 16, 24, or 32 bytes
encryption_obj = AES.new('abcdefghijklmnop')
plain = "Testing"
# The plaintext must be a multiple of 16 bytes (for AES), so here we pad it
# with spaces if necessary.
mismatch = len(plain) % 16
if mismatch != 0:
padding = (16 - mismatch) * ' '
plain += padding
ciph = encryption_obj.encrypt(plain)
# Finally, to make the encrypted string safe to use in a URL we quote it
quoted_ciph = quote(ciph)
```
You would then make this part of your URL, perhaps as part of a GET request.
To decrypt, just reverse the process; assuming that `encryption_obj` is created as above, and that you've retrieved the relevant part of the URL, this would do it:
```
from urllib import unquote
# We've already created encryption_object as shown above
ciph = unquote(quoted_ciph)
plain = encryption_obj.decrypt(ciph)
```
You also might consider a different approach: one simple method would be to hash the primary key (with a salt, if you wish) and store the hash and pk in your database. Give the user the hash as part of their link, and when they return and present the hash, look up the corresponding pk and return the appropriate object. (If you want to go this route, check out the built-in library [hashlib](http://docs.python.org/library/hashlib.html#module-hashlib).)
As an example, you'd have something like this defined in models.py:
```
class Pk_lookup(models.Model):
# since we're using sha256, set the max_length of this field to 32
hashed_pk = models.CharField(primary_key=True, max_length=32)
key = models.IntegerField()
```
And you'd generate the hash in a view using something like the following:
```
import hashlib
from models import Pk_lookup  # assuming the model lives in your app's models.py
hash = hashlib.sha256()
hash.update(str(pk)) # pk has been defined previously
pk_digest = hash.digest()
lookup = Pk_lookup(hashed_pk=pk_digest,key=pk)
lookup.save()
```
Note that you'd have to quote this version as well; if you prefer, you can use `hexdigest()` instead of `digest` (you wouldn't have to quote the resulting string), but you'll have to adjust the length of the field to 64. | Django has features for this now. See <https://docs.djangoproject.com/en/dev/topics/signing/>
Quoting that page:
"Django provides both a low-level API for signing values and a high-level API for setting and reading signed cookies, one of the most common uses of signing in Web applications.
You may also find signing useful for the following:
* Generating “recover my account” URLs for sending to users who have lost their password.
* Ensuring data stored in hidden form fields has not been tampered with.
* Generating one-time secret URLs for allowing temporary access to a protected resource, for - example a downloadable file that a user has paid for." | 97 |
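The sign/unsign round-trip that `django.core.signing` provides can be sketched with just the standard library (an illustration of the concept, not Django's actual implementation; the secret here is a placeholder):

```python
import base64
import hashlib
import hmac

SECRET = b"rsdjk324j23423j4j2"  # placeholder server-side secret

def sign(value):
    """Append an HMAC tag so the value can't be tampered with."""
    mac = hmac.new(SECRET, value.encode(), hashlib.sha256).digest()
    tag = base64.urlsafe_b64encode(mac).decode().rstrip("=")
    return value + ":" + tag

def unsign(token):
    """Verify the tag and give back the original value, e.g. a primary key."""
    value, _tag = token.rsplit(":", 1)
    if not hmac.compare_digest(sign(value), token):
        raise ValueError("signature does not match")
    return value

token = sign("36")
print(unsign(token))  # '36'
```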
11,632,154 | In python if I have two dictionaries, specifically Counter objects that look like so
```
c1 = Counter({'item1': 4, 'item2':2, 'item3': 5, 'item4': 3})
c2 = Counter({'item1': 6, 'item2':2, 'item3': 1, 'item5': 9})
```
Can I combine these dictionaries so that the results is a dictionary of lists, as follows:
```
c3 = {'item1': [4,6], 'item2':[2,2], 'item3': [5,1], 'item4': [3], 'item5': [9]}
```
where each value is a list of all the values of the preceding dictionaries from the appropriate key, and where there are no matching keys between the two original dictionaries, a new kew is added that contains a one element list. | 2012/07/24 | [
"https://Stackoverflow.com/questions/11632154",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/801348/"
] | ```
from collections import Counter
c1 = Counter({'item1': 4, 'item2':2, 'item3': 5, 'item4': 3})
c2 = Counter({'item1': 6, 'item2':2, 'item3': 1, 'item5': 9})
c3 = {}
for c in (c1, c2):
for k,v in c.iteritems():
c3.setdefault(k, []).append(v)
```
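On Python 3, where `dict.iteritems()` no longer exists, the same pattern can be written with `items()` and a `defaultdict` (an equivalent sketch):

```python
from collections import Counter, defaultdict

c1 = Counter({'item1': 4, 'item2': 2, 'item3': 5, 'item4': 3})
c2 = Counter({'item1': 6, 'item2': 2, 'item3': 1, 'item5': 9})

c3 = defaultdict(list)
for c in (c1, c2):
    for k, v in c.items():   # .iteritems() only exists on Python 2
        c3[k].append(v)

print(dict(c3))
# {'item1': [4, 6], 'item2': [2, 2], 'item3': [5, 1], 'item4': [3], 'item5': [9]}
```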
`c3` is now: `{'item1': [4, 6], 'item2': [2, 2], 'item3': [5, 1], 'item4': [3], 'item5': [9]}` | Or with a list comprehension:
```
from collections import Counter
c1 = Counter({'item1': 4, 'item2':2, 'item3': 5, 'item4': 3})
c2 = Counter({'item1': 6, 'item2':2, 'item3': 1, 'item5': 9})
merged = {}
for k in set().union(c1, c2):
merged[k] = [d[k] for d in [c1, c2] if k in d]
>>> merged
{'item2': [2, 2], 'item3': [5, 1], 'item1': [4, 6], 'item4': [3], 'item5': [9]}
```
Explanation
-----------
1. Throw all keys that exist into an anonymous set. (It's a set => no duplicate keys)
2. For every key, do 3.
3. For every dictionary d in the list of dictionaries `[c1, c2]`
* Check whether the currently being processed key `k` exists
+ If true: include the expression `d[k]` in the resulting list
+ If not: proceed with next iteration
[Here](http://docs.python.org/tutorial/datastructures.html#list-comprehensions) is a detailed introduction to list comprehension with many examples. | 98 |
51,745,894 | I am new to using python, and am wanting to be able to install packages for python using pip. I am having trouble running pip on my windows computer. When typing in "pip --version" into command prompt I get:
```
ModuleNotFoundError: No module named 'pip._internal'; 'pip' is not a package
```
I have added the scripts folder to the PATH environment variable as shown on the picture in this link
[Environment variables photo](https://i.stack.imgur.com/lXiFz.png)
(Stack overflow does not allow embedded pictures if you are new)
This is the contents of my scripts directory where pip is present:
```
Directory of C:\Users\....\AppData\Local\Programs\Python\Python37-32\Scripts
[.] [..] easy_install-3.7.exe
easy_install.exe pip-script.py pip.exe
pip.exe.manifest pip3 pip3-script.py
pip3.7-script.py pip3.7.exe pip3.7.exe.manifest
pip3.exe pip3.exe.manifest wheel.exe
```
Any help on this would be appreciated | 2018/08/08 | [
"https://Stackoverflow.com/questions/51745894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6814024/"
] | Force a reinstall of pip:
```
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python3 get-pip.py --force-reinstall
```
For windows you may have to `choco install curl` or set PATH to where python3 is located | In cmd try using
`py -3.6 -m pip install pygame`
replace 3.6 with your version of Python, and add -32 for the 32-bit version:
```
py -3.6-32 -m pip install pygame
```
replace pygame with the module you want to install.
This works for most people using Python on Windows. Also reboot your PC after adding the system variable path. | 101 |
62,713,607 | I deployed an Azure Functions App with Python `3.8`. Later on I tried to use dataclasses and it failed with the exception that the version available does not support dataclasses. I then SSHed to the host of the Function App and by using `python --version` figured out that version `3.6` was actually installed. As dataclasses are available from `3.7` on it makes sense why this module can't be used.
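For context, the 3.7-only feature in question looks like this (a minimal made-up example):

```python
from dataclasses import dataclass  # stdlib since Python 3.7; ModuleNotFoundError on a stock 3.6

@dataclass
class Document:
    id: int
    title: str = "untitled"

print(Document(1))  # Document(id=1, title='untitled')
```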
But what can I do to actually have version `3.8` running on the Function App host? | 2020/07/03 | [
"https://Stackoverflow.com/questions/62713607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7009990/"
This is a known issue (see e.g. <https://learn.microsoft.com/en-us/answers/questions/39124/azure-functions-always-using-python-36.html>) and will hopefully be fixed soon.
As a workaround you can run the following command, e.g. in the Cloud Shell:
`az functionapp config set --name <func app name> --resource-group <rg name> --subscription <subscription id> --linux-fx-version "DOCKER|mcr.microsoft.com/azure-functions/python:3.0.13353-python3.8-appservice"`
After that you need to wait a while for the function app to become usable again. Additionally, I have found that the installed packages are gone afterwards, so you also need to republish your functions (with the necessary packages defined in `requirements.txt`). | For anyone running into this problem, downgrading to Python 3.6 is a workaround.
I tried @quervernetzt's solution but it didn't work; my pipelines started giving the following error.
```
##[error]Error: Error: Failed to deploy web package to App Service. Conflict (CODE: 409)
``` | 102 |
15,424,895 | I'm new here in the world of coding and I haven't received a very warm welcome. I've been trying to learn python via the online tutorial <http://learnpythonthehardway.org/book/>. I've been able to struggle my way through the book up until exercise 48 & 49. That's where he turns students loose and says "You figure it out." But I simply can't. I understand that I need to create a Lexicon of possible words and that I need to scan the user input to see if it matches anything in the Lexicon but that's about it! From what I can tell, I need to create a list called lexicon:
```
lexicon = [
('directions', 'north'),
('directions', 'south'),
('directions', 'east'),
('directions', 'west'),
('verbs', 'go'),
('verbs', 'stop'),
('verbs', 'look'),
('verbs', 'give'),
('stops', 'the'),
('stops', 'in'),
('stops', 'of'),
('stops', 'from'),
('stops', 'at')
]
```
Is that right? I don't know what to do next? I know that each item in the list is called a tuple, but that doesn't really mean anything to me. How do I take raw input and assign it to the tuple? You know what I mean? So in exercise 49 he imports the lexicon and just inside python prints lexicon.scan("input") and it returns the list of tuples so for example:
```
from ex48 import lexicon
>>> print lexicon.scan("go north")
[('verb', 'go'), ('direction', 'north')]
```
Is 'scan()' a predefined function or did he create the function within the lexicon module? I know that if you use 'split()' it creates a list with all of the words from the input but then how does it assign 'go' to the tuple ('verb', 'go')?
Am I just way off? I know I'm asking a lot but I searched around everywhere for hours and I can't figure this one out on my own. Please help! I will love you forever! | 2013/03/15 | [
"https://Stackoverflow.com/questions/15424895",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2172498/"
] | Based on the ex48 instructions, you could create a few lists for each kind of word. Here's a sample for the first test case. The returned value is a list of tuples, so you can append to that list for each word given.
```
direction = ['north', 'south', 'east', 'west', 'down', 'up', 'left', 'right', 'back']
class Lexicon:
def scan(self, sentence):
self.sentence = sentence
self.words = sentence.split()
stuff = []
for word in self.words:
if word in direction:
stuff.append(('direction', word))
return stuff
lexicon = Lexicon()
```
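Numbers can be folded into the same loop by attempting an `int()` conversion (a sketch of one possible extension; the `verbs` list is an assumption from the exercise):

```python
direction = ['north', 'south', 'east', 'west', 'down', 'up', 'left', 'right', 'back']
verbs = ['go', 'stop', 'kill', 'eat']

def scan(sentence):
    result = []
    for word in sentence.split():
        if word in direction:
            result.append(('direction', word))
        elif word in verbs:
            result.append(('verb', word))
        else:
            try:
                result.append(('number', int(word)))  # numbers become ints
            except ValueError:
                result.append(('error', word))        # anything unrecognised
    return result

print(scan("go north 30"))
# [('verb', 'go'), ('direction', 'north'), ('number', 30)]
```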
He notes that numbers and exceptions are handled differently. | Like most here, I am new to the world of coding, and I thought I'd attach my solution below as it might help other students.
I have already seen a few more efficient approaches that I could implement. However, the code handles every use case of the exercise, and since I wrote it on my own with my beginner's mind, it does not take complicated shortcuts and should be very easy for other beginners to understand.
I therefore thought it might be beneficial for someone else learning. Let me know what you think. Cheers!
```
class Lexicon(object):
def __init__(self):
self.sentence = []
self.dictionary = {
'north' : ('direction','north'),
'south' : ('direction','south'),
'east' : ('direction','east'),
'west' : ('direction','west'),
'down' : ('direction','down'),
'up' : ('direction','up'),
'left' : ('direction','left'),
'right' : ('direction','right'),
'back' : ('direction','back'),
'go' : ('verb','go'),
'stop' : ('verb','stop'),
'kill' : ('verb','kill'),
'eat' : ('verb', 'eat'),
'the' : ('stop','the'),
'in' : ('stop','in'),
'of' : ('stop','of'),
'from' : ('stop','from'),
'at' : ('stop','at'),
'it' : ('stop','it'),
'door' : ('noun','door'),
'bear' : ('noun','bear'),
'princess' : ('noun','princess'),
'cabinet' : ('noun','cabinet'),
}
def scan(self, input):
        loaded_input = input.split()
        self.sentence.clear()
        for item in loaded_input:
try:
int(item)
number = ('number', int(item))
self.sentence.append(number)
except ValueError:
word = self.dictionary.get(item.lower(), ('error', item))
self.sentence.append(word)
return self.sentence
lexicon = Lexicon()
``` | 103 |
12,424,351 | I want to run a shell command from python and receive its output with subprocess.Popen. The problem is, when I close the process, sending Ctrl-C, I don't get any output. What am I doing wrong? Code:
```
>>> import subprocess
>>> sub = subprocess.Popen(["xinput", "test", "8"], stdout=subprocess.PIPE) #receive mouse events
>>> output = sub.communicate()[0].read()
^CTraceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.6/subprocess.py", line 693, in communicate
stdout = self.stdout.read()
KeyboardInterrupt
>>> output
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'output' is not defined
```
Inspired by this post by Jett:
[Reading stdout from xinput test in python](https://stackoverflow.com/questions/12420999/reading-stdout-from-xinput-test-in-python) | 2012/09/14 | [
"https://Stackoverflow.com/questions/12424351",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1360544/"
] | The issue here is that the `KeyboardInterrupt` is sent during the call to `communicate`. As a result, `communicate` never returns and so it's output is never stored in the variable `output` and you get the `NameError` when you try to use it. One workaround would be the following:
```
import subprocess
sub = subprocess.Popen(["xinput", "test", "8"], stdout=subprocess.PIPE)
lines = [] #Need someplace to store the data as it comes
try:
for line in sub.stdout: #read one line from standard out, store it in lines
lines.append(line)
except KeyboardInterrupt: #keyboardInterrupt happened. Stop process
sub.terminate()
finally: #Join our lines into a single buffer (like `communicate`)
output = ''.join(lines)
del lines #clean `lines` out of our namespace (just because).
``` | @pythonm already explained the `NameError`.
Furthermore, you're using the output of `Popen.communicate()` conceptually wrong. It returns a 2-tuple of strings: `(stdout, stderr)`. It does not return two file-like objects. That's why your `sub.communicate()[0].read()` would fail if `communicate()` returned.
Until the subprocess returns, `communicate()` aggregates all of its stdout and stderr (considering that you provided `stdout=subprocess.PIPE` and `stderr=subprocess.PIPE` to the constructor). Only after the subprocess has terminated, you have access to what `communicate()` collected during the runtime of the subprocess.
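For a process that exits on its own, the round trip looks like this (a small sketch; `echo` is assumed available, as on POSIX systems):

```python
import subprocess

# a short-lived process that terminates by itself
sub = subprocess.Popen(["echo", "hello"],
                       stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = sub.communicate()   # a 2-tuple of byte strings, not file objects
print(out)   # b'hello\n'
print(err)   # b''
```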
If you would like to monitor a subprocess' output in real time, then `communicate()` is the wrong method. Run the subprocess, monitor it (within for example a loop) and interact with its `Popen.stdout` and `Popen.stderr` attributes (which are file-like objects then). @mgilson's answer shows you one way how to do it :) | 113 |
65,495,956 | I have searched far and wide, and have followed just about everything... I cannot figure out why this keeps happening to my Python package I've created. It's not a simple "install dependency and you're good" as it's my own project I am attempting to create.
Here's my file structure:
```
-jarvis-discord
--jarvis_discord_bot
---__init__.py
---jarvis.py
---config.py
---cogs
----__init__.py
----all the cogs are here
```
The error given:
```
++ PWD
line 3: PWD: command not found
export PYTHONPATH=
PYTHONPATH=
python3 jarvis_discord_bot/jarvis.py
Traceback (most recent call last):
File "/buddy/jarvis-discord/jarvis_discord_bot/jarvis.py", line 40, in <module>
from jarvis_discord_bot.cogs import (
ModuleNotFoundError: No module named 'jarvis_discord_bot'
```
I've tried creating a `pipenv` as well and have had no luck either. Same error as above. There's something wrong with how I'm setting up my Python environment... granted I'm also a newbie.
The weird thing, to top this all off, is that it runs locally on my own machine just fine. So I am at a complete and utter loss for what to do and could use some help and direction on where to go from here.
Thanks! | 2020/12/29 | [
"https://Stackoverflow.com/questions/65495956",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13002900/"
] | If you are using relative file paths, you have to use
`from .cogs import (`
because jarvis.py can't see jarvis\_discord\_bot from one level down.
The leading . in front of cogs makes it a relative import, resolved against the package that jarvis.py lives in. | Figured out what was the issue!
In my run file, I had to set `PYTHONPATH` from `PWD` to the actual folder of the project. Good luck to anyone reading this in the future! | 114 |
50,151,698 | i have two table like this:
```
table1
id(int) | desc(TEXT)
--------------------
0 | "desc1"
1 | "desc2"
table2
id(int) | table1_id(TEXT)
------------------------
0 | "0"
1 | "0;1"
```
i want to select data into table2 and replace table1\_id by the desc field in table1, when i have string with ';' separator it means i have multiple selections.
im able to do it for single selection like this
```
SELECT table1.desc
FROM table2 LEFT JOIN table1 ON table1.id = CAST(table2.table1_id as integer);
```
Output wanted with a SELECT on table2 where id = 1:
```
"desc"
------
"desc1, desc2"
```
Im using Postgresql10, python3.5 and sqlalchemy
I know how to do it by extracting data and processing it with python then query again but im looking for a way to do it with one SQL query.
PS: I cant modify the table2. | 2018/05/03 | [
"https://Stackoverflow.com/questions/50151698",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5494686/"
] | You can convert the CSV value into an array, then join on that:
```
select string_agg(t1.descr, ',') as descr
from table2 t2
join table1 t1 on t1.id = any (string_to_array(t2.table1_id, ';')::int[])
where t2.id = 1
``` | That is really an abominable data design.
Consequently you will have to write a complicated query to get your desired result:
```
SELECT string_agg(table1."desc", ', ')
FROM table2
CROSS JOIN LATERAL regexp_split_to_table(table2.table1_id, ';') x(d)
JOIN table1 ON x.d::integer = table1.id
WHERE table2.id = 1;
string_agg
--------------
desc1, desc2
(1 row)
``` | 115 |
64,791,458 | Here is my docker-compose.yml used to create the database container.
```
version: '3.7'
services:
application:
build:
context: ./app
dockerfile: dockerfile #dockerfile-prod
depends_on:
- database_mongo
- database_neo4j
- etl_pipeline
environment:
- flask_env=dev #flask_env=prod
volumes:
- ./app:/app
ports:
- "8080:8080" #- 8080:8080
database_mongo:
image: "mongo:4.2"
expose:
- 27017
volumes:
- ./data/database/mongo:/data/db
database_neo4j:
image: neo4j:latest
expose:
- 27018
volumes:
- ./data/database/neo4j:/data
ports:
- "7474:7474" # web client
- "7687:7687" # DB default port
environment:
- NEO4J_AUTH=none
etl_pipeline:
depends_on:
- database_mongo
- database_neo4j
build:
context: ./data/etl
dockerfile: dockerfile #dockerfile-prod
volumes:
- ./data/:/data/
- ./data/etl:/app/
```
I'm trying to connect to my neo4j database with python driver. I have already been able to connect to mongoDb with this line:
```
mongo_client = MongoClient(host="database_mongo")
```
I'm trying to do something similar to the mongoDb to connect to my neo4j with the GraphDatabase in neo4j like this:
```
url = "{scheme}://{host_name}:{port}".format(scheme = "bolt", host_name="database_neo4j", port = 7687)
baseNeo4j = GraphDatabase.driver(url, encrypted=False)
```
or with py2neo like this
```
neo_client = Graph(host="database_neo4j")
```
However, nothing of this has worked yet and so I'm not sure if I'm using the right syntax in order to use neo4j with docker. I've tried many things and looked around, but couldn't find the answer...
The whole error message is:
```
etl_pipeline_1 | MongoClient(host=['database_mongo:27017'], document_class=dict, tz_aware=False, connect=True)
etl_pipeline_1 | Traceback (most recent call last):
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 929, in _connect
etl_pipeline_1 | s.connect(resolved_address)
etl_pipeline_1 | ConnectionRefusedError: [Errno 111] Connection refused
etl_pipeline_1 |
etl_pipeline_1 | During handling of the above exception, another exception occurred:
etl_pipeline_1 |
etl_pipeline_1 | Traceback (most recent call last):
etl_pipeline_1 | File "main.py", line 26, in <module>
etl_pipeline_1 | baseNeo4j = GraphDatabase.driver(url, encrypted=False)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 183, in driver
etl_pipeline_1 | return cls.bolt_driver(parsed.netloc, auth=auth, **config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 196, in bolt_driver
etl_pipeline_1 | return BoltDriver.open(target, auth=auth, **config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 359, in open
etl_pipeline_1 | pool = BoltPool.open(address, auth=auth, pool_config=pool_config, workspace_config=default_workspace_config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 531, in open
etl_pipeline_1 | seeds = [pool.acquire() for _ in range(pool_config.init_size)]
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 531, in <listcomp>
etl_pipeline_1 | seeds = [pool.acquire() for _ in range(pool_config.init_size)]
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 545, in acquire
etl_pipeline_1 | return self._acquire(self.address, timeout)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 409, in _acquire
etl_pipeline_1 | connection = self.opener(address, timeout)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 528, in opener
etl_pipeline_1 | return Bolt.open(addr, auth=auth, timeout=timeout, routing_context=routing_context, **pool_config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 198, in open
etl_pipeline_1 | keep_alive=pool_config.keep_alive,
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 1049, in connect
etl_pipeline_1 | raise last_error
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 1039, in connect
etl_pipeline_1 | s = _connect(resolved_address, timeout, keep_alive)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 943, in _connect
etl_pipeline_1 | raise ServiceUnavailable("Failed to establish connection to {!r} (reason {})".format(resolved_address, error))
etl_pipeline_1 | neo4j.exceptions.ServiceUnavailable: Failed to establish connection to IPv4Address(('172.29.0.2', 7687)) (reason [Errno 111] Connection refused)
``` | 2020/11/11 | [
"https://Stackoverflow.com/questions/64791458",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14620901/"
] | Languages usually implement functionality as simply as possible.
Under the hood, class methods are just plain functions that take the object pointer as an argument; an object, in turn, is just a data structure plus functions that can operate on that data structure.
Normally the compiler knows which function should operate on the object.
However, with polymorphism a function may be overridden.
Then the compiler doesn't know the concrete type of the object; it may be Derived1 or Derived2.
In that case the compiler adds a VTable pointer to the object, referencing a table of function pointers for the functions that could have been overridden.
For those overridable methods the program then does a lookup in this table at run time to see which function should be executed.
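The run-time lookup can be mimicked in any language; here is a toy Python sketch of "object = data + pointer to a table of function pointers" (purely illustrative; real C++ vtables are laid out by the compiler):

```python
def base_speak(obj):
    return "base"

def derived_speak(obj):
    return "derived"

# each "class" is a vtable: a mapping from method name to function pointer
base_vtable = {"speak": base_speak}
derived_vtable = {**base_vtable, "speak": derived_speak}  # override the slot

obj = {"vtable": derived_vtable}       # the object carries a vtable pointer
print(obj["vtable"]["speak"](obj))     # derived  (resolved at run time)
```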
You can see how it can be implemented by seeing how polymorphism can be implemented in C:
[How can I simulate OO-style polymorphism in C?](https://stackoverflow.com/questions/524033/how-can-i-simulate-oo-style-polymorphism-in-c) | No, it does not. Functions are class-wide. When you allocate an object in C++, it contains space for all its attributes plus, if the class has virtual functions, a pointer to its class's VTable, which holds pointers to those virtual methods, whether defined in its own class or inherited from parent classes.
When you call a virtual method on that object, you essentially perform a lookup in that VTable and the appropriate method is called. | 116 |
45,155,336 | I am running Ubuntu Desktop 16.04 on a VM and am trying to run [Volttron](https://github.com/VOLTTRON/volttron) using the standard install instructions, however I keep getting an error after the following steps:
```
sudo apt-get update
sudo apt-get install build-essential python-dev openssl libssl-dev libevent-dev git
git clone https://github.com/VOLTTRON/volttron
cd volttron
python bootstrap.py
```
My problem is with the last step `python bootstrap.py`. As soon as I get to this step, I get the error `bootstrap.py: error: refusing to run as root to prevent potential damage.` from my terminal window.
Has anyone else encountered this problem? Thoughts? | 2017/07/17 | [
"https://Stackoverflow.com/questions/45155336",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8322226/"
] | I would recommend passing in the name of the value you would like to update into the handle change function, for example:
```
import React, { Component } from 'react'
import { Dropdown, Grid } from 'semantic-ui-react'
class DropdownExampleRemote extends Component {
componentWillMount() {
this.setState({
optionsMembers: [
{ key: 1, text: 'DAILY', value: 'DAILY' },
{ key: 2, text: 'MONTHLY', value: 'MONTHLY' },
{ key: 3, text: 'WEEKLY', value: 'WEEKLY' },
],
optionsDays: [
{ key: 1, text: 'SUNDAY', value: 'SUNDAY' },
{ key: 2, text: 'MONDAY', value: 'MONDAY' },
{ key: 3, text: 'TUESDAY', value: 'TUESDAY' },
],
value: '',
member: '',
day: '',
})
}
handleChange = (value, key) => {
this.setState({ [key]: value });
}
render() {
const {optionsMembers, optionsDays, value, member, day } = this.state
return (
<Grid>
<Grid.Column width={6}>
<Dropdown
selection
options={optionsMembers}
value={member}
placeholder='Select Member'
onChange={(e,{value})=>this.handleChange(value, 'member')}
/>
</Grid.Column>
<Grid.Column width={6}>
<Dropdown
selection
options={optionsDays}
value={day}
placeholder='Select Day'
onChange={(e,{value})=>this.handleChange(value, 'day')}
/>
</Grid.Column>
<Grid.Column width={4}>
<div>{member}</div>
<div>{day}</div>
</Grid.Column>
</Grid>
)
}
}
export default DropdownExampleRemote
``` | Something along these lines can maybe work for you.
```
handleChange = (propName, e) => {
  let state = Object.assign({}, this.state);
state[propName] = e.target.value;
this.setState(state)
}
```
You can pass in the name of the property you want to update and then use bracket notation to update that part of your state.
Hope this helps. | 117 |
53,435,428 | After reading all the existing post related to this issue, i still did not manage to fix it.
```
ModuleNotFoundError: No module named 'plotly'
```
I have tried all the following:
```
pip3 install plotly
pip3 install plotly --upgrade
```
as well as uninstalling plotly with:
```
pip3 uninstall plotly
```
And reinstalling it again, i get the following on terminal:
```
Requirement already satisfied, skipping upgrade: six in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.11.0)
Requirement already satisfied, skipping upgrade: nbformat>=4.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: retrying>=1.3.3 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.3.3)
Requirement already satisfied, skipping upgrade: urllib3<1.25,>=1.21.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (1.24.1)
Requirement already satisfied, skipping upgrade: idna<2.8,>=2.5 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2.7)
Requirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (3.0.4)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2018.10.15)
Requirement already satisfied, skipping upgrade: jsonschema!=2.5.0,>=2.4 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (2.6.0)
Requirement already satisfied, skipping upgrade: jupyter-core in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: traitlets>=4.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.3.2)
Requirement already satisfied, skipping upgrade: ipython-genutils in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (0.2.0)
import plotly
import plotly.plotly as py
```
yield:
```
ModuleNotFoundError: No module named 'plotly'
```
My versions of pip(3) and python(3) both seem to be fine.
May somebody please help?
Using Python3 on Atom 1.32.2 x64 | 2018/11/22 | [
"https://Stackoverflow.com/questions/53435428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10438271/"
] | Just run this to uninstall plotly and then build it from source. That should fix the import
```
pip uninstall plotly && python -m pip install plotly
``` | That sounds like a classic dependency issue.
* Check that your pip is tied to the same Python version (3.6) as the one you launch your script with (i.e. use `python3(.6)` to launch your script, not just `python`)
* Your logs don't show plotly itself as already installed (you probably dropped a line when pasting), but installing with `pip3.6 install -U plotly` should install the package if it isn't already there. | 118 |
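When pip reports "Requirement already satisfied" but the import still fails, the pip and the interpreter are usually two different installations. A quick way to check is sketched below (plain Python, nothing plotly-specific):

```python
import site
import sys

# The interpreter that actually runs your scripts; compare this path with
# the one reported by `pip3 --version` / `pip3 show plotly`.
print("interpreter:", sys.executable)
print("version:", tuple(sys.version_info[:3]))

# Directories this interpreter searches for third-party packages; if
# plotly was installed somewhere else, the import will fail here.
for directory in site.getsitepackages():
    print("site-packages:", directory)
```

If the interpreter path printed here differs from the one pip installs for, running `python3 -m pip install plotly` (so pip and the interpreter are guaranteed to match) usually resolves it.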
73,646,583 | In short, is there a pythonic way to write `SETTING_A = os.environ['SETTING_A']`?
I want to provide a module `environment.py` from which I can import constants that are read from environment variables.
##### Approach 1:
```
import os
try:
SETTING_A = os.environ['SETTING_A']
SETTING_B = os.environ['SETTING_B']
SETTING_C = os.environ['SETTING_C']
except KeyError as e:
raise EnvironmentError(f'env var {e} is not defined')
```
##### Approach 2
```
import os
vs = ('SETTING_A', 'SETTING_B', 'SETTING_C')
try:
for v in vs:
locals()[v] = os.environ[v]
except KeyError as e:
raise EnvironmentError(f'env var {e} is not defined')
```
Approach 1 repeats the names of the variables; approach 2 manipulates `locals()`, and it is harder to see which constants will be importable from the module.
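For illustration, a third pattern keeps the explicit assignments of Approach 1 but factors the error handling into a small helper (the name `require_env` and the demo variable are my own, not from the question):

```python
import os

def require_env(name):
    """Return a required environment variable or fail with a clear error."""
    try:
        return os.environ[name]
    except KeyError:
        raise EnvironmentError(f'env var {name} is not defined')

# Demo only: set a variable so the lookup below succeeds.
os.environ['DEMO_SETTING_A'] = 'a'

DEMO_SETTING_A = require_env('DEMO_SETTING_A')
print(DEMO_SETTING_A)
```

Each constant is still written exactly once, so readers and static tools can see what the module exports, while the error message stays in one place.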
Is there a best practice to this problem? | 2022/09/08 | [
"https://Stackoverflow.com/questions/73646583",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10909217/"
] | You should describe the type of PersonDto:
```js
interface PersonDto {
id: string;
name: string;
country: string;
}
class Person {
private id: string;
private name: string;
private country: string;
constructor(personDto: PersonDto) {
this.id = personDto.id;
this.name = personDto.name;
this.country = personDto.country;
}
}
const data = {
"id": "1234fc8-33aa-4a39-9625-b435479e6328",
"name": "02_Aug 10:00",
"country": "UK"
};
const person = new Person(data);
console.log(person);
```
In case you are sure that all PersonDto properties are strings, you can simplify the type description:
`type PersonDto = { [key: string]: string };` | Try [`Object.assign`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/assign) to not have to type every property.
```typescript
interface PersonDto {
id: string;
name: string;
country: string;
}
class Person {
private id: string;
private name: string;
private country: string;
constructor(personDto: PersonDto) {
Object.assign(this, personDto);
}
}
const data = {
id: "1234fc8-33aa-4a39-9625-b435479e6328",
name: "02_Aug 10:00",
country: "UK"
};
const person = new Person(data);
console.log(person);
``` | 128 |
21,890,220 | I tried the multiplication 109221975\*123222821 at the Python 2.7 prompt in two different ways
```
Python 2.7.3 (default, Sep 26 2013, 20:08:41)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 109221975*123222821
13458639874691475L
>>> 109221975*123222821.0
1.3458639874691476e+16
>>> int(109221975.0*123222821.0)
13458639874691476L
>>> 109221975*123222821 == int(109221975.0*123222821.0)
False
>>>
```
What I suspect here is that some precision inconsistency is causing this problem. Is it possible to say when an inconsistency like this can happen? | 2014/02/19 | [
"https://Stackoverflow.com/questions/21890220",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1955093/"
] | Your `int` is 54 bits long, but a `float` can hold only 53 significant bits, so effectively the last bit is rounded to even.
Internally, your float is represented as:
>
> 2225720309975242\*2-1
>
>
>
Your `int` and `float` are stored in binary as follows:
```
101111110100001000111111001000111001000001000110010011
0 10000110100 0111111010000100011111100100011100100000100011001010
```
For `float`, the first part is the **sign**, the second is the **exponent**, and the third is the **significand**. Because space is allocated for an exponent, there isn't enough room left over for the significant digits
From how I aligned the two representations, you can see that the data is the same, but the `int` needs one extra bit on the right, while the `float` uses more (in this case wasted) space on the left | Because `int` in Python has infinite precision, but `float` does not. (`float` is a double-precision floating point number, which has 53 bits of precision.) | 129 |
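The 53-bit limit described in both answers can be verified directly (Python 3 syntax; the product is the one from the question):

```python
import sys

n = 109221975 * 123222821       # exact integer arithmetic
print(n)                        # 13458639874691475
print(n.bit_length())           # 54: one bit more than a double's mantissa
print(sys.float_info.mant_dig)  # 53 significant bits in a C double

# Converting to float rounds to the nearest representable value; the exact
# halfway case here is broken by rounding to the even mantissa.
print(int(float(n)))            # 13458639874691476
print(float(n) == float(n + 1)) # True: both map to the same double
```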
66,395,018 | I am new to Python. At the moment I am coding a game with a friend. We are currently working on a combat system; the only problem is we don't know how to update the enemy's health once damage has been dealt. The code is as follows:
```
enemy1_health = 150
broadsword_attack = 20
rusty_knife = 10.5
attacks = ["broadsword swing " + str(broadsword_attack), "rusty knife jab " + str(rusty_knife) ]
while enemy1_health > 0:
while player_health > 0:
enemy1_health = 150
print(attacks)
attackchoice = input("Choose an attack: ")
if attackchoice == ("broadsword swing"):
print (int(enemy1_health - 20))
if attackchoice == ("rusty knife jab"):
print (int(enemy1_health - 10.5))
print("you died")
quit()
print("you cleared the level")```
``` | 2021/02/27 | [
"https://Stackoverflow.com/questions/66395018",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15293735/"
] | You need to change the enemy health outside of the print statement with a statement like this:
```
enemy1_health = enemy1_health - 20
```
or like this, which does the same thing:
```
enemy1_health -= 20
```
You also reset enemy1\_health every time the loop loops, remove that.
You don't define player\_health, define that.
Your loop goes forever until you die.
So your code should end up looking more like this:
```
enemy1_health = 150
broadsword_attack = 20
rusty_knife = 10.5
player_health = 100
attacks = ["broadsword swing " + str(broadsword_attack), "rusty knife jab " + str(rusty_knife)]
while enemy1_health > 0:
print(attacks)
attackchoice = input("Choose an attack: ")
if attackchoice == ("broadsword swing"):
enemy1_health -= 20
if attackchoice == ("rusty knife jab"):
enemy1_health -= 10.5
print(enemy1_health)
if player_health <= 0:
print("you died")
quit()
print("you cleared the level")
```
This still requires quite a bit of tweaking; it'd be a complete working game if it were like this (basically, you win if you spam broadsword attacks because they do more damage):
```
enemy1_health = 150
enemy1_attack = 10
player_health = 100
broadsword_attack = 20
rusty_knife = 10.5
attacks = ["broadsword swing " + str(broadsword_attack), "rusty knife jab " + str(rusty_knife)]
while enemy1_health > 0:
print(attacks)
attackchoice = input("Choose an attack: ")
if attackchoice == ("broadsword swing"):
enemy1_health -= broadsword_attack
if attackchoice == ("rusty knife jab"):
enemy1_health -= rusty_knife
print(f'A hit! The enemy has {enemy1_health} health left.')
if enemy1_health > 0:
player_health -= enemy1_attack
print(f'The enemy attacks and leaves you with {player_health} health.')
if player_health <= 0:
print("you died")
quit()
print("you cleared the level")
``` | You need to change the enemy health outside the print statement.
do:
```
if attackchoice == ("rusty knife jab"):
enemy1_health = enemy1_health - 10.5
print(enemy1_health)
```
and you can do the same for the other attacks.
You also have enemy health defined inside the while loop; you need to define it outside of the loop. | 131 |
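A slightly more compact version of the same fix, using a dict so that each new attack needs no extra if-branch (this layout is my own suggestion, not from either answer):

```python
# Map each attack name to its damage value.
attacks = {
    "broadsword swing": 20,
    "rusty knife jab": 10.5,
}

def apply_attack(enemy_health, choice):
    """Return the enemy's health after one attack; unknown names do nothing."""
    return enemy_health - attacks.get(choice, 0)

enemy1_health = 150
enemy1_health = apply_attack(enemy1_health, "broadsword swing")
print(enemy1_health)  # 130
enemy1_health = apply_attack(enemy1_health, "rusty knife jab")
print(enemy1_health)  # 119.5
```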
44,659,242 | During development of Pylint, we encountered an [interesting problem related to a non-dependency that may break the `pylint` package](https://github.com/PyCQA/pylint/issues/1318).
The case is as follows:
* `python-future` had a conflicting alias to the `configparser` package. [Quoting official docs](http://python-future.org/whatsnew.html#what-s-new-in-version-0-16-0-2016-10-27):
>
> This release removes the configparser package as an alias for ConfigParser on Py2 to improve compatibility with Lukasz Langa’s backported configparser package. Previously python-future and the configparser backport clashed, causing various compatibility issues. (Issues #118, #181)
>
>
>
* `python-future` itself **is not** a dependency of Pylint
What would be a standard way to enforce an *if python-future is present, force it to 0.16 or later* limitation? I want to avoid defining the dependency as `future>=0.16`; by doing this I'd force users to install a package that they don't need and won't use in the general case. | 2017/06/20 | [
"https://Stackoverflow.com/questions/44659242",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2912340/"
] | ```
kw = {}
try:
import future
except ImportError:
pass
else:
kw['install_requires'] = ['future>=0.16']
setup(
…
**kw
)
``` | One workaround for this issue is to define this requirement only for the `all` extras target, so futures will be installed/upgraded only if someone adds `pylint[all]>=1.2.3` as a requirement.
At this moment I don't know of another way to "ignore or upgrade" a dependency.
Also, I would avoid adding Python code to `setup.py` in order to make it "smart"; that is a well-known distribution anti-pattern ;) | 133 |
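A third option, sketched here purely as an illustration (this is not pylint's actual mechanism, and the helper names are mine): check the conflicting package at runtime instead of at install time.

```python
def parse_version(text):
    """Turn '0.16.0' into (0, 16, 0); a non-numeric part ends the tuple."""
    parts = []
    for piece in text.split('.'):
        if not piece.isdigit():
            break
        parts.append(int(piece))
    return tuple(parts)

def future_is_compatible():
    """True if python-future is absent, or installed at version >= 0.16."""
    try:
        import future  # optional dependency, may be missing
    except ImportError:
        return True
    return parse_version(getattr(future, '__version__', '0')) >= (0, 16)

print(parse_version('0.16.0'))
print(future_is_compatible())
```

This keeps `future` out of the install requirements entirely while still letting the package warn about (or refuse) known-bad versions.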
37,369,079 | I have a lab colorspace
[![enter image description here](https://i.stack.imgur.com/3pXgm.png)](https://i.stack.imgur.com/3pXgm.png)
And I want to "bin" the colorspace in a grid of 10x10 squares.
So the first bin might be (-110,-110) to (-100,-100) then the next one might be (-100,-110) to (-90,-100) and so on. These bins could be bin 1 and bin 2
I have seen np.digitize() but it appears that you have to pass it 1-dimensional bins.
A rudimentary approach that I have tried is this:
```
for fn in filenames:
image = color.rgb2lab(io.imread(fn))
ab = image[:,:,1:]
width,height,d = ab.shape
reshaped_ab = np.reshape(ab,(width*height,d))
print reshaped_ab.shape
images.append(reshaped_ab)
all_abs = np.vstack(images)
all_abs = shuffle(all_abs,random_state=0)
df = pd.DataFrame(all_abs[:3000],columns=["a","b"])
top_a,top_b = df.max()
bottom_a,bottom_b = df.min()
range_a = top_a-bottom_a
range_b = top_b-bottom_b
corner_a = bottom_a
corner_b = bottom_b
bins = []
for i in xrange(int(range_a/10)):
for j in xrange(int(range_b/10)):
bins.append([corner_a,corner_b,corner_a+10,corner_b+10])
corner_b = bottom_b+10
corner_a = corner_a+10
```
but the "bins" that results seem kinda sketchy. For one thing there are many empty bins as the color space does have values in a square arrangement and that code pretty much just boxes off from the max and min values. Additionally, the rounding might cause issues. I am wondering if there is a better way to do this? I have heard of color histograms which count the values in each "bin". I don't need the values but the bins are I think what I am looking for here.
Ideally the bins would be an object that each have a label. So I could do bins.indices[0] and it would return the bounding box I gave it. Then also I could bin each observation, like if a new color was color = [15.342,-6.534], color.bin would return 15 or the 15th bin.
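For what it's worth, that flat-bin lookup can be sketched in plain Python; the grid origin and column count below are assumptions for illustration, not values derived from the data:

```python
# Assume a grid of 10x10 squares whose lower-left corner is (A_MIN, B_MIN),
# numbered row-major like a flattened 2D array.
A_MIN, B_MIN = -110, -110
N_COLS = 22  # number of 10-wide columns covering b in [-110, 110)

def bin_index(a, b):
    """Return the flat bin number for an (a, b) color coordinate."""
    row = int((a - A_MIN) // 10)
    col = int((b - B_MIN) // 10)
    return row * N_COLS + col

print(bin_index(-110, -110))     # 0: the first square
print(bin_index(-110, -100))     # 1: one square over in b
print(bin_index(15.342, -6.534)) # 274
```

The reverse lookup (bin number back to bounding box) is just `divmod(index, N_COLS)` scaled by 10 and shifted by the grid origin.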
I realize this is a lot to ask for, but I think it must be a somewhat common need for people working with color spaces. So is there any python module or tool that can accomplish what I'm asking? How would you approach this? thanks! | 2016/05/21 | [
"https://Stackoverflow.com/questions/37369079",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1123905/"
] | The answer is to not use SSTATE\_DUPWHITELIST for this at all. Instead, in the libftdi recipe's do\_install (or do\_install\_append, if the recipe itself doesn't define its own do\_install) you should delete the duplicate files from within ${D} and then they won't get staged and the error won't occur. | I managed to solve this problem by adding the SSTATE\_DUPWHITELIST to the bitbake recipe of the package as follows:
SSTATE\_DUPWHITELIST = "${TMPDIR}/PATH/TO/THE/FILES"
I added the absolute paths of all of the 6 or 7 files that had the conflict to the list. I did that because they were basically coming from the same source and it was all safe to do. Correct me if there is a better way, though.
Hope this helps someone! | 135 |
70,008,841 | I was able to follow this example1 and let my ec2 instance read from S3.
In order to write to the same bucket I thought changing line 572 from `grant_read()` to `grant_read_write()`
should work.
```py
...
# Userdata executes script from S3
instance.user_data.add_execute_file_command(
file_path=local_path
)
# asset.grant_read(instance.role)
asset.grant_read_write(instance.role)
...
```
Yet the documented3 function cannot be accessed according to the error message.
```
>> 57: Pyright: Cannot access member "grant_read_write" for type "Asset"
```
What am I missing?
---
1 <https://github.com/aws-samples/aws-cdk-examples/tree/master/python/ec2/instance>
2 <https://github.com/aws-samples/aws-cdk-examples/blob/master/python/ec2/instance/app.py#L57>
3 <https://docs.aws.amazon.com/cdk/latest/guide/permissions.html#permissions_grants> | 2021/11/17 | [
"https://Stackoverflow.com/questions/70008841",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1172907/"
] | This is the [documentation](https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_s3_assets/Asset.html) for Asset:
>
> An asset represents a local file or directory, which is automatically
> uploaded to S3 and then can be referenced within a CDK application.
>
>
>
The method grant\_read\_write isn't provided, as it would be pointless. The documentation you've linked doesn't apply here. | An asset is just a zip file that will be uploaded to the bootstrapped CDK S3 bucket, then referenced by CloudFormation when deploying.
If you have a script you want to put into an S3 bucket, you don't want to use any form of asset, because that is a zip file. You would be better off using a boto3 command to upload it once the bucket already exists, or making it part of a CodePipeline that creates the bucket with CDK and then uploads the script in the next step.
The grant\_read\_write method is for `aws_cdk.aws_s3.Bucket` constructs in this case. | 138 |
2,433,703 | I am running Cygwin Python version 2.5.2.
I have a three-line source file, called import.py:
```
#!/usr/bin/python
import xml.etree.ElementTree as ET
print "Success!"
```
When I execute "python import.py", it works:
```
C:\Temp>python import.py
Success!
```
When I run the python interpreter and type the commands, it works:
```
C:\Temp>python
Python 2.5.2 (r252:60911, Dec 2 2008, 09:26:14)
[GCC 3.4.4 (cygming special, gdc 0.12, using dmd 0.125)] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
>>> #!/usr/bin/python
... import xml.etree.ElementTree as ET
>>> print "Success!"
Success!
>>>
```
But when I execute "import.py', it does not work:
```
C:\Temp>which python
/usr/bin/python
C:\Temp>import.py
Traceback (most recent call last):
File "C:\Temp\import.py", line 2, in ?
import xml.etree.ElementTree as ET
ImportError: No module named etree.ElementTree
```
When I remove the first line (#!/usr/bin/python), I get the same error. I need that line in there, though, for when this script runs on Linux. And it works fine on Linux.
Any ideas?
Thanks. | 2010/03/12 | [
"https://Stackoverflow.com/questions/2433703",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5397/"
] | Probably the .py extension is associated with some other Python interpreter than the one in /usr/bin/python | Try:
```
./import.py
```
Most people don't have "." in their path.
Just typing python will call the Cygwin python.
import.py will likely call whichever python is associated with .py files under Windows.
You are using two different python executables. | 139 |
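A small sanity check along the lines of both answers: put this at the top of the script (or run it once per invocation style) to see exactly which executable and module search path each way of launching it uses.

```python
import sys

# Which binary is running this script: Cygwin's /usr/bin/python, or a
# Windows install that the .py file association points at.
print("executable:", sys.executable)
print("version:", sys.version.split()[0])

# The module search path explains why one interpreter finds
# xml.etree.ElementTree while the other does not.
for entry in sys.path:
    print("path:", entry)
```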
1,550,226 | I have installed a python package with `python setup.py install`.
How do I uninstall it? | 2009/10/11 | [
"https://Stackoverflow.com/questions/1550226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/63051/"
] | At `{virtualenv}/lib/python2.7/site-packages/` (if not using virtualenv then `{system_dir}/lib/python2.7/dist-packages/`)
* Remove the egg file (e.g. `distribute-0.6.34-py2.7.egg`)
* If there is an entry in the file `easy-install.pth`, remove the corresponding line (it should be a path to the source directory or to an egg file). | **Install from local**
`python setup.py install`
**Uninstall from local**
`pip uninstall mypackage` | 144 |
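If pip can't uninstall it (common for bare `setup.py install`s), one widely used trick is to record what `setup.py` copied and delete exactly those files. A sketch (the `--record` step and file names are illustrative):

```python
# First, reinstall while recording the destination of every file:
#   python setup.py install --record installed_files.txt
# Then remove each recorded path:
import os

def uninstall_from_record(record_path):
    """Delete every file listed in a `setup.py install --record` manifest."""
    removed = []
    with open(record_path) as record:
        for line in record:
            path = line.strip()
            if path and os.path.isfile(path):
                os.remove(path)
                removed.append(path)
    return removed
```

This only removes files the record lists; empty directories and `easy-install.pth` entries may still need manual cleanup.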