qid | question | date | metadata | response_j | response_k | __index_level_0__ |
---|---|---|---|---|---|---|
41,850,558 | I have a model called "document-detail-sample", and when you call it with a GET, something like **GET** `https://url/document-detail-sample/`, then you get every "document-detail-sample".
Inside the model is the id. So, if you want every Id, you could just "iterate" on the list and ask for the id. Easy.
But... the front-end Developers don't want to do it :D they say it's too much work...
So, I gotta return the id list. :D
I was thinking something like **GET** `https://url/document-detail-sample/id-list`
But I don't know how to return just a list. I read [this post](https://stackoverflow.com/questions/27647871/django-python-how-to-get-a-list-of-ids-from-a-list-of-objects) and I know how to get the id\_list in the backend. But I don't know what I should implement to return just a list at that URL...
The view that I have is pretty simple:
```
class DocumentDetailSampleViewSet(viewsets.ModelViewSet):
    queryset = DocumentDetailSample.objects.all()
    serializer_class = DocumentDetailSampleSerializer
```
and the url is so:
```
router.register(r'document-detail-sample', DocumentDetailSampleViewSet)
```
so:
**1**- is it a good idea to do it with a URL like `.../document-detail-sample/id-list`?
**2**- if yes, how can I do it?
**3**- if not, what should I do then? | 2017/01/25 | [
"https://Stackoverflow.com/questions/41850558",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4050960/"
] | You could use the `@list_route` decorator
```
from rest_framework.decorators import detail_route, list_route
from rest_framework.response import Response

class DocumentDetailSampleViewSet(viewsets.ModelViewSet):
    queryset = DocumentDetailSample.objects.all()
    serializer_class = DocumentDetailSampleSerializer

    @list_route()
    def id_list(self, request):
        q = self.get_queryset().values('id')
        return Response(list(q))
```
This decorator allows you to provide an additional endpoint with the same name as the method: `/document-detail-sample/id_list/`
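In newer DRF versions (3.8+), `@list_route` is deprecated in favour of `@action(detail=False)`. A sketch of the equivalent endpoint (not runnable outside a configured Django project; the `url_path` value is an assumption chosen to match the hyphenated URL from the question):

```
from rest_framework.decorators import action
from rest_framework.response import Response

class DocumentDetailSampleViewSet(viewsets.ModelViewSet):
    queryset = DocumentDetailSample.objects.all()
    serializer_class = DocumentDetailSampleSerializer

    # detail=False registers this on the list route:
    # GET /document-detail-sample/id-list/
    @action(detail=False, url_path='id-list')
    def id_list(self, request):
        ids = self.get_queryset().values_list('id', flat=True)
        return Response(list(ids))
```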
[reference to docs about extra actions in a viewset](http://www.django-rest-framework.org/api-guide/viewsets/#marking-extra-actions-for-routing) | Assuming you don't need pagination, just override the `list` method like so
```
class DocumentDetailSampleViewSet(viewsets.ModelViewSet):
    queryset = DocumentDetailSample.objects.all()
    serializer_class = DocumentDetailSampleSerializer

    def list(self, request):
        return Response(self.get_queryset().values_list("id", flat=True))
``` | 0 |
14,585,722 | Suppose you have a python function, as so:
```
def foo(spam, eggs, ham):
pass
```
You could call it using the positional arguments only (`foo(1, 2, 3)`), but you could also be explicit and say `foo(spam=1, eggs=2, ham=3)`, or mix the two (`foo(1, 2, ham=3)`).
Is it possible to get the same kind of functionality with argparse? I have a couple of positional arguments with keywords, and I don't want to define all of them when using just one. | 2013/01/29 | [
"https://Stackoverflow.com/questions/14585722",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/731881/"
] | You can do something like this:
```
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('foo',nargs='?',default=argparse.SUPPRESS)
parser.add_argument('--foo',dest='foo',default=None)
parser.add_argument('bar',nargs='?',default=argparse.SUPPRESS)
parser.add_argument('--bar',dest='bar',default=None)
parser.add_argument('baz',nargs='?',default=argparse.SUPPRESS)
parser.add_argument('--baz',dest='baz',default=None)
print(parser.parse_args())
```
which works mostly as you describe:
```
temp $ python test.py 1 2 --baz=3
Namespace(bar='2', baz='3', foo='1')
temp $ python test.py --baz=3
Namespace(bar=None, baz='3', foo=None)
temp $ python test.py --foo=2 --baz=3
Namespace(bar=None, baz='3', foo='2')
temp $ python test.py 1 2 3
Namespace(bar='2', baz='3', foo='1')
```
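The same behaviour can be exercised programmatically by passing argument lists straight to `parse_args` (a sketch assuming Python 3; the pattern is the one above, just built in a loop):

```python
import argparse

parser = argparse.ArgumentParser()
for name in ('foo', 'bar', 'baz'):
    # positional form: optional, and when absent it leaves no attribute behind
    parser.add_argument(name, nargs='?', default=argparse.SUPPRESS)
    # keyword form writes to the same destination
    parser.add_argument('--' + name, dest=name, default=None)

# mixed positional and keyword, like foo(1, 2, baz=3)
print(parser.parse_args(['1', '2', '--baz=3']))  # Namespace(bar='2', baz='3', foo='1')
print(parser.parse_args(['--baz=3']))            # Namespace(bar=None, baz='3', foo=None)
```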
python would give you an error for the next one in the function call analogy, but argparse will allow it:
```
temp $ python test.py 1 2 3 --foo=27.5
Namespace(bar='2', baz='3', foo='27.5')
```
You could probably work around that by using [mutually exclusive groupings](http://docs.python.org/2.7/library/argparse.html#mutual-exclusion) | I believe this is what you are looking for [Argparse defaults](http://docs.python.org/dev/library/argparse.html#default) | 1 |
72,950,868 | I would like to add a closing parenthesis to strings that have an open parenthesis but are missing a closing parenthesis.
For instance, I would like to modify "The dog walked (ABC in the park" to be "The dog walked (ABC) in the park".
I found a similar question and solution but it is in Python ([How to add a missing closing parenthesis to a string in Python?](https://stackoverflow.com/questions/67400960/how-to-add-a-missing-closing-parenthesis-to-a-string-in-python)). I have tried to modify the code to be used in R but to no avail. Can someone help me with this please?
I have tried modifying the original Python solution: R doesn't recognise the `r` prefix, and `\` has to be replaced by `\\`. But this attempt doesn't work properly and does not keep the text preceding the bracket I would like to close:
```
text = "The dog walked (ABC in the park"
str_replace_all(text, '\\([A-Z]+(?!\\))\\b', '\\)')
text
```
The python solution that works is as follows:
```
text = "The dog walked (ABC in the park"
text = re.sub(r'(\([A-Z]+(?!\))\b)', r"\1)", text)
print(text)
``` | 2022/07/12 | [
"https://Stackoverflow.com/questions/72950868",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19533566/"
] | Try this
```
stringr::str_replace_all(text, '\\([A-Z]+(?!\\))\\b', '\\0\\)')
```
* output
```
"The dog walked (ABC) in the park"
``` | Not a one liner, but it does the trick and is (hopefully!) intuitive.
```
library(stringr)
add_brackets = function(text) {
  brackets = str_extract(text, "\\([:alpha:]+") # finds the open bracket and any following letters
  brackets_new = paste0(brackets, ")") # adds in the closing brackets
  str_replace(text, paste0("\\", brackets), brackets_new) # replaces the unclosed string with the closed one
}
```
```
> add_brackets(text)
[1] "The dog walked (ABC) in the park"
``` | 4 |
67,609,973 | I chose to use Python 3.8.1 Azure ML in Azure Machine Learning studio, but when I run the command
`!python train.py`, it uses Anaconda Python 3.6.9. When I downloaded Python 3.8 and ran the command `!python38 train.py` in the same dir as before, the response was `python3.8: can't open file`.
Any idea?
Also, Python 3 in Azure is always busy, without anything running from my side.
Thank you. | 2021/05/19 | [
"https://Stackoverflow.com/questions/67609973",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14915505/"
] | You should try adding a new Python 3.8 kernel. Here are instructions on how to add a new kernel: <https://learn.microsoft.com/en-us/azure/machine-learning/how-to-access-terminal#add-new-kernels> | Yeah, I understand your pain point, and I agree that calling bash commands in a notebook cell should execute in the same conda environment as the one associated with the selected kernel of the notebook. I think this is a bug; I'll flag it to the notebook feature team, but I encourage you to open a priority support ticket if you want to ensure that your problem is addressed! | 7 |
58,483,706 | I am new to Python and trying my hand at certain problems. I have a situation where I have 2 dataframes which I want to combine to achieve my desired dataframe.
I have tried `.merge` and `.join`, neither of which was able to produce my desired outcome.
Let us suppose I have the below scenario:
```
lt = list(['a','b','c','d','a','b','a','b'])
df = pd.DataFrame(columns = lt)
data = [[10,11,12,12], [15,14,12,10]]
df1 = pd.DataFrame(data, columns = ['a','b','c','d'])
```
I want df and df1 to be combined to get the desired dataframe df2:
```
a b c d a b a b
0 10 11 12 12 10 11 10 11
1 15 14 12 10 15 14 15 14
``` | 2019/10/21 | [
"https://Stackoverflow.com/questions/58483706",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11378087/"
] | If you don't mind the order of the columns changing, this is just a right join. The only caveat is that those are performed on rows rather than columns, so you need to transpose first:
```py
In [44]: df.T.join(df1.T, how='right').T
Out[44]:
a a a b b b c d
0 10 10 10 11 11 11 12 12
1 15 15 15 14 14 14 12 10
``` | Use [`concat()`](https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html)
```py
pd.concat([df, df1], axis=0, join='inner', sort=False)
a b c d a b a b
0 10 11 12 12 10 11 10 11
1 15 14 12 10 15 14 15 14
``` | 8 |
14,187,973 | Similar question (related to Python 2: [Python: check if method is static](https://stackoverflow.com/questions/8727059/python-check-if-method-is-static))
Let's consider the following class definition:
```
class A:
    def f(self):
        return 'this is f'

    @staticmethod
    def g():
        return 'this is g'
```
In Python 3 there is no `instancemethod` anymore; everything is a function, so the answer related to Python 2 will not work anymore.
As I said, everything is a function, so we can call `A.f(0)`, but of course we cannot call `A.f()` (argument mismatch). But if we make an instance `a=A()` and call `a.f()`, Python passes `self` to the function `A.f` as the first argument. Calling `a.g()` does not send it, so there has to be a way to test whether a method was declared static or not.
So can we check in Python3 if a method was declared as `static` or not? | 2013/01/06 | [
"https://Stackoverflow.com/questions/14187973",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/889902/"
] | ```
class A:
    def f(self):
        return 'this is f'

    @staticmethod
    def g():
        return 'this is g'

print(type(A.__dict__['g']))
print(type(A.g))
```
Output:
```
<class 'staticmethod'>
<class 'function'>
``` | I needed this solution and wrote the following based on the answer from @root
```
def is_method_static(cls, method_name):
    # http://stackoverflow.com/questions/14187973/python3-check-if-method-is-static
    for c in cls.mro():
        if method_name in c.__dict__:
            return isinstance(c.__dict__[method_name], staticmethod)
    raise RuntimeError("Unable to find %s in %s" % (method_name, cls.__name__))
``` | 10 |
46,132,431 | I have written code to generate numbers from 0500000000 to 0500000100:
```
def generator(nums):
    count = 0
    while count < 100:
        gg = print('05', count, sep='')
        count += 1

g = generator(10)
```
as I use linux, I thought I may be able to use this command `python pythonfilename.py >> file.txt`
Yet, I get an error.
So, before `g = generator(10)` I added:
```
with open('file.txt', 'w') as f:
    f.write(gg)
    f.close()
```
but I got an error:
>
> TypeError: write() argument must be str, not None
>
>
>
Any solution? | 2017/09/09 | [
"https://Stackoverflow.com/questions/46132431",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5548783/"
] | Here I've assumed we're laying out two general images, rather than plots. If your images are actually plots you've created, then you can lay them out as a single image for display using `gridExtra::grid.arrange` for grid graphics or `par(mfrow=c(1,2))` for base graphics and thereby avoid the complications of laying out two separate images.
I'm not sure if there's a "natural" way to left justify the left-hand image and right-justify the right-hand image. As a hack, you could add a blank "spacer" image to separate the two "real" images and set the widths of each image to match paper-width minus 2\*margin-width.
Here's an example where the paper is assumed to be 8.5" wide and the right and left margins are each 1":
```
---
output: pdf_document
geometry: margin=1in
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = FALSE)
library(ggplot2)
library(knitr)
# Create a blank image to use for spacing
spacer = ggplot() + theme_void() + ggsave("spacer.png")
```
```{r, out.width=c('2.75in','1in','2.75in')}
include_graphics(c("Rplot59.png","spacer.png", "Rplot60.png"))
```
```
And here's what the document looks like:
[![enter image description here](https://i.stack.imgur.com/jiqHx.png)](https://i.stack.imgur.com/jiqHx.png) | Put them in the same code chunk and do not use align. Let them use html.
This has worked for me.
```
````{r echo=FALSE, fig.height=3.0, fig.width=3.0}
#type your code here
ggplot(anscombe, aes(x=x1 , y=y1)) + geom_point()
+geom_smooth(method="lm") +
ggtitle("Results for x1 and y1 ")
ggplot(anscombe, aes(x=x2 , y=y2)) + geom_point() +geom_smooth(method="lm") +
ggtitle("Results for x2 and y2 ")
ggplot(anscombe, aes(x=x3 , y=y3)) + geom_point() +geom_smooth(method="lm") +
ggtitle("Results for x3 and y3 ")
ggplot(anscombe, aes(x=x4 , y=y4)) + geom_point() +geom_smooth(method="lm") +
ggtitle("Results for x4 and y4 ")
````
``` | 13 |
54,007,542 | input is like:
```
text="""Hi Team from the following Server :
<table border="0" cellpadding="0" cellspacing="0" style="width:203pt">
<tbody>
<tr>
<td style="height:15.0pt; width:203pt">ratsuite.sby.ibm.com</td>
</tr>
</tbody>
</table>
<p> </p>
<p>Please archive the following Project Areas :</p>
<table border="0" cellpadding="0" cellspacing="0" style="width:1436pt">
<tbody>
<tr>
<td style="height:15.0pt; width:505pt">UNIT TEST - IBM OPAL 3.3 RC3</td>
<td style="width:328pt">https://ratsuite.sby.ibm.com:9460/ccm</td>
<td style="width:603pt">https://ratsuite.sby.ibm.com:9460/ccm/process/project-areas/_ckR-QJiUEeOXmZKjKhPE4Q</td>
</tr>
</tbody>
</table>"""
```
In the output I want these 2 lines only; I want to remove the table tags together with their data in Python:
Hi Team from the following Server :
Please archive the following Project Areas : | 2019/01/02 | [
"https://Stackoverflow.com/questions/54007542",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9901523/"
] | Use `BeautifulSoup` to parse HTML
**Ex:**
```
from bs4 import BeautifulSoup
text="""<p>Hi Team from the following Server :</p>
<table border="0" cellpadding="0" cellspacing="0" style="width:203pt">
<tbody>
<tr>
<td style="height:15.0pt; width:203pt">ratsuite.sby.ibm.com</td>
</tr>
</tbody>
</table>
<p> </p>
<p>Please archive the following Project Areas :</p>
<table border="0" cellpadding="0" cellspacing="0" style="width:1436pt">
<tbody>
<tr>
<td style="height:15.0pt; width:505pt">UNIT TEST - IBM OPAL 3.3 RC3</td>
<td style="width:328pt">https://ratsuite.sby.ibm.com:9460/ccm</td>
<td style="width:603pt">https://ratsuite.sby.ibm.com:9460/ccm/process/project-areas/_ckR-QJiUEeOXmZKjKhPE4Q</td>
</tr>
</tbody>
</table>"""
soup = BeautifulSoup(text, "html.parser")
for p in soup.find_all("p"):
    print(p.text)
```
**Output:**
```
Hi Team from the following Server :
Please archive the following Project Areas :
``` | You can use `HTMLParser` as demonstrated below:
```
from html.parser import HTMLParser  # on Python 2: from HTMLParser import HTMLParser
s = \
"""
<html>
<p>Hi Team from the following Server :</p>
<table border="0" cellpadding="0" cellspacing="0" style="width:203pt">
<tbody>
<tr>
<td style="height:15.0pt; width:203pt">ratsuite.sby.ibm.com</td>
</tr>
</tbody>
</table>
<p> </p>
<p>Please archive the following Project Areas :</p>
<table border="0" cellpadding="0" cellspacing="0" style="width:1436pt">
<tbody>
<tr>
<td style="height:15.0pt; width:505pt">UNIT TEST - IBM OPAL 3.3 RC3</td>
<td style="width:328pt">https://ratsuite.sby.ibm.com:9460/ccm</td>
<td style="width:603pt">https://ratsuite.sby.ibm.com:9460/ccm/process/project-areas/_ckR-QJiUEeOXmZKjKhPE4Q</td>
</tr>
</tbody>
</table>
</html>
"""
# create a subclass and override the handler methods
class MyHTMLParser(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self._last_tag = ''

    def handle_starttag(self, tag, attrs):
        # print("Encountered a start tag:", tag)
        self._last_tag = tag

    def handle_endtag(self, tag):
        # print("Encountered an end tag :", tag)
        self._last_tag = ''

    def handle_data(self, data):
        # print("Encountered some data :", data)
        if self._last_tag == 'p':
            print("<%s> tag data: %s" % (self._last_tag, data))

# instantiate the parser and feed it some HTML
parser = MyHTMLParser()
parser.feed(s)
```
Output:
```
<p> tag data: Hi Team from the following Server :
<p> tag data: Please archive the following Project Areas :
``` | 14 |
38,776,104 | I would like to redirect the standard error and standard output of a Python script to the same output file. From the terminal I could use
```
$ python myfile.py &> out.txt
```
to do the same task that I want, but I need to do it from the Python script itself.
I looked into the questions [Redirect subprocess stderr to stdout](https://stackoverflow.com/questions/11495783/redirect-subprocess-stderr-to-stdout), [How to redirect stderr in Python?](https://stackoverflow.com/questions/1956142/how-to-redirect-stderr-in-python), and Example 10.10 from [here](http://www.diveintopython.net/scripts_and_streams/stdin_stdout_stderr.html), and then I tried the following:
```
import sys
fsock = open('out.txt', 'w')
sys.stdout = sys.stderr = fsock
print "a"
```
which rightly prints the letter "a" in the file out.txt; however, when I try the following:
```
import sys
fsock = open('out.txt', 'w')
sys.stdout = sys.stderr = fsock
print "a # missing end quote, will give error
```
I get the error message "SyntaxError ..." on the terminal, but not in the file out.txt. What do I need to do to send the SyntaxError to the file out.txt? I do not want to write an Exception, because in that case I have to write too many Exceptions in the script. I am using Python 2.7.
Update: As pointed out in the answers and comments below, a SyntaxError will always be output to the screen, so I replaced the line
```
print "a # missing end quote, will give error
```
by
```
print 1/0 # Zero division error
```
The ZeroDivisionError is output to file, as I wanted to have it in my question. | 2016/08/04 | [
"https://Stackoverflow.com/questions/38776104",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1461999/"
] | This works
```
sys.stdout = open('out.log', 'w')
sys.stderr = sys.stdout
``` | A SyntaxError in a Python file like the above is raised before your program even begins to run:
Python files are compiled just like in any other compiled language - if the parser or compiler can't find sense in your Python file, no executable bytecode is generated, therefore the program does not run.
The correct way to have an exception generated on purpose in your code - from simple test cases like yours, up to implementing complex flow-control patterns - is to use the Python statement `raise`.
Just leave your print there, and add a line like this at the end:
```
raise Exception
```
Then you can see that your trick will work.
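To see the runtime-error case end to end, here is a small self-contained sketch (assuming Python 3; the temp-file path and the ZeroDivisionError are just for illustration, matching the updated question):

```python
import sys
import tempfile
import os
import traceback

path = os.path.join(tempfile.mkdtemp(), "out.txt")
saved_out, saved_err = sys.stdout, sys.stderr
fsock = open(path, "w")
sys.stdout = sys.stderr = fsock
try:
    print(1 / 0)  # raises ZeroDivisionError at runtime
except ZeroDivisionError:
    traceback.print_exc()  # writes to sys.stderr, i.e. into the file
finally:
    sys.stdout, sys.stderr = saved_out, saved_err
    fsock.close()

print("ZeroDivisionError" in open(path).read())  # True
```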
Your program could fail in runtime in many other ways without an explict raise, like, if you force a division by 0, or simply try to use an unassigned (and therefore "undeclared") variable - but a deliberate SyntaxError will have the effect that the program never runs to start with - not even the first few lines. | 17 |
57,843,695 | I haven't changed my system configuration, But I'm spotting this error for the first time today.
I've reported it here: <https://github.com/jupyter/notebook/issues/4871>
```
> jupyter notebook
[I 10:44:20.102 NotebookApp] JupyterLab extension loaded from /usr/local/anaconda3/lib/python3.7/site-packages/jupyterlab
[I 10:44:20.102 NotebookApp] JupyterLab application directory is /usr/local/anaconda3/share/jupyter/lab
[I 10:44:20.104 NotebookApp] Serving notebooks from local directory: /Users/pi
[I 10:44:20.104 NotebookApp] The Jupyter Notebook is running at:
[I 10:44:20.104 NotebookApp] http://localhost:8888/?token=586797fb9049c0faea24f2583c4de32c08d45c89051fb07d
[I 10:44:20.104 NotebookApp] or http://127.0.0.1:8888/?token=586797fb9049c0faea24f2583c4de32c08d45c89051fb07d
[I 10:44:20.104 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 10:44:20.110 NotebookApp]
To access the notebook, open this file in a browser:
file:///Users/pi/Library/Jupyter/runtime/nbserver-65385-open.html
Or copy and paste one of these URLs:
http://localhost:8888/?token=586797fb9049c0faea24f2583c4de32c08d45c89051fb07d
or http://127.0.0.1:8888/?token=586797fb9049c0faea24f2583c4de32c08d45c89051fb07d
[E 10:44:21.457 NotebookApp] Could not open static file ''
[W 10:44:21.512 NotebookApp] 404 GET /static/components/react/react-dom.production.min.js (::1) 9.02ms referer=http://localhost:8888/tree?token=BLA
[W 10:44:21.548 NotebookApp] 404 GET /static/components/react/react-dom.production.min.js (::1) 0.99ms referer=http://localhost:8888/tree?token=BLA
Set
```
Looks like this issue was fixed in `Jupyter 6.0.1`
So the question becomes: can I force-install `jupyter 6.0.1`?
As the initial question has now provoked a second question, I now ask this new question here: [How to force `conda` to install the latest version of `jupyter`?](https://stackoverflow.com/questions/57843733/how-to-force-conda-to-install-the-latest-version-of-jupyter)
Alternatively I can manually provide the missing file, but I'm not sure *where*. I've asked here: [Where does Jupyter install site-packages on macOS?](https://stackoverflow.com/questions/57843888/where-does-jupyter-install-site-packages-on-macos)
Research:
=========
<https://github.com/jupyter/notebook/pull/4772> *"add missing react-dom js to package data #4772"* on 6 Aug 2019
>
> minrk added this to the 6.0.1 milestone on 18 Jul
>
>
>
Ok, so can I get Jupyter Notebook 6.0.1?
`brew cask install anaconda` downloads `~/Library/Caches/Homebrew/downloads/{LONG HEX}--Anaconda3-2019.07-MacOSX-x86_64` which is July, and `conda --version` reports `conda 4.7.10`. But this is for `Anaconda` which is the Package *Manager*.
```
> conda list | grep jupy
jupyter 1.0.0 py37_7
jupyter_client 5.3.1 py_0
jupyter_console 6.0.0 py37_0
jupyter_core 4.5.0 py_0
jupyterlab 1.0.2 py37hf63ae98_0
jupyterlab_server 1.0.0 py_0
```
So that's a bit confusing. No `jupyter notebook` here.
```
> which jupyter
/usr/local/anaconda3/bin/jupyter
> jupyter --version
jupyter core : 4.5.0
jupyter-notebook : 6.0.0
qtconsole : 4.5.1
ipython : 7.6.1
ipykernel : 5.1.1
jupyter client : 5.3.1
jupyter lab : 1.0.2
nbconvert : 5.5.0
ipywidgets : 7.5.0
nbformat : 4.4.0
traitlets : 4.3.2
```
Ok, so it appears `jupyter-notebook` is in `jupyter` which is maintained by Anaconda.
Can we update this?
<https://jupyter.readthedocs.io/en/latest/projects/upgrade-notebook.html>
```
> conda update jupyter
:
```
Alas `jupyter --version` is still `6.0.0` | 2019/09/08 | [
"https://Stackoverflow.com/questions/57843695",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/435129/"
] | I fixed this by updating both jupyter on pip and pip3 (just to be safe) and this fixed the problem
using both
>
> `pip install --upgrade jupyter`
>
>
>
and
>
> `pip3 install --upgrade jupyter --no-cache-dir`
>
>
>
I believe you can do this in the terminal as well as in conda's terminal (since conda envs also have pip) | As per [Where does Jupyter install site-packages on macOS?](https://stackoverflow.com/questions/57843888/where-does-jupyter-install-site-packages-on-macos), I locate where on my system `jupyter` is searching for this missing file:
```
> find / -path '*/static/components' 2>/dev/null
/usr/local/anaconda3/pkgs/notebook-6.0.0-py37_0/lib/python3.7/site-packages/notebook/static/components
/usr/local/anaconda3/lib/python3.7/site-packages/notebook/static/components
```
And as per <https://github.com/jupyter/notebook/pull/4772#issuecomment-515794823>, if I download that file and deposit it in the second location, i.e. creating:
```
/usr/local/anaconda3/lib/python3.7/site-packages/notebook/static/components/react/react-dom.production.min.js
```
... now `jupyter notebook` launches without errors.
(*NOTE: Being cautious I have also copied it into the first location. But that doesn't seem to have any effect.*) | 18 |
44,175,800 | Simple question: given a string
```
string = "Word1 Word2 Word3 ... WordN"
```
is there a pythonic way to do this?
```
firstWord = string.split(" ")[0]
otherWords = string.split(" ")[1:]
```
Like an unpacking or something?
Thank you | 2017/05/25 | [
"https://Stackoverflow.com/questions/44175800",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2131783/"
] | Since Python 3 and [PEP 3132](https://www.python.org/dev/peps/pep-3132/), you can use extended unpacking.
This way, you can unpack an arbitrary string containing any number of words. The first will be stored in the variable `first`, and the others will belong to the (possibly empty) list `others`.
```
first, *others = string.split()
```
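For example (a quick runnable check):

```python
string = "Word1 Word2 Word3 Word4"

first, *others = string.split()
print(first)   # Word1
print(others)  # ['Word2', 'Word3', 'Word4']

# the starred target is an empty list when there is nothing left
first, *others = "Word1".split()
print(others)  # []
```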
Also, note that the default delimiter for `.split()` is whitespace, so you do not need to specify it explicitly. | From [Extended Iterable Unpacking](https://www.python.org/dev/peps/pep-3132/).
Many algorithms require splitting a sequence into a "first, rest" pair. If you're using Python 2.x, you need to do it like this:
```
seq = string.split()
first, rest = seq[0], seq[1:]
```
In `Python3.x` it is replaced by the cleaner and probably more efficient:
```
first, *rest = seq
```
For more complex unpacking patterns, the new syntax looks even cleaner, and the clumsy index handling is not necessary anymore. | 19 |
I am trying to place a condition after the for loop. It should print the word "Available" if the number of retrieved rows is not zero; otherwise it should return a message. My problem is that if I input a value that isn't stored in my database, it does not go to the else statement. I'm new to this. What would be my mistake in this function?
```
def search(title):
    query = "SELECT * FROM books WHERE title = %s"
    entry = (title,)
    try:
        conn = mysql.connector.connect(user='root', password='', database='python_mysql')  # connect to the database server
        cursor = conn.cursor()
        cursor.execute(query, entry)
        rows = cursor.fetchall()
        for row in rows:
            if row != 0:
                print('Available')
            else:
                print('No available copies of the said book in the library')
    except Error as e:
        print(e)
    finally:
        cursor.close()
        conn.close()

def main():
    title = input("Enter book title: ")
    search(title)

if __name__ == '__main__':
    main()
``` | 2015/02/25 | [
"https://Stackoverflow.com/questions/28717067",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4529171/"
] | Quite apart from the 0/NULL confusion, your logic is wrong. If there are no matching rows, you won't get a 0 as the value of a row; in fact you won't get any rows at all, and you will never even get into the for loop.
A much better way to do this would be simply run a COUNT query, get the single result with `fetchone()`, and check that directly.
```
query = "SELECT COUNT(*) FROM books WHERE title = %s"
entry = (title,)
try:
    conn = mysql.connector.connect(user='root', password='', database='python_mysql')  # connect to the database server
    cursor = conn.cursor()
    cursor.execute(query, entry)
    result = cursor.fetchone()  # fetchone() returns a tuple like (count,)
    if result[0] != 0:
        print('Available')
    else:
        print('No available copies of the said book in the library')
``` | In python you should check for `None` not `NULL`. In your code you can just check for object, if it is not None then control should go inside `if` otherwise `else` will be executed
```
for row in rows:
    if row:
        print('Available')
    else:
        print('No available copies of the said book in the library')
```
`UPDATE after the author edited the question:`
Now in the for loop you should check the column value, not the whole `row`. If your column name is, say, `quantity`, then the `if` statement should be like this:
```
if row["quantity"] != 0:
``` | 20 |
65,995,857 | I'm quite new to coding and I'm working on a math problem in python.
To solve it, I would like to extract the first 7 digits from a string of one hundred 50-digit numbers (take the first 7 digits, skip 43 digits, and then take the first 7 again). The numbers aren't separated in any way (just one long string).
Then I want to sum up those seven-digit numbers which I have extracted.
How can I do this?
(I have written this code, but it only takes the first digit, I don't know any stepping/slicing methods to make it seven)
```py
number = """37107287533902102798797998220837590246510135740250463769376774900071264812489697007805041701826053874324986199524741059474233309513058123726617309629919422133635741615725224305633018110724061549082502306758820753934617117198031042104751377806324667689261670696623633820136378418383684178734361726757281128798128499794080654819315926216912758898327384427422891743252032192358942287679648767027218931847451445736001306439091167216856844588711603153276703864861058430254399396198289175936656867579349516217645714185656062950215722319658675507932419333164906352462741904929101432445813822663347944758178925758677183372176619637515905792397282455988384075820356532535939900840263356894883018945862822782880181199384826282014278194139940567587151170094390353986643728271126538299872407844730531901042935868651550600629586486153207527337195919142051725582971693888707715466499115593487603532921714970056938543700705768266846246214956500764717872944383776045328265410875682844319119063469403785521777929514536123272525000296071075082563815656710885258350721458765761724109764473391106072182652368772236360451742370690585186066044820762120981328786073396941281142660418086830619328460811191061556940512689692519343254517283886419180470492932150586425630494836246722164843507620172791803994469300473295634069115732444386908125794514089057706229429197107928209550376875256787730918625407449698445083303936821261833638482533015468619612434876768129753437594651580386287592878490201521685554828717201219257766954781828337579931036147403568564490955270978647975811672632010043689784255353992093183744149780686098448403098129077791799088218795327364475675590848030870869875513927118545170785441618524243206931503325995940689575653678210707492696653767632623544721069793950679652694742597709739166693763042633987085410526847082990852113994273657341161827603150012716537860736150108085700914993951255702819874600437535829035317434717326932123578154982629742552737307949537597651053059469660676831565743771
67401875275889028025717332296191766687138199318110487701902712526768027607800301367868099252546340106163286652636270218540497705585629946580636237993140746255962240744869082311749777923654662572469233228109171419143028819710328859780666976089293863828502533340334413065578016127815921815005561868836468420090470230530811728164304876237919698424872550366387845831148769693215490281042402013833512446218144177347063783299490636259666498587618221225225512486764533677201869716985443124195724099139590089523100588229554825530026352078153229679624948164195386821877476085327132285723110424803456124867697064507995236377742425354112916842768655389262050249103265729672370191327572567528565324825826546309220705859652229798860272258331913126375147341994889534765745501184957014548792889848568277260777137214037988797153829820378303147352772158034814451349137322665138134829543829199918180278916522431027392251122869539409579530664052326325380441000596549391598795936352974615218550237130764225512118369380358038858490341698116222072977186158236678424689157993532961922624679571944012690438771072750481023908955235974572318970677254791506150550495392297953090112996751986188088225875314529584099251203829009407770775672113067397083047244838165338735023408456470580773088295917476714036319800818712901187549131054712658197623331044818386269515456334926366572897563400500428462801835170705278318394258821455212272512503275512160354698120058176216521282765275169129689778932238195734329339946437501907836945765883352399886755061649651847751807381688378610915273579297013376217784275219262340194239963916804498399317331273132924185707147349566916674687634660915035914677504995186714302352196288948901024233251169136196266227326746080059154747183079839286853520694694454072476841822524674417161514036427982273348055556214818971426179103425986472045168939894221798260880768528778364618279934631376775430780936333301898264209010848802521674670883215120185883543223812876952786713296124747824645386369930090493103636
197638780396218407357239979422340623539380833965132740801111666627891981488087797941876876144230030984490851411606618262936828367647447792391803351109890697907148578694408955299065364044742557608365997664579509666024396409905389607120198219976047599490197230297649139826800329731560371200413779037855660850892521673093931987275027546890690370753941304265231501194809377245048795150954100921645863754710598436791786391670211874924319957006419179697775990283006991536871371193661495281130587638027841075444973307840789923115535562561142322423255033685442488917353448899115014406480203690680639606723221932041495354150312888033953605329934036800697771065056663195481234880673210146739058568557934581403627822703280826165707739483275922328459417065250945123252306082291880205877731971983945018088807242966198081119777158542502016545090413245809786882778948721859617721078384350691861554356628840622574736922845095162084960398013400172393067166682355524525280460972253503534226472524250874054075591789781264330331690"""
first_digits = list(number[::50])
first_digits_int = list(map(int, first_digits))
result = 0
for n in first_digits_int:
result += n
print(result)
``` | 2021/02/01 | [
"https://Stackoverflow.com/questions/65995857",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15117090/"
] | Python allows you to iterate over a range with custom step sizes. So that should allow you to do something like:
```py
your_list = []
for idx in range(0, len(string), 50): # Indexes 0, 50, 100, so on
first_seven_digits = string[idx:idx+7] # Say, "1234567"
str_to_int = int(first_seven_digits) # Converts to the number 1234567
your_list.append(str_to_int) # Add the number to the list
your_sum = sum(your_list) # Find the sum
```
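As a quick sanity check that this stride-of-50 slicing grabs the right digits, here is a toy version with a hypothetical 3-row string (not the real data from the question):

```python
# Toy stand-in: three "rows" of 10 digits each; take the first 3 of every 10,
# mirroring "first 7 of every 50" in the question.
toy = "1234567890" + "2345678901" + "3456789012"
chunks = [int(toy[i:i + 3]) for i in range(0, len(toy), 10)]
print(chunks)       # [123, 234, 345]
print(sum(chunks))  # 702
```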
You store the numbers made up of those first 7 digits in a list, and finally, sum them up. | First of all, your number string is 4999 characters long, so you'll have to add one. Secondly, if you want to use numpy you could make a 100-by-50 array by reshaping the original 5000-long array, like this:
```
arr = np.array(list(number)).reshape(100, 50)
```
Then you can slice the array so that you take the first 7 elements of its second axis and all of the first, like this:
```
nums = arr[:, :7]
```
Then you can construct the result by iterating over every row of nums, joining the characters into a string, and summing the resulting integers:
```
res = sum([int("".join(n)) for n in nums])
```
So if we put all that together we get:
```
import numpy as np
number = """37107287533902102798797998228083759024651013574025046376937677490007126481248969700780504170182605387432498619952474105947423330951305812372661730962991942213363574161572522430563301811072406154908250230675882075393461711719803104210475137780632466768926167069662363382013637841838368417873436172675728112879812849979408065481931592621691275889832738442742289174325203219235894228767964876702721893184745144573600130643909116721685684458871160315327670386486105843025439939619828917593665686757934951621764571418565606295021572231965867550793241933316490635246274190492910143244581382266334794475817892575867718337217661963751590579239728245598838407582035653253593990084026335689488301894586282278288018119938482628201427819413994056758715117009439035398664372827112653829987240784473053190104293586865155060062958648615320752733719591914205172558297169388870771546649911559348760353292171497005693854370070576826684624621495650076471787294438377604532826541087568284431911906346940378552177792951453612327252500029607107508256381565671088525835072145876576172410976447339110607218265236877223636045174237069058518606604482076212098132878607339694128114266041808683061932846081119106155694051268969251934325451728388641918047049293215058642563049483624672216484350762017279180399446930047329563406911573244438690812579451408905770622942919710792820955037687525678773091862540744969844508330393682126183363848253301546861961243487676812975343759465158038628759287849020152168555482871720121925776695478182833757993103614740356856449095527097864797581167263201004368978425535399209318374414978068609844840309812907779179908821879532736447567559084803087086987551392711854517078544161852424320693150332599594068957565367821070749269665376763262354472106979395067965269474259770973916669376304263398708541052684708299085211399427365734116182760315001271653786073615010808570091499395125570281987460043753582903531743471732693212357815498262974255273730794953759765105305946966067683156574377
16740187527588902802571733229619176668713819931811048770190271252676802760780030136786809925254634010616328665263627021854049770558562994658063623799314074625596224074486908231174977792365466257246923322810917141914302881971032885978066697608929386382850253334033441306557801612781592181500556186883646842009047023053081172816430487623791969842487255036638784583114876969321549028104240201383351244621814417734706378329949063625966649858761822122522551248676453367720186971698544312419572409913959008952310058822955482553002635207815322967962494816419538682187747608532713228572311042480345612486769706450799523637774242535411291684276865538926205024910326572967237019132757256752856532482582654630922070585965222979886027225833191312637514734199488953476574550118495701454879288984856827726077713721403798879715382982037830314735277215803481445134913732266513813482954382919991818027891652243102739225112286953940957953066405232632538044100059654939159879593635297461521855023713076422551211836938035803885849034169811622207297718615823667842468915799353296192262467957194401269043877107275048102390895523597457231897067725479150615055049539229795309011299675198618808822587531452958409925120382900940777077567211306739708304724483816533873502340845647058077308829591747671403631980081871290118754913105471265819762333104481838626951545633492636657289756340050042846280183517070527831839425882145521227251250327551216035469812005817621652128276527516912968977893223819573432933994643750190783694576588335239988675506164965184775180738168837861091527357929701337621778427521926234019423996391680449839931733127313292418570714734956691667468763466091503591467750499518671430235219628894890102423325116913619626622732674608005915474718307983928685352069469445407247684182252467441716151403642798227334805555621481897142617910342598647204516893989422179826088076852877836461827993463137677543078093633330189826420901084880252167467088321512018588354322381287695278671329612474782464538636993009049310363
6197638780396218407357239979422340623539380833965132740801111666627891981488087797941876876144230030984490851411606618262936828367647447792391803351109890697907148578694408955299065364044742557608365997664579509666024396409905389607120198219976047599490197230297649139826800329731560371200413779037855660850892521673093931987275027546890690370753941304265231501194809377245048795150954100921645863754710598436791786391670211874924319957006419179697775990283006991536871371193661495281130587638027841075444973307840789923115535562561142322423255033685442488917353448899115014406480203690680639606723221932041495354150312888033953605329934036800697771065056663195481234880673210146739058568557934581403627822703280826165707739483275922328459417065250945123252306082291880205877731971983945018088807242966198081119777158542502016545090413245809786882778948721859617721078384350691861554356628840622574736922845095162084960398013400172393067166682355524525280460972253503534226472524250874054075591789781264330331690"""
arr = np.array(list(number)).reshape(100, 50)
nums = arr[:, :7]
res = sum([int("".join(n)) for n in nums])
print(res)
``` | 22 |
21,307,128 | Since I have to mock a static method, I am using **PowerMock** to test my application.
My application uses **Camel 2.12**.
I define routes in *XML* that is read by *camel-spring* context.
There were no issues when `Junit` alone was used for testing.
While using power mock, I get the error listed at the end of the post.
I have also listed the XML used.
*Camel* is unable to recognize any of its tags when power mock is used.
I wonder whether the byte-level manipulation done by PowerMock to mock static methods interferes with the Camel engine in some way. Let me know what could possibly be wrong.
PS:
The problem disappears if I do not use power mock.
+++++++++++++++++++++++++ Error +++++++++++++++++++++++++++++++++++++++++++++++++
```
[ main] CamelNamespaceHandler DEBUG Using org.apache.camel.spring.CamelContextFactoryBean as CamelContextBeanDefinitionParser
org.springframework.beans.factory.BeanDefinitionStoreException: Failed to parse JAXB element; nested exception is javax.xml.bind.UnmarshalException: unexpected element (uri:"http://camel.apache.org/schema/spring", local:"camelContext"). Expected elements are <{}aggregate>,<{}aop>,<{}avro>,<{}base64>,<{}batchResequencerConfig>,<{}bean>,<{}beanPostProcessor>,<{}beanio>,<{}bindy>,<{}camelContext>,<{}castor>,<{}choice>,<{}constant>,<{}consumerTemplate>,<{}contextScan>,<{}convertBodyTo>,<{}crypto>,<{}csv>,<{}customDataFormat>,<{}customLoadBalancer>,<{}dataFormats>,<{}delay>,<{}description>,<{}doCatch>,<{}doFinally>,<{}doTry>,<{}dynamicRouter>,<{}el>,<{}endpoint>,<{}enrich>,<{}errorHandler>,<{}export>,<{}expression>,<{}expressionDefinition>,<{}failover>,<{}filter>,<{}flatpack>,<{}from>,<{}groovy>,<{}gzip>,<{}header>,<{}hl7>,<{}idempotentConsumer>,<{}inOnly>,<{}inOut>,<{}intercept>,<{}interceptFrom>,<{}interceptToEndpoint>,<{}javaScript>,<{}jaxb>,<{}jibx>,<{}jmxAgent>,<{}json>,<{}jxpath>,<{}keyStoreParameters>,<{}language>,<{}loadBalance>,<{}log>,<{}loop>,<{}marshal>,<{}method>,<{}multicast>,<{}mvel>,<{}ognl>,<{}onCompletion>,<{}onException>,<{}optimisticLockRetryPolicy>,<{}otherwise>,<{}packageScan>,<{}pgp>,<{}php>,<{}pipeline>,<{}policy>,<{}pollEnrich>,<{}process>,<{}properties>,<{}property>,<{}propertyPlaceholder>,<{}protobuf>,<{}proxy>,<{}python>,<{}random>,<{}recipientList>,<{}redeliveryPolicy>,<{}redeliveryPolicyProfile>,<{}ref>,<{}removeHeader>,<{}removeHeaders>,<{}removeProperty>,<{}resequence>,<{}rollback>,<{}roundRobin>,<{}route>,<{}routeBuilder>,<{}routeContext>,<{}routeContextRef>,<{}routes>,<{}routingSlip>,<{}rss>,<{}ruby>,<{}sample>,<{}secureRandomParameters>,<{}secureXML>,<{}serialization>,<{}setBody>,<{}setExchangePattern>,<{}setFaultBody>,<{}setHeader>,<{}setOutHeader>,<{}setProperty>,<{}simple>,<{}soapjaxb>,<{}sort>,<{}spel>,<{}split>,<{}sql>,<{}sslContextParameters>,<{}sticky>,<{}stop>,<{}streamCaching>,<{}streamResequencerConfig>,<{}string>,<{}syslog>,<
{}template>,<{}threadPool>,<{}threadPoolProfile>,<{}threads>,<{}throttle>,<{}throwException>,<{}tidyMarkup>,<{}to>,<{}tokenize>,<{}topic>,<{}transacted>,<{}transform>,<{}unmarshal>,<{}validate>,<{}vtdxml>,<{}weighted>,<{}when>,<{}wireTap>,<{}xmlBeans>,<{}xmljson>,<{}xmlrpc>,<{}xpath>,<{}xquery>,<{}xstream>,<{}zip>,<{}zipFile> at org.apache.camel.spring.handler.CamelNamespaceHandler.parseUsingJaxb(CamelNamespaceHandler.java:169)
at org.apache.camel.spring.handler.CamelNamespaceHandler$CamelContextBeanDefinitionParser.doParse(CamelNamespaceHandler.java:307)
at org.springframework.beans.factory.xml.AbstractSingleBeanDefinitionParser.parseInternal(AbstractSingleBeanDefinitionParser.java:85)
at org.springframework.beans.factory.xml.AbstractBeanDefinitionParser.parse(AbstractBeanDefinitionParser.java:59)
at org.springframework.beans.factory.xml.NamespaceHandlerSupport.parse(NamespaceHandlerSupport.java:73)
at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1438)
at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1428)
at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.parseBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:185)
at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.doRegisterBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:139)
at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.registerBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:108)
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registerBeanDefinitions(XmlBeanDefinitionReader.java:493)
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:390)
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:334)
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:302)
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:174)
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:209)
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:180)
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:243)
at org.springframework.context.support.AbstractXmlApplicationContext.loadBeanDefinitions(AbstractXmlApplicationContext.java:127)
at org.springframework.context.support.AbstractXmlApplicationContext.loadBeanDefinitions(AbstractXmlApplicationContext.java:93)
at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:130)
at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:537)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:451)
at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:139)
at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:83)
at org.apache.camel.spring.SpringCamelContext.springCamelContext(SpringCamelContext.java:100)
at com.ericsson.bss.edm.integrationFramework.Context.<init>(Context.java:50)
at com.ericsson.bss.edm.integrationFramework.RouteEngine.main(RouteEngine.java:55)
at com.ericsson.bss.edm.integrationFramework.RouteEngineTest.testMultiRouteCondition(RouteEngineTest.java:174)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:66)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:312)
at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:86)
at org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:94)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:296)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:112)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:73)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:284)
at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:84)
at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:49)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:209)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:148)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:122)
at org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34)
at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:120)
at org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.run(JUnit4TestSuiteChunkerImpl.java:102)
at org.powermock.modules.junit4.common.internal.impl.AbstractCommonPowerMockRunner.run(AbstractCommonPowerMockRunner.java:53)
at org.powermock.modules.junit4.PowerMockRunner.run(PowerMockRunner.java:42)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:24)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.junit.runner.JUnitCore.run(JUnitCore.java:157)
at org.junit.runner.JUnitCore.run(JUnitCore.java:136)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:62)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:139)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
Caused by: javax.xml.bind.UnmarshalException: unexpected element (uri:"http://camel.apache.org/schema/spring", local:"camelContext"). Expected elements are <{}aggregate>,<{}aop>,<{}avro>,<{}base64>,<{}batchResequencerConfig>,<{}bean>,<{}beanPostProcessor>,<{}beanio>,<{}bindy>,<{}camelContext>,<{}castor>,<{}choice>,<{}constant>,<{}consumerTemplate>,<{}contextScan>,<{}convertBodyTo>,<{}crypto>,<{}csv>,<{}customDataFormat>,<{}customLoadBalancer>,<{}dataFormats>,<{}delay>,<{}description>,<{}doCatch>,<{}doFinally>,<{}doTry>,<{}dynamicRouter>,<{}el>,<{}endpoint>,<{}enrich>,<{}errorHandler>,<{}export>,<{}expression>,<{}expressionDefinition>,<{}failover>,<{}filter>,<{}flatpack>,<{}from>,<{}groovy>,<{}gzip>,<{}header>,<{}hl7>,<{}idempotentConsumer>,<{}inOnly>,<{}inOut>,<{}intercept>,<{}interceptFrom>,<{}interceptToEndpoint>,<{}javaScript>,<{}jaxb>,<{}jibx>,<{}jmxAgent>,<{}json>,<{}jxpath>,<{}keyStoreParameters>,<{}language>,<{}loadBalance>,<{}log>,<{}loop>,<{}marshal>,<{}method>,<{}multicast>,<{}mvel>,<{}ognl>,<{}onCompletion>,<{}onException>,<{}optimisticLockRetryPolicy>,<{}otherwise>,<{}packageScan>,<{}pgp>,<{}php>,<{}pipeline>,<{}policy>,<{}pollEnrich>,<{}process>,<{}properties>,<{}property>,<{}propertyPlaceholder>,<{}protobuf>,<{}proxy>,<{}python>,<{}random>,<{}recipientList>,<{}redeliveryPolicy>,<{}redeliveryPolicyProfile>,<{}ref>,<{}removeHeader>,<{}removeHeaders>,<{}removeProperty>,<{}resequence>,<{}rollback>,<{}roundRobin>,<{}route>,<{}routeBuilder>,<{}routeContext>,<{}routeContextRef>,<{}routes>,<{}routingSlip>,<{}rss>,<{}ruby>,<{}sample>,<{}secureRandomParameters>,<{}secureXML>,<{}serialization>,<{}setBody>,<{}setExchangePattern>,<{}setFaultBody>,<{}setHeader>,<{}setOutHeader>,<{}setProperty>,<{}simple>,<{}soapjaxb>,<{}sort>,<{}spel>,<{}split>,<{}sql>,<{}sslContextParameters>,<{}sticky>,<{}stop>,<{}streamCaching>,<{}streamResequencerConfig>,<{}string>,<{}syslog>,<{}template>,<{}threadPool>,<{}threadPoolProfile>,<{}threads>,<{}throttle>,<{}throwException>,<{}tidyMar
kup>,<{}to>,<{}tokenize>,<{}topic>,<{}transacted>,<{}transform>,<{}unmarshal>,<{}validate>,<{}vtdxml>,<{}weighted>,<{}when>,<{}wireTap>,<{}xmlBeans>,<{}xmljson>,<{}xmlrpc>,<{}xpath>,<{}xquery>,<{}xstream>,<{}zip>,<{}zipFile>
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.handleEvent(UnmarshallingContext.java:647)
at com.sun.xml.bind.v2.runtime.unmarshaller.Loader.reportError(Loader.java:258)
at com.sun.xml.bind.v2.runtime.unmarshaller.Loader.reportError(Loader.java:253)
at com.sun.xml.bind.v2.runtime.unmarshaller.Loader.reportUnexpectedChildElement(Loader.java:120)
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext$DefaultRootLoader.childElement(UnmarshallingContext.java:1052)
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext._startElement(UnmarshallingContext.java:483)
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.startElement(UnmarshallingContext.java:464)
at com.sun.xml.bind.v2.runtime.unmarshaller.InterningXmlVisitor.startElement(InterningXmlVisitor.java:75)
at com.sun.xml.bind.v2.runtime.unmarshaller.SAXConnector.startElement(SAXConnector.java:152)
at com.sun.xml.bind.unmarshaller.DOMScanner.visit(DOMScanner.java:244)
at com.sun.xml.bind.unmarshaller.DOMScanner.scan(DOMScanner.java:127)
at com.sun.xml.bind.unmarshaller.DOMScanner.scan(DOMScanner.java:105)
at com.sun.xml.bind.v2.runtime.BinderImpl.associativeUnmarshal(BinderImpl.java:161)
at com.sun.xml.bind.v2.runtime.BinderImpl.unmarshal(BinderImpl.java:132)
at org.apache.camel.spring.handler.CamelNamespaceHandler.parseUsingJaxb(CamelNamespaceHandler.java:167)
... 72 more
```
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++ Route.xml +++++++++++++++++++++++++++++++++++++++++++++
```
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://camel.apache.org/schema/spring
http://camel.apache.org/schema/spring/camel-spring.xsd">
<camelContext xmlns="http://camel.apache.org/schema/spring">
<route id="simpleroute">
<from uri="ftp://admin@x.y.z.a:2121/?password=admin&noop=true&maximumReconnectAttempts=3&download=false&delay=2000&throwExceptionOnConnectFailed=true;"/>
<to uri="file:/home/emeensa/NetBeansProjects/CamelFileCopier/output" />
</route>
</camelContext>
</beans>
```
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ | 2014/01/23 | [
"https://Stackoverflow.com/questions/21307128",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2345966/"
] | This error message usually means that your specified truststore can not be read. What I would check:
* Is the path correct? (I'm sure you checked this...)
* Has the user who started the JVM enough access privileges to read the
trustore?
* When do you set the system properties? Are they already set when the webservice is invoked?
* Perhaps another component has overridden the values. Are the system properties still set when the webservice is invoked?
* Does the trustore contains the Salesforce certificate and is the file not corrupt (e.g. check with `keytool -list`)?
**Edit:**
* Don't use `System.setProperty` but set the options when starting the Java process with `-Djavax.net.ssl.XXX`. The reason for this advice is as follows: The IBM security framework may read the options **before** you set the property (e.g. in a `static` block of a class). Of course this is framework specific and may change from version to version. | ```
Caused by: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
```
>
> * In my case, I have 2 duplicate Java installations (OpenJDK and
> JDK-17).
> * I installed JDK-17 after configuring environment variable for OpenJDK and before uninstalling OpenJDK.
> * So, maybe that is the problem.
>
>
>
This is how I SOLVED it **in my case:**
* First, I have completely removed openJDK and JDK-17 from my computer (including JDK-17/lib/security/cacerts).
* Then, I deleted the java environment variable and restarted the computer.
* Next, I thoroughly checked that there aren't any JDKs on the computer anymore.
* Finally, I just reinstalled JDK-17 (JDK-17/lib/security/cacerts is default). And it worked fine for me.
**Note:** kill any Java runtime tasks before uninstalling them. | 23 |
49,059,660 | I am looking for a simple way to constantly monitor a log file, and send me an email notification every time this log file has changed (new lines have been added to it).
The system runs on a Raspberry Pi 2 (OS Raspbian / Debian Stretch) and the log monitors a GPIO Python script running as a daemon.
I need something very simple and lightweight; I don't even care to have the text of the new log entry, because I know what it says: it is always the same 24 lines of text at the end.
Also, the log.txt file gets recreated every day at midnight, so that might represent another issue.
I already have a working python script to send me a simple email via gmail (called it sendmail.py)
What I tried so far was creating and running the following bash script:
monitorlog.sh
```
#!/bin/bash
tail -F log.txt | python ./sendmail.py
```
The problem is that it just sends an email every time I execute it, but when the log actually changes, it just quits.
I am really new to linux so apologies if I missed something.
Cheers | 2018/03/01 | [
"https://Stackoverflow.com/questions/49059660",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9431262/"
] | You asked for simple:
```
#!/bin/bash
cur_line_count="$(wc -l < log.txt)"
while true
do
new_line_count="$(wc -l < log.txt)"
if [ "$cur_line_count" != "$new_line_count" ]
then
python ./sendmail.py
fi
cur_line_count="$new_line_count"
sleep 5
done
``` | I've done this a bunch of different ways. If you run a cron job every minute that counts the number of lines (wc -l) compares that to a stored count (e.g. in /tmp/myfilecounter) and sends the emails when the numbers are different.
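If you'd rather keep everything in Python instead of bash, a minimal line-count poller could look like this (a sketch; `notify` is a hypothetical stand-in for whatever your sendmail.py exposes):

```python
import time

def count_lines(path):
    """Line count of path; 0 if it doesn't exist (log.txt is recreated at midnight)."""
    try:
        with open(path) as f:
            return sum(1 for _ in f)
    except FileNotFoundError:
        return 0

def watch(path, notify, interval=5, max_cycles=None):
    """Poll path every `interval` seconds and call notify() when the line count changes."""
    last = count_lines(path)
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        time.sleep(interval)
        current = count_lines(path)
        if current != last:
            notify()  # hook in your sendmail.py logic here
            last = current
        cycles += 1
```

You'd run it as e.g. `watch('log.txt', send_my_mail)`; since a missing file counts as 0 lines, the midnight re-creation of log.txt shows up as an ordinary change.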
If you have inotify, there are more direct ways to get "woken up" when the file changes, e.g <https://serverfault.com/a/780522/97447> or <https://serverfault.com/search?q=inotifywait>.
If you don't mind adding a package to the system, incron is a very convenient way to run a script whenever a file or directory is modified, and it looks like it's supported on raspbian (internally it uses inotify). <https://www.linux.com/learn/how-use-incron-monitor-important-files-and-folders>. Looks like it's as simple as:
```
sudo apt-get install incron
sudo vi /etc/incron.allow # Add your userid to this file (or just rm /etc/incron.allow to let everyone use incron)
incron -e # Add the following line to the "cron" file
/path/to/log.txt IN_MODIFY python ./sendmail.py
```
And you'd be done! | 24 |
56,794,886 | guys! So I recently started learning about python classes and objects.
For instance, I have a following list of strings:
```
alist = ["Four", "Three", "Five", "One", "Two"]
```
Which is comparable to a class of Numbers I have:
```
class Numbers(object):
One=1
Two=2
Three=3
Four=4
Five=5
```
How could I convert `alist` into
```
alist = [4, 3, 5, 1, 2]
```
based on the class above?
My initial thought was to create a new (empty) list and use a `for loop` that adds the corresponding object value (e.g. `Numbers.One`) to the empty list as it goes through `alist`. But I'm unsure whether that'd be the most efficient solution.
Therefore, I was wondering if there was a simpler way of completing this task using Python Classes / Inheritance.
I hope someone can help me and explain to me what way would work better and why!
Thank you!! | 2019/06/27 | [
"https://Stackoverflow.com/questions/56794886",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10713538/"
] | If you are set on using the class, one way would be to use [`__getattribute__()`](https://docs.python.org/3/reference/datamodel.html#object.__getattribute__)
```
print([Numbers().__getattribute__(a) for a in alist])
#[4, 3, 5, 1, 2]
```
But a much better (and more pythonic IMO) way would be to use a `dict`:
```
NumbersDict = dict(
One=1,
Two=2,
Three=3,
Four=4,
Five=5
)
print([NumbersDict[a] for a in alist])
#[4, 3, 5, 1, 2]
``` | **EDIT:** I suppose that the words and numbers are just a trivial example, a dictionary is the right way to do it if that's not the case as written in the comments.
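A third option, not mentioned in either answer, is the standard-library `enum` module, which gives the same name-to-value lookup with less machinery:

```python
from enum import IntEnum

class Numbers(IntEnum):
    One = 1
    Two = 2
    Three = 3
    Four = 4
    Five = 5

alist = ["Four", "Three", "Five", "One", "Two"]
nlist = [Numbers[name].value for name in alist]  # look up members by name
print(nlist)  # [4, 3, 5, 1, 2]
```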
Your assumptions are correct - either create an empty list and populate it using for loop, or use list comprehension with a for loop to create a new list with the required elements.
Empty list with for loop
========================
```py
#... Numbers class defined above
alist = ["Four", "Three", "Five", "One", "Two"]
nlist = []
numbers = Numbers()
for anumber in alist:
nlist.append(getattr(numbers, anumber))
print(nlist)
[4, 3, 5, 1, 2]
```
List comprehension with for loop
================================
```py
#... Numbers class defined above
alist = ["Four", "Three", "Five", "One", "Two"]
numbers = Numbers()
nlist = [getattr(numbers, anumber) for anumber in alist]
print(nlist)
[4, 3, 5, 1, 2]
``` | 25 |
36,108,377 | I want to count the number of times a word is being repeated in the review string
I am reading the csv file and storing it in a python dataframe using the below line
```
reviews = pd.read_csv("amazon_baby.csv")
```
The code in the below lines work when I apply it to a single review.
```
print reviews["review"][1]
a = reviews["review"][1].split("disappointed")
print a
b = len(a)
print b
```
The output for the above lines were
```
it came early and was not disappointed. i love planet wise bags and now my wipe holder. it keps my osocozy wipes moist and does not leak. highly recommend it.
['it came early and was not ', '. i love planet wise bags and now my wipe holder. it keps my osocozy wipes moist and does not leak. highly recommend it.']
2
```
When I apply the same logic to the entire dataframe using the below line, I receive an error message:
```
reviews['disappointed'] = len(reviews["review"].split("disappointed"))-1
```
Error message:
```
Traceback (most recent call last):
File "C:/Users/gouta/PycharmProjects/MLCourse1/Classifier.py", line 12, in <module>
reviews['disappointed'] = len(reviews["review"].split("disappointed"))-1
File "C:\Users\gouta\Anaconda2\lib\site-packages\pandas\core\generic.py", line 2360, in __getattr__
(type(self).__name__, name))
AttributeError: 'Series' object has no attribute 'split'
``` | 2016/03/19 | [
"https://Stackoverflow.com/questions/36108377",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2861976/"
] | You're trying to split the entire review column of the data frame (which is the Series mentioned in the error message). What you want to do is apply a function to each row of the data frame, which you can do by calling [apply](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html) on the data frame:
```
f = lambda x: len(x["review"].split("disappointed")) -1
reviews["disappointed"] = reviews.apply(f, axis=1)
``` | Well, the problem is with:
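For what it's worth, pandas can also do this fully vectorized, without `apply`: `Series.str.count` counts pattern occurrences per row. A sketch with made-up reviews (not the amazon_baby data):

```python
import pandas as pd

reviews = pd.DataFrame({"review": [
    "was not disappointed. highly recommend it.",
    "disappointed, very disappointed",
    "love it",
]})
reviews["disappointed"] = reviews["review"].str.count("disappointed")
print(reviews["disappointed"].tolist())  # [1, 2, 0]
```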
```
reviews["review"]
```
The above is a Series. In your first snippet, you are doing this:
```
reviews["review"][1].split("disappointed")
```
That is, you are putting an index for the review. You could try looping over all rows of the column and perform your desired action. For example:
```
for index, row in reviews.iterrows():
print len(row['review'].split("disappointed"))
``` | 28 |
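Beyond the row-wise approaches above, pandas also offers a fully vectorized route via the `.str` accessor — a sketch, assuming every entry in the `review` column is a string (the sample data here is illustrative):

```python
import pandas as pd

reviews = pd.DataFrame({"review": [
    "it came early and was not disappointed.",
    "very disappointed, would not buy again. disappointed twice.",
]})

# .str.count applies the count to every row at once (the pattern is a regex,
# which is harmless here since "disappointed" has no special characters)
reviews["disappointed"] = reviews["review"].str.count("disappointed")
print(reviews["disappointed"].tolist())  # [1, 2]
```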
Let's say we have the following list. This list contains the response times of a REST server in a traffic run.
[1, 2, 3, 3, 4, 5, 6, 7, 9, 1]
I need the following output:
```
Percentage of the requests served within a certain time (ms)
 50%   3
 60%   4
 70%   5
 80%   6
 90%   7
100%   9
```
How can we get this done in Python? This is Apache Bench-style output. So basically, let's say at 50%, we need to find the point in the list below which 50% of the list elements are present, and so on. | 2022/05/21 | [
"https://Stackoverflow.com/questions/72329252",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4137009/"
] | You can try something like this:
```
responseTimes = [1, 2, 3, 3, 4, 5, 6, 7, 9, 1]
for time in range(3,10):
percentage = len([x for x in responseTimes if x <= time])/(len(responseTimes))
print(f'{percentage*100}%')
```
>
> *"So basically lets say at 50%, we need to find point in list below which 50% of the list elements are present and so on"*
>
>
>
```
responseTimes = [1, 2, 3, 3, 4, 5, 6, 7, 9, 1]
percentage = 0
time = 0
while(percentage <= 0.5):
percentage = len([x for x in responseTimes if x <= time])/(len(responseTimes))
time+=1
print(f'Every time under {time}(ms) occurs lower than 50% of the time')
``` | You basically need to compute the cumulative ratio of the sorted response times.
```py
from collections import Counter
values = [1, 2, 3, 3, 4, 5, 6, 7, 9, 1]
frequency = Counter(values) # {1: 2, 2: 1, 3: 2, ...}
total = 0
n = len(values)
for time in sorted(frequency):
total += frequency[time]
print(time, f'{100*total/n}%')
```
This will print all times with the corresponding ratios.
```py
1 20.0%
2 30.0%
3 50.0%
4 60.0%
5 70.0%
6 80.0%
7 90.0%
9 100.0%
``` | 33 |
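To produce exactly the Apache Bench-style table from the question (the value at or below which p% of the entries fall), one sketch is to index into the sorted list:

```python
import math

def ab_table(times, percents=(50, 60, 70, 80, 90, 100)):
    """Value at or below which p% of the entries fall, for each p."""
    ordered = sorted(times)
    n = len(ordered)
    return {p: ordered[math.ceil(p * n / 100) - 1] for p in percents}

print(ab_table([1, 2, 3, 3, 4, 5, 6, 7, 9, 1]))
# {50: 3, 60: 4, 70: 5, 80: 6, 90: 7, 100: 9}
```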
In Python I have three one-dimensional arrays of different shapes (like the ones given below):
```
a0 = np.array([5,6,7,8,9])
a1 = np.array([1,2,3,4])
a2 = np.array([11,12])
```
I am assuming that the array `a0` corresponds to an index `i=0`, `a1` corresponds to index `i=1` and `a2` corresponds to `i=2`. With these assumptions I want to construct a new two dimensional array where the rows would correspond to indices of the arrays (`i=0,1,2`) and the columns would be entries of the arrays `a0, a1, a2`.
In the example that I have given here, I would like the two-dimensional array to look like
```
result = np.array([ [0,5], [0,6], [0,7], [0,8], [0,9], [1,1], [1,2],\
[1,3], [1,4], [2,11], [2,12] ])
```
I would very much appreciate an answer as to how I can achieve this. In the actual problem that I am working on, I am dealing with more than three one-dimensional arrays, so it would be very nice if the answer takes this into consideration. | 2018/05/08 | [
"https://Stackoverflow.com/questions/50239640",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3761166/"
] | You can use `numpy` stack functions to speed up:
```
aa = [a0, a1, a2]
np.hstack(tuple(np.vstack((np.full(ai.shape, i), ai)) for i, ai in enumerate(aa))).T
``` | One way to do this would be a simple list comprehension:
```
result = np.array([[i, arr_v] for i, arr in enumerate([a0, a1, a2])
for arr_v in arr])
>>> result
array([[ 0, 5],
[ 0, 6],
[ 0, 7],
[ 0, 8],
[ 0, 9],
[ 1, 1],
[ 1, 2],
[ 1, 3],
[ 1, 4],
[ 2, 11],
[ 2, 12]])
```
Addressing your concern about scaling this to more arrays, you can easily add as many arrays as you wish by simply creating a list of your array names, and using that list as the argument to `enumerate`:
```
.... for i, arr in enumerate(my_list_of_arrays) ...
``` | 34 |
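Along the same lines, but without a Python-level loop over the individual elements, the index column can be built with `np.repeat` and paired with the concatenated values — a sketch:

```python
import numpy as np

a0 = np.array([5, 6, 7, 8, 9])
a1 = np.array([1, 2, 3, 4])
a2 = np.array([11, 12])
arrays = [a0, a1, a2]

# one index per element of each array, then pair indices with values
idx = np.repeat(np.arange(len(arrays)), [len(a) for a in arrays])
result = np.column_stack((idx, np.concatenate(arrays)))
print(result.tolist())
# [[0, 5], [0, 6], [0, 7], [0, 8], [0, 9], [1, 1], [1, 2], [1, 3], [1, 4], [2, 11], [2, 12]]
```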
I am accessing a Google Sheet via Python.
The Google Sheet looks like the following:
[![enter image description here](https://i.stack.imgur.com/eIW7v.png)](https://i.stack.imgur.com/eIW7v.png)
But when I access it via:
```
self.probe=[]
self.scope = ['https://spreadsheets.google.com/feeds']
self.creds = ServiceAccountCredentials.from_json_keyfile_name('client_secret.json', self.scope)
self.client = gspread.authorize(self.creds)
self.sheet = self.client.open('Beziehende').sheet1
self.probe = self.sheet.get_all_records()
print(self.probe)
```
it results in [![enter image description here](https://i.stack.imgur.com/2tHia.png)](https://i.stack.imgur.com/2tHia.png)
How can I get the results in the same order as they are written in the Google Sheet?
Thank you for your help.
**Edit** Sorry, here is some more information. My program has two functions:
1.) It can check if a name / address etc. is already in the database. If the name is in the database, it prints all the information about that person.
2.) It lets me add people's information to the database.
**The Problem**: I am loading the whole database into the list and later writing it all back. But when writing it back, the order gets messed up, as the get\_all\_records stored it in a random order. (This is the very first program I have ever written by myself, so please forgive the bad coding).
I wanted to know if there is a possibility to get the data in order, but if not, then I just have to find a way to write only the newest entry online (which is probably more efficient anyway, I guess...).
```
def create_window(self):
self.t = Toplevel(self)
self.t.geometry("250x150")
Message(self.t, text="Name", width=100, anchor=W).grid(row=1, column=1)
self.name_entry = Entry(self.t)
self.name_entry.grid(row=1, column=2)
Message(self.t, text="Adresse", width=100, anchor=W).grid(row=2, column=1)
self.adr_entry = Entry(self.t)
self.adr_entry.grid(row=2, column=2)
Message(self.t, text="Organisation", width=100, anchor=W).grid(row=3, column=1)
self.org_entry = Entry(self.t)
self.org_entry.grid(row=3, column=2)
Message(self.t, text="Datum", width=100, anchor=W).grid(row=4, column=1)
self.date_entry = Entry(self.t)
self.date_entry.grid(row=4, column=2)
self.t.button = Button(self.t, text="Speichern", command=self.verify).grid(row=5, column=2)
#name
#window = Toplevel(self.insert_window)
def verify(self):
self.ver = Toplevel(self)
self.ver.geometry("300x150")
self.ver.grid_columnconfigure(1, minsize=100)
Message(self.ver, text=self.name_entry.get(), width=100).grid(row=1, column=1)
Message(self.ver, text=self.adr_entry.get(), width=100).grid(row=2, column=1)
Message(self.ver, text=self.org_entry.get(), width=100).grid(row=3, column=1)
Message(self.ver, text=self.date_entry.get(), width=100).grid(row=4, column=1)
confirm_button=Button(self.ver, text='Bestätigen', command=self.data_insert).grid(row=4, column=1)
cancle_button=Button(self.ver, text='Abbrechen', command=self.ver.destroy).grid(row=4, column=2)
def data_insert(self):
new_dict = collections.OrderedDict()
new_dict['name'] = self.name_entry.get()
new_dict['adresse'] = self.adr_entry.get()
new_dict['organisation'] = self.org_entry.get()
new_dict['datum'] = self.date_entry.get()
print(new_dict)
self.probe.append(new_dict)
#self.sheet.update_acell('A4',new_dict['name'])
self.update_gsheet()
self.ver.destroy()
self.t.destroy()
def update_gsheet(self):
i = 2
for dic_object in self.probe:
j = 1
for category in dic_object:
self.sheet.update_cell(i,j,dic_object[category])
j += 1
i += 1
def search(self):
print(self.probe)
self.result = []
self.var = self.entry.get() #starting index better
self.search_algo()
self.outputtext.delete('1.0', END)
for dict in self.result:
print(dict['Name'], dict['Adresse'], dict['Organisation'])
self.outputtext.insert(END, dict['Name'] + '\n')
self.outputtext.insert(END, dict['Adresse']+ '\n')
self.outputtext.insert(END, dict['Organisation']+ '\n')
self.outputtext.insert(END, 'Erhalten am '+dict['Datum']+'\n'+'\n')
if not self.result:
self.outputtext.insert(END, 'Name not found')
return FALSE
return TRUE
def search_algo(self):
category = self.v.get()
print(category)
for dict_object in self.probe:
if dict_object[category] == self.var:
self.result.append(dict_object)
``` | 2017/08/29 | [
"https://Stackoverflow.com/questions/45939564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3554329/"
] | I'm not familiar with gspread, which appears to be a third-party client for the Google Sheets API, but it looks like you should be using [`get_all_values`](https://github.com/burnash/gspread#getting-all-values-from-a-worksheet-as-a-list-of-lists) rather than `get_all_records`. That will give you a list of lists, rather than a list of dicts. | Python dictionaries are unordered. There is the [OrderedDict](https://docs.python.org/3.6/library/collections.html#collections.OrderedDict) in collections, but hard to say more about what the best course of action should be without more insight into why you need this dictionary ordered... | 36 |
In a virtualenv with Python 3.7.2, I am trying to run Django's `python manage.py startapp myapp` and I get this error:
```
raise ImproperlyConfigured('SQLite 3.8.3 or later is required (found %s).' % Database.sqlite_version)
django.core.exceptions.ImproperlyConfigured: SQLite 3.8.3 or later is required (found 3.8.2).
```
I'm running Ubuntu Trusty 14.04 Server.
How do I upgrade or update my sqlite version to >=3.8.3?
*I ran*
`$ apt list --installed | grep sqlite`
```
libaprutil1-dbd-sqlite3/trusty,now 1.5.3-1 amd64 [installed,automatic]
libdbd-sqlite3/trusty,now 0.9.0-2ubuntu2 amd64 [installed]
libsqlite3-0/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
libsqlite3-dev/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
python-pysqlite2/trusty,now 2.6.3-3 amd64 [installed]
python-pysqlite2-dbg/trusty,now 2.6.3-3 amd64 [installed]
sqlite3/trusty-updates,trusty-security,now 3.8.2-1ubuntu2.2 amd64 [installed]
```
*and*
`sudo apt install --only-upgrade libsqlite3-0`
```
Reading package lists... Done
Building dependency tree
Reading state information... Done
libsqlite3-0 is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 14 not upgraded.
```
EDIT:
the `settings.py` is stock standard:
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
``` | 2019/04/04 | [
"https://Stackoverflow.com/questions/55508830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6154769/"
] | I've just been through this. I had to install a separate newer version of SQLite, from
<https://www.sqlite.org/download.html>
That is in /usr/local/bin. Then I had to recompile Python, telling it to look there:
```
sudo LD_RUN_PATH=/usr/local/lib ./configure --enable-optimizations
sudo LD_RUN_PATH=/usr/local/lib make altinstall
```
To check which version of SQLite Python is using:
```
$ python
Python 3.7.3 (default, Apr 12 2019, 16:23:13)
>>> import sqlite3
>>> sqlite3.sqlite_version
'3.27.2'
``` | In addition to the above-mentioned answers: in case you experience this behaviour on Travis CI, add the `dist: xenial` directive to fix it. | 37 |
46,143,091 | I'm pretty new to python so it's a basic question.
I have data that I imported from a csv file. Each row reflects a person and their data. Two attributes are Sex and Pclass. I want to add a new column (predictions) that is fully dependent on those two, in one line. If both attributes' values are 1 it should assign 1 to the person's predictions field, and 0 otherwise.
How do I do it in one line (let's say with Pandas)? | 2017/09/10 | [
"https://Stackoverflow.com/questions/46143091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5252187/"
] | You could try adding a composite index
```
create index test on screenshot (DateTaken, id)
``` | Try running this query:
```
SELECT COUNT(*) as total
FROM screenshot
WHERE DateTaken BETWEEN '2000-05-01' AND '2000-06-10';
```
The reference to `ID` in the `SELECT` could be affecting the use of the index. | 45 |
71,568,396 | We are using a beam multi-language pipeline using python and java(ref <https://beam.apache.org/documentation/sdks/python-multi-language-pipelines/>). We are creating a cross-language pipeline using java. We have some external jar files that required a java library path. Code gets compiled properly and is able to create a jar file. When I run the jar file it creates a Grpc server but when I use the python pipeline to call External transform it is not picking up the java library path it picks the default java library path.
![jni_emdq required library path to overwrite](https://i.stack.imgur.com/N24DB.png)
Tried -Djava.library.path=<path\_to\_dll> while running jar file.
Tried System.setProperty("java.library.path", "/path/to/library").
(Ref <https://examples.javacodegeeks.com/java-library-path-what-is-java-library-and-how-to-use/>)
Tried JvmInitializer of beam to overwrite system property. (Ref <https://examples.javacodegeeks.com/java-library-path-what-is-java-library-and-how-to-use/>)
Tried pulling the Beam open-source code and overwriting the system property before the expansion starts. It overwrites the property, but the correct Java library path is still not picked up when the transform is called from Python. (ref <https://github.com/apache/beam/blob/master/sdks/java/expansion-service/src/main/java/org/apache/beam/sdk/expansion/service/ExpansionService.java>) | 2022/03/22 | [
"https://Stackoverflow.com/questions/71568396",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9648514/"
] | A Worksheet Change Event: Monitor Change in Column's Data
---------------------------------------------------------
* I personally would go with JvdV's suggestion in the comments.
* On each manual change of a cell, e.g. in column `A`, it will check the formula
`=SUM(A2:ALastRow)` in cell `A1` and if it is not correct it will overwrite it with the correct one.
* You can use this for multiple non-adjacent columns e.g. `"A,C:D,E"`.
* Nothing needs to be run. Just copy the code into the appropriate sheet module e.g. `Sheet1` and exit the Visual Basic Editor.
**Sheet Module e.g. `Sheet1` (not Standard Module e.g. `Module1`)**
```
Option Explicit
Private Sub Worksheet_Change(ByVal Target As Range)
UpdateFirstRowFormula Target, "A"
End Sub
Private Sub UpdateFirstRowFormula( _
ByVal Target As Range, _
ByVal ColumnList As String)
On Error GoTo ClearError
Dim ws As Worksheet: Set ws = Target.Worksheet
Dim Cols() As String: Cols = Split(ColumnList, ",")
Application.EnableEvents = False
Dim irg As Range, arg As Range, crg As Range, lCell As Range
Dim n As Long
Dim Formula As String
For n = 0 To UBound(Cols)
With ws.Columns(Cols(n))
With .Resize(.Rows.Count - 1).Offset(1)
Set irg = Intersect(.Cells, Target.EntireColumn)
End With
End With
If Not irg Is Nothing Then
For Each arg In irg.Areas
For Each crg In arg.Columns
Set lCell = crg.Find("*", , xlFormulas, , , xlPrevious)
If Not lCell Is Nothing Then
Formula = "=SUM(" & crg.Cells(1).Address(0, 0) & ":" _
& lCell.Address(0, 0) & ")"
With crg.Cells(1).Offset(-1)
If .Formula <> Formula Then .Formula = Formula
End With
End If
Next crg
Next arg
Set irg = Nothing
End If
Next n
SafeExit:
If Not Application.EnableEvents Then Application.EnableEvents = True
Exit Sub
ClearError:
Debug.Print "Run-time error '" & Err.Number & "': " & Err.Description
Resume SafeExit
End Sub
``` | Use a nested function as below:
=SUM(OFFSET(A2,,,COUNTA(A2:A26))) | 47 |
49,005,651 | This question is motivated by my another question: [How to await in cdef?](https://stackoverflow.com/questions/48989065/how-to-await-in-cdef)
There are tons of articles and blog posts on the web about `asyncio`, but they are all very superficial. I couldn't find any information about how `asyncio` is actually implemented, and what makes I/O asynchronous. I was trying to read the source code, but it's thousands of lines of not the highest grade C code, a lot of which deals with auxiliary objects, but most crucially, it is hard to connect between Python syntax and what C code it would translate into.
Asyncio's own documentation is even less helpful. There's no information there about how it works, only some guidelines about how to use it, which are also sometimes misleading / very poorly written.
I'm familiar with Go's implementation of coroutines, and was kind of hoping that Python did the same thing. If that was the case, the code I came up in the post linked above would have worked. Since it didn't, I'm now trying to figure out why. My best guess so far is as follows, please correct me where I'm wrong:
1. Procedure definitions of the form `async def foo(): ...` are actually interpreted as methods of a class inheriting `coroutine`.
2. Perhaps, `async def` is actually split into multiple methods by `await` statements, where the object, on which these methods are called is able to keep track of the progress it made through the execution so far.
3. If the above is true, then, essentially, execution of a coroutine boils down to calling methods of coroutine object by some global manager (loop?).
4. The global manager is somehow (how?) aware of when I/O operations are performed by Python (only?) code and is able to choose one of the pending coroutine methods to execute after the current executing method relinquished control (hit on the `await` statement).
In other words, here's my attempt at "desugaring" of some `asyncio` syntax into something more understandable:
```
async def coro(name):
print('before', name)
await asyncio.sleep()
print('after', name)
asyncio.gather(coro('first'), coro('second'))
# translated from async def coro(name)
class Coro(coroutine):
def before(self, name):
print('before', name)
def after(self, name):
print('after', name)
def __init__(self, name):
self.name = name
self.parts = self.before, self.after
self.pos = 0
    def __call__(self):
self.parts[self.pos](self.name)
self.pos += 1
def done(self):
return self.pos == len(self.parts)
# translated from asyncio.gather()
class AsyncIOManager:
def gather(*coros):
        while not all(c.done() for c in coros):
coro = random.choice(coros)
coro()
```
Should my guess prove correct, then I have a problem. How does I/O actually happen in this scenario? In a separate thread? Is the whole interpreter suspended while I/O happens outside the interpreter? What exactly is meant by I/O? If my Python procedure called the C `open()` procedure, and it in turn sent an interrupt to the kernel, relinquishing control to it, how does the Python interpreter know about this, and how is it able to continue running some other code while the kernel code does the actual I/O, until it wakes up the Python procedure which sent the interrupt originally? How can the Python interpreter, in principle, be aware of this happening? | 2018/02/27 | [
"https://Stackoverflow.com/questions/49005651",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5691066/"
] | It all boils down to the two main challenges that asyncio is addressing:
* How to perform multiple I/O in a single thread?
* How to implement cooperative multitasking?
The answer to the first point has been around for a long while and is called a [select loop](https://en.wikipedia.org/wiki/Asynchronous_I/O#Select(/poll)_loops). In python, it is implemented in the [selectors module](https://docs.python.org/3/library/selectors.html).
The second question is related to the concept of [coroutine](https://en.wikipedia.org/wiki/Coroutine), i.e. functions that can stop their execution and be restored later on. In python, coroutines are implemented using [generators](https://wiki.python.org/moin/Generators) and the [yield from](https://www.python.org/dev/peps/pep-0380/) statement. That's what is hiding behind the [async/await syntax](https://www.python.org/dev/peps/pep-0492/).
More resources in this [answer](https://stackoverflow.com/a/41208685/2846140).
---
**EDIT:** Addressing your comment about goroutines:
The closest equivalent to a goroutine in asyncio is actually not a coroutine but a task (see the difference in the [documentation](https://docs.python.org/3/library/asyncio-task.html)). In python, a coroutine (or a generator) knows nothing about the concepts of event loop or I/O. It simply is a function that can stop its execution using `yield` while keeping its current state, so it can be restored later on. The `yield from` syntax allows for chaining them in a transparent way.
Now, within an asyncio task, the coroutine at the very bottom of the chain always ends up yielding a [future](https://docs.python.org/3.4/library/asyncio-task.html#asyncio.Future). This future then bubbles up to the event loop, and gets integrated into the inner machinery. When the future is set to done by some other inner callback, the event loop can restore the task by sending the future back into the coroutine chain.
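The chaining described above can be sketched in a few lines — a toy illustration of the mechanics (plain generators linked with `yield from`, and a sentinel "future" bubbling up to a hand-written driver), not asyncio's real implementation:

```python
# Toy "future" that the innermost coroutine yields up the chain.
class Future:
    def __init__(self):
        self.result = None

def inner():
    fut = Future()
    result = yield fut          # bubbles up through every `yield from`
    return result

def outer():
    value = yield from inner()  # transparent delegation
    return 'outer got %r' % value

def run(coro):
    fut = coro.send(None)       # advance to the innermost yield
    fut.result = 42             # pretend some I/O completed
    try:
        coro.send(fut.result)   # resume the whole chain at once
    except StopIteration as stop:
        return stop.value

print(run(outer()))  # outer got 42
```

The driver plays the role of the event loop: it is the only place that sees the future, and it decides when to `send` the result back down the chain.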
---
**EDIT:** Addressing some of the questions in your post:
>
> How does I/O actually happen in this scenario? In a separate thread? Is the whole interpreter suspended and I/O happens outside the interpreter?
>
>
>
No, nothing happens in a thread. I/O is always managed by the event loop, mostly through file descriptors. However the registration of those file descriptors is usually hidden by high-level coroutines, making the dirty work for you.
>
> What exactly is meant by I/O? If my python procedure called C open() procedure, and it in turn sent interrupt to kernel, relinquishing control to it, how does Python interpreter know about this and is able to continue running some other code, while kernel code does the actual I/O and until it wakes up the Python procedure which sent the interrupt originally? How can Python interpreter in principle, be aware of this happening?
>
>
>
An I/O is any blocking call. In asyncio, all the I/O operations should go through the event loop, because as you said, the event loop has no way to be aware that a blocking call is being performed in some synchronous code. That means you're not supposed to use a synchronous `open` within the context of a coroutine. Instead, use a dedicated library such as [aiofiles](https://github.com/Tinche/aiofiles) which provides an asynchronous version of `open`. | Picture an airport control tower with many planes waiting to land on the same runway. The control tower can be seen as the event loop, and the runway as the thread. Each plane is a separate function waiting to execute. In reality, only one plane can land on the runway at a time. What asyncio basically does is allow many planes to land "simultaneously" on the same runway by using the event loop to suspend functions and allow other functions to run. When you use the await syntax, it basically means that the plane (function) can be suspended to allow other functions to proceed. | 48 |
How do I obtain an absolute path via a relative path for 'other' project files (not the Python files in the project), like in Java?
```
D:\Workspaces\ABCPythonProject\
|- src
| |-- com/abc
| |-- conf.py
| |-- abcd.py
| |-- defg.py
| |-- installation.rst
|- resources
| |-- a.txt
| |-- b.txt
| |-- c.jpg
```
For example, I would like to access 'a.txt' or 'b.txt' from Python code like 'abcd.py' in a simple manner, with a relative path like 'resources/a.txt', just like in a Java project.
In short, I want to get '**D:\Workspaces\ABCPythonProject\resources\a.txt**' by '**resources\a.txt**', which is extremely easy to do in Java, but is seemingly extremely difficult to achieve in Python.
(If I use the built-in Python methods like `os.path.join(os.path.dirname(__file__), 'resources/a.txt')`, `os.path.dirname('resources/a.txt')`, `os.path.abspath('resources/a.txt')`, ..., etc., the result is always "**D:\Workspaces\ABCPythonProject\com\abc\resources\a.txt**", a non-existent file path.)
How to achieve this? | 2016/04/13 | [
"https://Stackoverflow.com/questions/36590875",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1762932/"
] | For images you'll have to use:
```
<img src="url">
``` | It should be in following way,
```
foreach ($pdo->query($sql) as $row) {
echo '<tr>';
echo '<td>'. $row['u_id'] . '</td>';
echo '<td>'. $row['u_role'] . '</td>';
echo '<td>'. $row['u_name'] . '</td>';
echo '<td>'. $row['u_passw'] . '</td>';
echo '<td>'. $row['u_init'] . '</td>';
echo '<td>'. $row['c_name'] . '</td>';
echo '<td>'. $row['u_mail'] . '</td>';
echo '<td>'.'<img src="'. $row['u_pic'] . '" width=45 height=45></img>'.'</td>';
}
``` | 58 |
I want to filter the moment of the day using only hours and minutes.
For example, a function that returns true if now is between 9:15 and 11:20 of the day.
I tried with datetime, but handling the minutes is a little bit complicated.
```
#!/usr/bin/python
import datetime
n = datetime.datetime.now()
sta = datetime.time(19,18)
sto = datetime.time(20,19)
if sta.hour <= n.hour and n.hour <= sto.hour:
if sta.minute <= n.minute and sto.minute <= n.minute:
print str(n.hour) + ":" + str(n.minute)
```
What is the best way?
Regards | 2016/03/25 | [
"https://Stackoverflow.com/questions/36215958",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/341022/"
] | You can use tuple comparisons to do any subinterval comparisons pretty easily:
```
>>> def f(dt):
... return (9, 15) <= (dt.hour, dt.minute) < (11, 21)
...
>>> d = datetime.datetime.now()
>>> str(d)
'2016-03-25 09:50:51.782718'
>>> f(d)
True
>>> f(d + datetime.timedelta(hours=2))
False
```
This accepts any datetime that has time between 9:15:00.000000 and 11:20:59.999999.
---
The above method also works if you need to check for example 5 first minutes of each hour; but for the hours of day, it might be simpler to use `.time()` to get the time part of a datetime, then compare this to the limits. The following accepts any time between 9:15:00.000000 and 11:20:00.000000 (inclusive):
```
>>> def f(dt):
... return datetime.time(9, 15) <= dt.time() <= datetime.time(11, 20)
``` | You'll need to use the combine class method:
```
import datetime
def between():
now = datetime.datetime.now()
start = datetime.datetime.combine(now.date(), datetime.time(9, 15))
end = datetime.datetime.combine(now.date(), datetime.time(11, 20))
return start <= now < end
``` | 61 |
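A quick way to sanity-check the logic in the answers above is to parameterize the comparison by the instant being tested — a sketch along the same lines:

```python
import datetime

# Same logic as the combine-based answer, but taking `now` as an
# argument so it can be exercised against fixed instants.
def between_at(now):
    start = datetime.datetime.combine(now.date(), datetime.time(9, 15))
    end = datetime.datetime.combine(now.date(), datetime.time(11, 20))
    return start <= now < end

print(between_at(datetime.datetime(2016, 3, 25, 10, 0)))  # True
print(between_at(datetime.datetime(2016, 3, 25, 12, 0)))  # False
```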
5,965,655 | I'm trying to build a web interface for some python scripts. The thing is I have to use PHP (and not CGI) and some of the scripts I execute take quite some time to finish: 5-10 minutes. Is it possible for PHP to communicate with the scripts and display some sort of progress status? This should allow the user to use the webpage as the task runs and display some status in the meantime or just a message when it's done.
Currently I am using exec(), and on completion I process the output. The server is running on a Windows machine, so pcntl\_fork will not work.
**LATER EDIT**:
Using another php script to feed the main page information using ajax doesn't seem to work because the server kills it (it reaches max execution time, and I don't really want to increase this unless necessary)
I was thinking about socket-based communication, but I don't see how this is useful in my case (some hints, maybe?).
Thank you | 2011/05/11 | [
"https://Stackoverflow.com/questions/5965655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/748676/"
] | You want *inter-process communication*. Sockets are the first thing that comes to mind; you'd need to set up a socket to *listen* for a connection (on the same machine) in PHP and set up a socket to *connect* to the listening socket in Python and *send* it its status.
Have a look at [this socket programming overview](http://docs.python.org/howto/sockets.html) from the Python documentation and [the Python `socket` module's documentation (especially the examples at the end)](http://docs.python.org/library/socket.html). I'm sure PHP has similar resources.
Once you've got a more specific idea of what you want to build and need help, feel free to ask a *new* question on StackOverflow (if it isn't already answered). | I think you would have to use a meta refresh, and maybe have the Python script write the status to a file and then have the PHP page read from it.
You could use AJAX as well to make it more dynamic.
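A minimal sketch of the status-file approach on the Python side (the file name and JSON format are illustrative assumptions; the PHP page would poll and read the same file on each refresh or AJAX call):

```python
import json
import time

# Hypothetical long-running job: it periodically records its progress
# to a small JSON file that another process (e.g. a PHP page) can read.
def long_task(status_path="status.json", steps=5):
    for i in range(steps):
        time.sleep(0.01)        # stand-in for one chunk of real work
        with open(status_path, "w") as f:
            json.dump({"done": i + 1, "total": steps}, f)

long_task()
print(json.load(open("status.json")))  # {'done': 5, 'total': 5}
```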
Also, probably shouldn't use exec()...that opens up a world of vulnerabilities. | 62 |
31,480,921 | I can't seem to get the interactive tooltips powered by mpld3 to work with the fantastic lmplot-like scatter plots from seaborn.
I'd love any pointer on how to get this to work! Thanks!
Example Code:
```
# I'm running this in an ipython notebook.
%matplotlib inline
import matplotlib.pyplot as plt, mpld3
mpld3.enable_notebook()
import seaborn as sns
import numpy as np
import pandas as pd
N=10
data = pd.DataFrame({"x": np.random.randn(N),
"y": np.random.randn(N),
"size": np.random.randint(20,200, size=N),
"label": np.arange(N)
})
scatter_sns = sns.lmplot("x", "y",
scatter_kws={"s": data["size"]},
robust=False, # slow if true
data=data, size=8)
fig = plt.gcf()
tooltip = mpld3.plugins.PointLabelTooltip(fig, labels=list(data.label))
mpld3.plugins.connect(fig, tooltip)
mpld3.display(fig)
```
I'm getting the seaborn plot along with the following error:
```
Javascript error adding output!
TypeError: obj.elements is not a function
See your browser Javascript console for more details.
```
The console shows:
```
TypeError: obj.elements is not a function
at mpld3_TooltipPlugin.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1161:9)
at mpld3_Figure.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1400:23)
at Object.mpld3.draw_figure (https://mpld3.github.io/js/mpld3.v0.2.js:18:9)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:14:14)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:15:5)
at eval (native)
at Function.x.extend.globalEval (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231)
at x.fn.extend.domManip (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:21253)
at x.fn.extend.append (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:18822)
at OutputArea._safe_append (https://mbcomp1:9999/static/notebook/js/outputarea.js:336:26)
outputarea.js:319 Javascript error adding output! TypeError: obj.elements is not a function
at mpld3_TooltipPlugin.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1161:9)
at mpld3_Figure.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1400:23)
at Object.mpld3.draw_figure (https://mpld3.github.io/js/mpld3.v0.2.js:18:9)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:14:14)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:15:5)
at eval (native)
at Function.x.extend.globalEval (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231)
at x.fn.extend.domManip (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:21253)
at x.fn.extend.append (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:18822)
at OutputArea._safe_append (https://mbcomp1:9999/static/notebook/js/outputarea.js:336:26)
outputarea.js:338 TypeError: obj.elements is not a function
at mpld3_TooltipPlugin.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1161:9)
at mpld3_Figure.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1400:23)
at Object.mpld3.draw_figure (https://mpld3.github.io/js/mpld3.v0.2.js:18:9)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:14:14)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:15:5)
at eval (native)
at Function.x.extend.globalEval (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231)
at x.fn.extend.domManip (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:21253)
at x.fn.extend.append (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:18822)
at OutputArea._safe_append (https://mbcomp1:9999/static/notebook/js/outputarea.js:336:26)
outputarea.js:319 Javascript error adding output! TypeError: obj.elements is not a function
at mpld3_TooltipPlugin.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1161:9)
at mpld3_Figure.draw (https://mpld3.github.io/js/mpld3.v0.2.js:1400:23)
at Object.mpld3.draw_figure (https://mpld3.github.io/js/mpld3.v0.2.js:18:9)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:14:14)
at eval (eval at <anonymous> (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231), <anonymous>:15:5)
at eval (native)
at Function.x.extend.globalEval (https://mbcomp1:9999/static/components/jquery/jquery.min.js:4:4231)
at x.fn.extend.domManip (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:21253)
at x.fn.extend.append (https://mbcomp1:9999/static/components/jquery/jquery.min.js:5:18822)
at OutputArea._safe_append (https://mbcomp1:9999/static/notebook/js/outputarea.js:336:26)
``` | 2015/07/17 | [
"https://Stackoverflow.com/questions/31480921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1270151/"
] | I don't think that there is an easy way to do this currently. I can get some of the tooltips to show by replacing your `tooltip` constructor with the following:
```
ax = plt.gca()
pts = ax.get_children()[3]
tooltip = mpld3.plugins.PointLabelTooltip(pts, labels=list(data.label))
```
This only works for the points outside of the uncertainty interval, though. I think it would be possible to extend `seaborn` to make these points highest in the `zorder` and store them in the instance somewhere, so that you don't need to pull them out of the axis children list. Perhaps worth a feature request. | Your code works for me on `ipython` (no notebook) when saving the figure to file with `mpld3.save_html(fig,"./out.html")`. May be an issue with `ipython` `notebook`/`mpld3` compatibility or `mpld3.display` (which causes an error for me, although I think this is related to an old version of matplotlib on my computer).
The full code which worked for me is,
```
import numpy as np
import matplotlib.pyplot as plt, mpld3
import seaborn as sns
import pandas as pd
N=10
data = pd.DataFrame({"x": np.random.randn(N),
"y": np.random.randn(N),
"size": np.random.randint(20,200, size=N),
"label": np.arange(N)
})
scatter_sns = sns.lmplot("x", "y",
scatter_kws={"s": data["size"]},
robust=False, # slow if true
data=data, size=8)
fig = plt.gcf()
tooltip = mpld3.plugins.PointLabelTooltip(fig, labels=list(data.label))
mpld3.plugins.connect(fig, tooltip)
mpld3.save_html(fig,"./out.html")
``` | 68 |
28,180,252 | I am trying to create a quiver plot from a NetCDF file in Python using this code:
```
import matplotlib.pyplot as plt
import numpy as np
import netCDF4
ncfile = netCDF4.Dataset('30JUNE2012_0300UTC.cdf', 'r')
dbZ = ncfile.variables['MAXDBZF']
data = dbZ[0,0]
U = ncfile.variables['UNEW'][:]
V = ncfile.variables['VNEW'][:]
x, y= np.arange(0,2*np.pi,.2), np.arange(0,2*np.pi,.2)
X,Y = np.meshgrid(x,y)
plt.quiver(X,Y,U,V)
plt.show()
```
and I am getting the following errors
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-109-b449c540a7ea> in <module>()
11 X,Y = np.meshgrid(x,y)
12
---> 13 plt.quiver(X,Y,U,V)
14
15 plt.show()
/Users/felishalawrence/anaconda/lib/python2.7/site-packages/matplotlib/pyplot.pyc in quiver(*args, **kw)
3152 ax.hold(hold)
3153 try:
-> 3154 ret = ax.quiver(*args, **kw)
3155 draw_if_interactive()
3156 finally:
/Users/felishalawrence/anaconda/lib/python2.7/site-packages/matplotlib/axes/_axes.pyc in quiver(self, *args, **kw)
4162 if not self._hold:
4163 self.cla()
-> 4164 q = mquiver.Quiver(self, *args, **kw)
4165
4166 self.add_collection(q, autolim=True)
/Users/felishalawrence/anaconda/lib/python2.7/site-packages/matplotlib/quiver.pyc in __init__(self, ax, *args, **kw)
415 """
416 self.ax = ax
--> 417 X, Y, U, V, C = _parse_args(*args)
418 self.X = X
419 self.Y = Y
/Users/felishalawrence/anaconda/lib/python2.7/site-packages/matplotlib/quiver.pyc in _parse_args(*args)
377 nr, nc = 1, U.shape[0]
378 else:
--> 379 nr, nc = U.shape
380 if len(args) == 2: # remaining after removing U,V,C
381 X, Y = [np.array(a).ravel() for a in args]
ValueError: too many values to unpack
```
What does this error mean? | 2015/01/27 | [
"https://Stackoverflow.com/questions/28180252",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4500459/"
] | `ValueError: too many values to unpack` is raised because line `379` of the program tries to unpack `U.shape` into just two variables (`nr`, `nc`) when `U.shape` holds more values than there are variables to receive them.
Look above at line `377` - that line correctly assigns two values (`1` and `U.shape[0]`) to `nr` and `nc`, but line `379` has only `U.shape` to unpack into two variables. If there are more than 2 values in `U.shape` you will get this error. The code assumes `U.shape` is a tuple of exactly two values, so it works as-is only when the number of values matches the number of variables (in this case two). I would print out the value of `U.shape` and check that it holds the expected number of values. If your `U.shape` can return more than two values then your code will need to adapt to this. For example, if you find that `U.shape` is a tuple of 3 values then you will need 3 variables to hold those values, like so:
`nr, nc, blah = U.shape`
Consider the following:
```
a,b,c = ["a","b","c"] #works
print a
print b
print c
a, b = ["a","b","c"] #will result in error because 3 values are trying to be assigned to only 2 variables
```
The results from the above code:
```
a
b
c
Traceback (most recent call last):
File "None", line 7, in <module>
ValueError: too many values to unpack
```
So you see it's just a matter of having enough values to assign to all of the variables that are requesting a value. | Probably more useful to solve future problems rather then author's but still:
The problem was likely that the netcdf file had a time dimension, therefore U and V where 3 dimensional arrays - you should choose the time slice or aggregate the data across the time dimension. | 71 |
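A runnable sketch of the fix described in the answers above — toy NumPy data stands in for the netCDF variable, since the original `.cdf` file isn't available:

```python
import numpy as np

# Stand-in for ncfile.variables['UNEW'][:] -- a (time, ny, nx) variable
U = np.random.rand(4, 10, 12)   # 3-D: quiver's "nr, nc = U.shape" fails on this

U_slice = U[0]                  # pick one time step   -> shape (10, 12)
U_mean = U.mean(axis=0)         # or average over time -> shape (10, 12)
print(U_slice.shape, U_mean.shape)
```

Either 2-D array can then be passed to `plt.quiver` together with a matching meshgrid.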
36,486,120 | I'm trying to centre and normalise a data set in python with the following code
```
mean = np.mean(train, axis=0)
std = np.std(train, axis=0)
norm_train = (train - mean) / std
```
The problem is that I get a division by zero error. Two of the values in the data set end up having a zero std. The data set is of shape (3750, 55). My stats skills are not so strong so I'm not sure how to overcome this. Any suggestions? | 2016/04/07 | [
"https://Stackoverflow.com/questions/36486120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/531296/"
] | Since the [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) is calculated by taking the sum of the *squared* deviations from the mean, a zero standard deviation is only possible when all the values of a variable are the same (all equal to the mean). In this case, those variables have no discriminative power, so they can be removed from the analysis. They cannot improve any classification, clustering or regression task. Many implementations will do it for you or throw an error about a matrix calculation. | You could just replace the 0 std with 1 for that feature. This would basically mean that the scaled value would be zero for all the data points for that feature. This makes sense as it implies that the feature values do not deviate even a bit from the mean (as the values are constant, the constant is the mean).
FYI- This is what sklearn does!
<https://github.com/scikit-learn/scikit-learn/blob/7389dbac82d362f296dc2746f10e43ffa1615660/sklearn/preprocessing/data.py#L70> | 72 |
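A small sketch of that replacement on toy data (the second column is constant, like the two offending features in the question):

```python
import numpy as np

train = np.array([[1.0, 7.0],
                  [2.0, 7.0],
                  [3.0, 7.0]])   # second feature is constant -> std == 0

mean = np.mean(train, axis=0)
std = np.std(train, axis=0)
std[std == 0] = 1.0              # avoid division by zero; constant features
                                 # then scale to exactly zero
norm_train = (train - mean) / std
print(norm_train[:, 1])          # [0. 0. 0.]
```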
3,950,368 | >
> **Possible Duplicate:**
>
> [What do I use for a max-heap implementation in Python?](https://stackoverflow.com/questions/2501457/what-do-i-use-for-a-max-heap-implementation-in-python)
>
>
>
Python has a min heap implemented in the heapq module. However, if one would want a max heap, would one have to build from scratch? | 2010/10/16 | [
"https://Stackoverflow.com/questions/3950368",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/475790/"
] | You could multiply your numbers by -1 and use the min heap. | No need to implement a max heap from scratch. You can easily employ a bit of math to turn your min heap into a max heap!
See [this](http://www.mail-archive.com/python-list@python.org/msg238926.html) and [this](http://code.activestate.com/recipes/502295/) - but really [this SO answer](https://stackoverflow.com/questions/2501457/what-do-i-use-for-a-max-heap-implementation-in-python). | 77 |
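A quick sketch of the sign-flip trick with the standard `heapq` module:

```python
import heapq

nums = [3, 1, 4, 1, 5, 9, 2, 6]
max_heap = [-n for n in nums]        # negate so the min-heap orders by max
heapq.heapify(max_heap)

largest = -heapq.heappop(max_heap)   # pop and negate back
second = -heapq.heappop(max_heap)
print(largest, second)               # 9 6
```

Negate on the way in and again on the way out; the min-heap invariant then always yields the maximum first.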
55,522,649 | I have installed numpy but when I import it, it doesn't work.
```
from numpy import *
arr=array([1,2,3,4])
print(arr)
```
Result:
```
C:\Users\YUVRAJ\PycharmProjects\mycode2\venv\Scripts\python.exe C:/Users/YUVRAJ/PycharmProjects/mycode2/numpy.py
Traceback (most recent call last):
File "C:/Users/YUVRAJ/PycharmProjects/mycode2/numpy.py", line 1, in <module>
from numpy import *
File "C:\Users\YUVRAJ\PycharmProjects\mycode2\numpy.py", line 2, in <module>
x=array([1,2,3,4])
NameError: name 'array' is not defined
Process finished with exit code 1
``` | 2019/04/04 | [
"https://Stackoverflow.com/questions/55522649",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11313285/"
] | The problem is that you named your script `numpy.py`, which conflicts with the numpy module that you need to use. Just rename your script to something else and it will be fine. | Instead of using `from numpy import *`
Try using this:
```
import numpy
from numpy import array
```
And then add your code:
```
arr=array([1,2,3,4])
print(arr)
```
---
**EDIT:** Even though this is the accepted answer, this may not work under all circumstances. If this doesn't work, see [adrtam's answer](https://stackoverflow.com/a/55522733/5721784). | 78 |
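The shadowing problem described above is easy to reproduce with any standard-library name — a sketch using a throwaway `json.py` in a temporary directory (so it runs anywhere) rather than `numpy.py`:

```python
import pathlib
import sys
import tempfile

workdir = pathlib.Path(tempfile.mkdtemp())
(workdir / "json.py").write_text("data = 'shadowed!'\n")

sys.modules.pop("json", None)       # forget any cached stdlib copy
sys.path.insert(0, str(workdir))    # a script's own directory sits first
                                    # on sys.path, just like this
import json                         # resolves to our json.py, not the stdlib

print(json.data)                    # 'shadowed!'
```

Renaming the offending file (and deleting any stale `.pyc` next to it) removes the shadow.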
24,703,432 | I am attempting to catch messages by topic by using the message\_callback\_add() function in [this library](https://pypi.python.org/pypi/paho-mqtt#usage-and-api). Below is my entire module that I am using to deal with my mqtt subscribe and publishing needs. I have been able to test that the publish works, but I can't seem to catch any incoming messages. There are no warnings/errors of any kind and the `print("position")` statements are working for 1 and 2 only.
```
import sys
import os
import time
import Things
import paho.mqtt.client as paho
global mqttclient;
global broker;
global port;
broker = "10.64.16.199";
port = 1883;
mypid = os.getpid()
client_uniq = "pubclient_"+str(mypid)
mqttclient = paho.Client(client_uniq, False) #nocleanstart
mqttclient.connect(broker, port, 60)
mqttclient.subscribe("Commands/#")
def Pump_callback(client, userdata, message):
#print("Received message '" + str(message.payload) + "' on topic '"
# + message.topic + "' with QoS " + str(message.qos))
print("position 3")
Things.set_waterPumpSpeed(int(message.payload))
def Valve_callback(client, userdata, message):
#print("Received message '" + str(message.payload) + "' on topic '"
# + message.topic + "' with QoS " + str(message.qos))
print("position 4")
Things.set_valvePosition(int(message.payload))
mqttclient.message_callback_add("Commands/PumpSpeed", Pump_callback)
mqttclient.message_callback_add("Commands/ValvePosition", Valve_callback)
print("position 1")
mqttclient.loop_start()
print("position 2")
def pub(topic, value):
mqttclient.publish(topic, value, 0, True)
``` | 2014/07/11 | [
"https://Stackoverflow.com/questions/24703432",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2851048/"
] | I called `loop_start` in the wrong place.
I moved the call to right after the connect statement and it now works.
Here is the snippet:
```
client_uniq = "pubclient_"+str(mypid)
mqttclient = paho.Client(client_uniq, False) #nocleanstart
mqttclient.connect(broker, port, 60)
mqttclient.loop_start()
mqttclient.subscribe("FM_WaterPump/Commands/#")
```
In the documentation on loop\_start it alludes to calling `loop_start()` before or after connect, though it should say "immediately before or after" to be clearer.
Snippet of the documentation:
>
> These functions implement a threaded interface to the network loop. Calling loop\_start() once, before or after connect\*(), runs a thread in the background to call loop() automatically. This frees up the main thread for other work that may be blocking. This call also handles reconnecting to the broker. Call loop\_stop() to stop the background thread.
>
>
> | `loop_start()` will return immediately, so your program will quit before it gets a chance to do anything.
You've also called `subscribe()` before `message_callback_add()` which doesn't make sense, although in this specific example it probably doesn't matter. | 79 |
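The pitfall described above (`loop_start()` returning immediately) can be reproduced with a plain `threading` sketch — `client.loop_start()` behaves like `t.start()` here, returning at once while the work happens on a daemon thread:

```python
import threading
import time

def network_loop():
    # stands in for the MQTT client's background network loop
    time.sleep(0.2)
    print("message handled")

t = threading.Thread(target=network_loop, daemon=True)
t.start()                 # returns immediately, like client.loop_start()
# Without keeping the main thread alive (a join, sleep, or work loop),
# the process exits before the background thread ever gets to run:
t.join()
print("main exiting")
```

With paho-mqtt the equivalent keep-alive is a blocking `loop_forever()` call, or simply keeping the main thread busy after `loop_start()`.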
23,190,348 | Has the alsaaudio library been ported to python3? i have this working on python 2.7 but not on python 3.
is there another library for python 3 if the above cannot be used? | 2014/04/21 | [
"https://Stackoverflow.com/questions/23190348",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/612242/"
] | I have compiled alsaaudio for python3 manually.
You can install it by following the steps given below.
1. Make sure that **gcc, python3-dev, libasound2-dev** packages are installed in your machine (install them using synaptic if you are using Ubuntu).
2. Download and extract the following package
<http://sourceforge.net/projects/pyalsaaudio/files/pyalsaaudio-0.7.tar.gz/download>
3. Go to the extracted folder and execute the following commands (Execute the commands as root or use sudo)
```
python3 setup.py build
python3 setup.py install
```
HTH.. | It's now called pyalsaaudio.
For me `pip install pyalsaaudio` worked. | 80 |
66,929,254 | Is there a library for interpreting python code within a python program?
Sample usage might look like this..
```
code = """
def hello():
return 'hello'
hello()
"""
output = Interpreter.run(code)
print(output)
```
which then outputs
`hello` | 2021/04/03 | [
"https://Stackoverflow.com/questions/66929254",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12594122/"
] | Found this example on Grepper:
```
the_code = '''
a = 1
b = 2
return_me = a + b
'''
loc = {}
exec(the_code, globals(), loc)
return_workaround = loc['return_me']
print(return_workaround)
```
apparently you can pass global and local scope into `exec`. In your use case, you would just use a named variable instead of returning. | You can use the `exec` function. You can't get a return value back from the executed code directly. Instead, you can print it inside the executed code itself.
```
code = """
def hello():
print('hello')
hello()
"""
exec(code)
``` | 81 |
65,697,374 | So I am a beginner at python, and I was trying to install packages using pip. But any time I try to install I keep getting the error:
>
> ERROR: Could not install packages due to an EnvironmentError: [WinError 2] The system cannot find the file specified: 'c:\python38\Scripts\sqlformat.exe' -> 'c:\python38\Scripts\sqlformat.exe.deleteme'
>
>
>
How do I fix this? | 2021/01/13 | [
"https://Stackoverflow.com/questions/65697374",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14996295/"
] | Try running the command line as administrator. The issue looks like it's about permissions. To run as administrator, type cmd in the search bar and right-click on the Command Prompt icon. There you will find an option to run as administrator. Click that option and then try to install the package. | Looks like a permissions error. You might try starting the installation with admin rights or install the package only for your current user with:
```
pip install --user package
``` | 82 |
59,662,028 | I am trying to retrieve app related information from Google Play store using selenium and BeautifulSoup. When I try to retrieve the information, I got webdriver exception error. I checked the chrome version and chrome driver version (both are compatible). Here is the weblink that is causing the issue, code to retrieve information, and error thrown by the code:
Link: <https://play.google.com/store/apps/details?id=com.tudasoft.android.BeMakeup&hl=en&showAllReviews=true>
Code:
```
driver = webdriver.Chrome('path')
driver.get('https://play.google.com/store/apps/details?id=com.tudasoft.android.BeMakeup&hl=en&showAllReviews=true')
soup = bs.BeautifulSoup(driver.page_source, 'html.parser')
```
I am getting error on third line. Here is the parts of the error message:
Start of the error message:
```
---------------------------------------------------------------------------
WebDriverException Traceback (most recent call last)
<ipython-input-280-4e8a1ef443f2> in <module>()
----> 1 soup = bs.BeautifulSoup(driver.page_source, 'html.parser')
~/anaconda3/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py in page_source(self)
676 driver.page_source
677 """
--> 678 return self.execute(Command.GET_PAGE_SOURCE)['value']
679
680 def close(self):
~/anaconda3/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py in execute(self, driver_command, params)
318 response = self.command_executor.execute(driver_command, params)
319 if response:
--> 320 self.error_handler.check_response(response)
321 response['value'] = self._unwrap_value(
322 response.get('value', None))
~/anaconda3/lib/python3.6/site-packages/selenium/webdriver/remote/errorhandler.py in check_response(self, response)
240 alert_text = value['alert'].get('text')
241 raise exception_class(message, screen, stacktrace, alert_text)
--> 242 raise exception_class(message, screen, stacktrace)
243
244 def _value_or_default(self, obj, key, default):
WebDriverException: Message: unknown error: bad inspector message:
```
End of the error message:
```
(Session info: chrome=79.0.3945.117)
```
Could anyone guide me how to fix the issue? | 2020/01/09 | [
"https://Stackoverflow.com/questions/59662028",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2293224/"
] | I think this is due to the chromedriver encoding problem.
See <https://bugs.chromium.org/p/chromium/issues/detail?id=723592#c9> for additional information about this bug.
Instead of selenium you can get page source using BeautifulSoup as follows.
```
import requests
from bs4 import BeautifulSoup
r = requests.get('https://play.google.com/store/apps/details?id=com.tudasoft.android.BeMakeup&hl=en&showAllReviews=true')
soup = BeautifulSoup(r.content, "lxml")
print(soup)
``` | try this
```
driver = webdriver.Chrome('path')
driver.get('https://play.google.com/store/apps/details?id=com.tudasoft.android.BeMakeup&hl=en&showAllReviews=true')
# retrieve data you want, for example
review_user_list = driver.find_elements_by_class_name("X43Kjb")
``` | 84 |
36,781,198 | I'm sending an integer from python using pySerial.
```
import serial
ser = serial.Serial('/dev/cu.usbmodem1421', 9600);
ser.write(b'5');
```
When I compile, the receiver LED on the Arduino blinks. However, I want to cross-check whether the integer is received by the Arduino. I cannot use Serial.println() because the port is busy. I cannot run the serial monitor first on the Arduino and then run the python script because the port is busy. How can I achieve this? | 2016/04/21 | [
"https://Stackoverflow.com/questions/36781198",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6237876/"
] | A simple way to do it using the standard library :
```
import java.util.Scanner;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.ExecutorService; // needed for the WORKERS field below
import java.util.concurrent.ThreadPoolExecutor;
import static java.util.concurrent.TimeUnit.MILLISECONDS;
public class Example {
private static final int POOL_SIZE = 5;
private static final ExecutorService WORKERS = new ThreadPoolExecutor(POOL_SIZE, POOL_SIZE, 1, MILLISECONDS, new LinkedBlockingDeque<>());
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
while (true) {
System.out.print("> ");
String cmd = sc.nextLine();
switch (cmd) {
case "process":
WORKERS.submit(newExpensiveTask());
break;
case "kill":
System.exit(0);
default:
System.err.println("Unrecognized command: " + cmd);
}
}
}
private static Runnable newExpensiveTask() {
return () -> {
try {
Thread.sleep(10000);
System.out.println("Done processing");
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
};
}
}
```
This code lets you run heavy tasks asynchronously while the user terminal remains available and reactive. | I would recommend reading up on specific tutorials, such as the Java Language Tutorial (available as a book - at least, it used to be - as well as on the Java website <https://docs.oracle.com/javase/tutorial/essential/concurrency/>)
However as others have cautioned, getting into threading is a challenge and requires good knowledge of the language quite apart from the aspects of multithreading and synchronization. I'd be tempted to recommend you read some of the other tutorials - working through IO and so on - first of all. | 87 |
34,685,486 | After installing my python project with `setup.py` and executing it in terminal I get the following error:
```
...
from ui.mainwindow import MainWindow
File "/usr/local/lib/python2.7/dist-packages/EpiPy-0.1-py2.7.egg/epipy/ui/mainwindow.py", line 9, in <module>
from model.sir import SIR
ImportError: No module named model.sir
```
...
We assume we have the following structure of our project `cookies`:
```
.
├── setup.py
└── src
├── a
│ ├── aa.py
│ └── __init__.py
├── b
│ ├── bb.py
│ └── __init__.py
├── __init__.py
└── main.py
```
File: `cookies/src/main.py`
```
from a import aa
def main():
print aa.get_aa()
```
File `cookies/src/a/aa.py`
```
from b import bb
def get_aa():
return bb.get_bb()
```
File: `cookies/src/b/bb.py`
```
def get_bb():
return 'bb'
```
File: `cookies/setup.py`
```
#!/usr/bin/env python
import os
import sys
try:
from setuptools import setup, find_packages
except ImportError:
raise ImportError("Install setup tools")
setup(
name = "cookies",
version = "0.1",
author = "sam",
description = ("test"),
license = "MIT",
keywords = "test",
url = "asd@ads.asd",
packages=find_packages(),
classifiers=[
"""\
Development Status :: 3 - Alpha
Operating System :: Unix
"""
],
entry_points = {'console_scripts': ['cookies = src.main:main',],},
)
```
If I install `cookies` as `root` with `$ python setup.py install` and execute `cookies` I get the following error: `ImportError: No module named b`. How can I solve the problem. | 2016/01/08 | [
"https://Stackoverflow.com/questions/34685486",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2609713/"
] | What I would do is to use absolute imports everywhere (from epipy import ...). That's what is recommanded in [PEP 328](https://docs.python.org/2.5/whatsnew/pep-328.html).
Your imports won't work anymore if the project is not installed. You can add the project directory to your PYTHONPATH, install the package, or, what I do when I'm in the middle of developing packages, [install with the 'editable' option](https://pip.pypa.io/en/stable/reference/pip_install/#editable-installs) : `pip install -e`
In editable mode, instead of installing the package code in your python distribution, a pointer to your project is created. That way it is importable, but the package uses the live code in development.
Example:
I am developing a package in /home/jbchouinard/mypackage. Inside my code, I use absolute imports, e.g. `from mypackage import subpackage`.
If I install with `pip install`, the package will be installed in my distribution, let's say in /usr/lib/python2.7/dist-packages. If I make further changes to the package, I have to upgrade or uninstall/reinstall the package. This can get tedious quickly.
If I install with `pip install -e`, a pointer (a .pth file) is created in /usr/lib/python2.7/dist-packages towards /home/jbchouinard/mypackage. I can `import mypackage` as if it was installed normally, but the code used is the code at /home/jbchouinard/mypackage; any change is reflected immediately. | I had a similar issue with one of my projects.
I've been able to solve my issue by adding this line at the start of my module (before all imports besides sys & os, which are required for this insert), so that it would include the parent folder and by that it will be able to see the parent folder (turns out it doesn't do that by default):
```
import sys
import os
sys.path.insert(1, os.path.join(sys.path[0], '..'))
# all other imports go here...
```
This way, your main.py will include the parent folder (epipy).
Give that a try, hope this helps :-) | 88 |
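What the `.pth` pointer from the editable-install answer boils down to can be sketched in a few lines — it effectively puts the project directory on `sys.path` (hypothetical package name, built in a temp directory):

```python
import pathlib
import sys
import tempfile

proj = pathlib.Path(tempfile.mkdtemp())          # pretend project root
(proj / "mypkg").mkdir()
(proj / "mypkg" / "__init__.py").write_text("VERSION = 'dev'\n")

sys.path.insert(0, str(proj))                    # what the .pth file does
import mypkg

print(mypkg.VERSION)                             # 'dev' -- live code, no copy
```

Because only a pointer is installed, edits under the project root are picked up on the next import with no reinstall.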
42,968,543 | I have a file displayed as follows. I want to delete the lines start from `>rev_` until the next line with `>`, not delete the `>` line. I want a python code to realize it.
input file:
```
>name1
fgrsagrhshsjtdkj
jfsdljgagdahdrah
gsag
>rev_name1 # delete from here
jfdsfjdlsgrgagrehdsah
fsagasfd # until here
>name2
jfosajgreajljioesfg
fjsdsagjljljlj
>rev_name2 # delete from here
jflsajgljkop
ljljasffdsa # until here
>name3
.......
```
output file:
```
>name1
fgrsagrhshsjtdkj
jfsdljgagdahdrah
gsag
>name2
jfosajgreajljioesfg
fjsdsagjljljlj
>name3
.......
```
My code is as follows, but it does not work.
```
mark = {}
with open("human.fasta") as inf, open("human_norev.fasta",'w') as outf:
for line in inf:
if line[0:5] == '>rev_':
mark[line] = 1
elif line[0] == '>':
mark[line] = 0
if mark[line] == 0:
outf.write(line)
``` | 2017/03/23 | [
"https://Stackoverflow.com/questions/42968543",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4672728/"
] | I'd recommend at least trying to come up with a solution on your own before asking us on here. Ask yourself questions regarding what different ways I can work towards a solution, will parsing character by character/line by line/regex be sufficient for this problem.
But in this case since determining when to start and stop removing lines was always at the start of the line it made sense to just go line by line and check the starting few characters.
```
i = """>name1
fgrsagrhshsjtdkj
jfsdljgagdahdrah
gsag
>rev_name1 # delete from here
jfdsfjdlsgrgagrehdsah
fsagasfd # until here
>name2
jfosajgreajljioesfg
fjsdsagjljljlj
>rev_name2 # delete from here"""
final_string = ""
keep_line = True
for line in i.split('\n'):
if line[0:5] == ">rev_":
keep_line = False
elif line[0] == '>':
keep_line = True
if keep_line:
final_string += line + '\n'
print(final_string)
```
If you wanted the lines to just go directly to console you could remove the print at the end and replace `final_string += line + '\n'` with a `print(line)`. | The code also can be as follows:
```
with open("human.fasta") as inf, open("human_norev.fasta",'w') as outf:
del_start = False
for line in inf:
if line.startswith('>rev_'):
del_start = True
elif line.startswith('>'):
del_start = False
if not del_start:
outf.write(line)
``` | 89 |
49,396,554 | Okay, so I have the following issue. I have a Mac, so the default Python 2.7 is installed for the OS's use. However, I also have Python 3.6 installed, and I want to install a package using Pip that is only compatible with python version 3. How can I install a package with Python 3 and not 2? | 2018/03/21 | [
"https://Stackoverflow.com/questions/49396554",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9525828/"
] | To download use
```
pip3 install package
```
and to run the file
```
python3 file.py
``` | Why do you ask such a thing here?
<https://docs.python.org/3/using/mac.html>
>
> 4.3. Installing Additional Python Packages
> There are several methods to install additional Python packages:
>
>
> Packages can be installed via the standard Python distutils mode (python setup.py install).
> Many packages can also be installed via the setuptools extension or pip wrapper, see <https://pip.pypa.io/>.
>
>
>
<https://pip.pypa.io/en/stable/user_guide/#installing-packages>
>
> Installing Packages
> pip supports installing from PyPI, version control, local projects, and directly from distribution files.
>
>
> The most common scenario is to install from PyPI using Requirement Specifiers
>
>
> `$ pip install SomePackage` # latest version
> `$ pip install SomePackage==1.0.4` # specific version
> `$ pip install 'SomePackage>=1.0.4'` # minimum version
> For more information and examples, see the pip install reference.
>
>
> | 91 |
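One stdlib check that helps when juggling a system Python 2.7 and a separate Python 3.x is printing which interpreter a script actually runs under; invoking pip as `python3 -m pip install package` then guarantees the install targets that same interpreter:

```python
import sys

# sys.executable is the interpreter that `python3 -m pip` would install for.
print(sys.version_info.major, sys.executable)
```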
57,754,497 | So I think tensorflow.keras and the independent keras packages are in conflict and I can't load my model, which I have made with transfer learning.
Import in the CNN ipynb:
```
!pip install tensorflow-gpu==2.0.0b1
import tensorflow as tf
from tensorflow import keras
print(tf.__version__)
```
Loading this pretrained model
```
base_model = keras.applications.xception.Xception(weights="imagenet",
include_top=False)
avg = keras.layers.GlobalAveragePooling2D()(base_model.output)
output = keras.layers.Dense(n_classes, activation="softmax")(avg)
model = keras.models.Model(inputs=base_model.input, outputs=output)
```
Saving with:
```
model.save('Leavesnet Model 2.h5')
```
Then in the new ipynb for the already trained model (the imports are the same as in the CNN ipynb:
```
from keras.models import load_model
model =load_model('Leavesnet Model.h5')
```
I get the error:
```
AttributeError Traceback (most recent call last)
<ipython-input-4-77ca5a1f5f24> in <module>()
2 from keras.models import load_model
3
----> 4 model =load_model('Leavesnet Model.h5')
13 frames
/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py in placeholder(shape, ndim, dtype, sparse, name)
539 x = tf.sparse_placeholder(dtype, shape=shape, name=name)
540 else:
--> 541 x = tf.placeholder(dtype, shape=shape, name=name)
542 x._keras_shape = shape
543 x._uses_learning_phase = False
AttributeError: module 'tensorflow' has no attribute 'placeholder'
```
I think there might be a conflict between tf.keras and the independent keras, can someone help me out? | 2019/09/02 | [
"https://Stackoverflow.com/questions/57754497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10780811/"
] | Yes, there is a conflict between `tf.keras` and `keras` packages, you trained the model using `tf.keras` but then you are loading it with the `keras` package. That is not supported, you should use only one version of this package.
The specific problem is that you are using TensorFlow 2.0, but the standalone `keras` package does not support TensorFlow 2.0 yet. | Try to replace
```
from keras.models import load_model
model =load_model('Leavesnet Model.h5')
```
with
`model = tf.keras.models.load_model(model_path)`
It works for me, and I am using:
tensorflow version: 2.0.0
keras version: 2.3.1
You can check the following:
<https://www.tensorflow.org/api_docs/python/tf/keras/models/load_model?version=stable> | 93 |
66,196,791 | So take a triangle formatted as a nested list.
e.g.
```
t = [[5],[3, 6],[8, 14, 7],[4, 9, 2, 0],[9, 11, 5, 2, 9],[1, 3, 8, 5, 3, 2]]
```
and define a path to be the sum of elements from each row of the triangle,
moving 1 to the left or right as you go down rows. Or in python
the second index either stays the same or we add 1 to it.
```
a_path = [t[0][0],[t[1][1]],t[2][1],t[3][1],t[4][2],t[5][3]] = [5, 6, 14, 9, 5,5] is valid
not_a_path = [t[0][0],[t[1][0]],t[2][2],t[3][1],t[4][0],t[5][4]] = [5, 3, 7, 9, 9, 3] is not valid
```
For a triangle as small as this example this can obviously be done via brute force.
I wrote a function like that, for a 20 row triangle it takes about 1 minuite.
I need a function that can do this for a 100 row triangle.
I found this code on <https://rosettacode.org/wiki/Maximum_triangle_path_sum#zkl> and it agrees with all the results my terrible function outputs for small triangles I've tried, and using %time in the console it can do the 100 line triangle in 0 ns so relatively quick.
```
def maxPathSum(rows):
return reduce(
lambda xs, ys: [
a + max(b, c) for (a, b, c) in zip(ys, xs, xs[1:])
],
reversed(rows[:-1]), rows[-1]
)
```
So I started taking bits of this, and using print statements and the console to work out what it was doing. I get that `reversed(rows[:-1]), rows[-1]` is reversing the triangle so that we can iterate from all possible final values on the last row through the sums of their possible paths to get to that value, and that as a,b,c iterate: a is a number from the bottom row, b is the second from bottom row, c is the third from bottom row. And as they iterate I think `a + max(b,c)` seems to sum a with the greatest number on b or c, but when I try to find the max of either two lists or a nested list in the console the list returned seems completely arbitrary.
```
ys = t[-1]
xs = list(reversed(t[:-1]))
for (a, b, c) in zip(ys, xs, xs[1:]):
print(b)
print(c)
print(max(b,c))
print("")
```
prints
```
[9, 11, 5, 2, 9]
[4, 9, 2, 0]
[9, 11, 5, 2, 9]
[4, 9, 2, 0]
[8, 14, 7]
[8, 14, 7]
[8, 14, 7]
[3, 6]
[8, 14, 7]
[3, 6]
[5]
[5]
```
If max(b,c) returned the list containing max(max(b),max(c)) then b = [3, 6], c = [5] would return b, so not that. If max(b,c) returned the list with the greatest sum, max(sum(b),sum(c)), then the same example contradicts it. It doesn't return the list containg minimum value or the one with the greatest mean, so my only guess is that the fact that I set `xs = list(reversed(t[:-1]))` is the problem and that it works fine if its an iterator inside the lambda function but not in console.
Also trying to find `a + max (b,c)` gives me this error, which makes sense.
```
TypeError: unsupported operand type(s) for +: 'int' and 'list'
```
My best guess is again that the different definition of xs as a list is the problem. If true I would like to know how this all works in the context of being iterators in the lambda function. I think I get what reduce() and zip() are doing, so mostly just the lambda function is what's confusing me.
Thanks in advance for any help | 2021/02/14 | [
"https://Stackoverflow.com/questions/66196791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15208320/"
] | We can simplify the expression a bit by including all the rows in the second argument to reduce - there's no reason to pass the last row as third parameter (the starting value) of `reduce`.
Then, it really helps to give your variables meaningful names, which the original code badly fails to do.
So, this becomes:
```
from functools import reduce
def maxPathSum(rows):
return reduce(
lambda sums, upper_row: [cell + max(sum_left, sum_right)
for (cell, sum_left, sum_right)
in zip(upper_row, sums, sums[1:])],
reversed(rows)
)
```
On the first iteration, `sums` will be the last row, and `upper_row` the one over it.
The lambda will calculate the best possible sums by adding each value of the upper row with the largest value of `sums` to its left or right.
It zips the upper row with the sums (the last sum won't be used, as there is one too many), and with the sums shifted by one value. So, zip will provide us with a triplet (value from upper row (`cell`), sum underneath to its left (`sum_left`), sum underneath to its right (`sum_right`)). The best possible sum at this point is our current cell + the largest of these sums.
The lambda returns this new row of sums, which will be used as the first parameter of reduce (`sums`) on the next iteration, while `upper_row` becomes the next row in `reversed(rows)`.
In the end, `reduce` returns the last row of sums, which contains only one value, our best possible total:
```
[53]
``` | you can spell out the lambda function so it can print. does this help you understand?
```
t = [[5],[3, 6],[8, 14, 7],[4, 9, 2, 0],[9, 11, 5, 2, 9],[1, 3, 8, 5, 3, 2]]
def g( xs, ys):
ans=[a + max(b, c) for (a, b, c) in zip(ys, xs, xs[1:])]
print(ans)
return ans
def maxPathSum(rows):
return reduce(
g,
reversed(rows[:-1]), rows[-1]
)
maxPathSum(t)
``` | 96 |
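For anyone who prefers the same bottom-up idea spelled out as a plain loop (no `reduce`, no lambda), here is a minimal sketch; it agrees with the answers above and gives 53 for the example triangle:

```
def max_path_sum(rows):
    # Start from the bottom row and fold upwards: sums[i] holds the best
    # total achievable from the current row's i-th cell down to the base.
    sums = list(rows[-1])
    for row in reversed(rows[:-1]):
        sums = [cell + max(left, right)
                for cell, left, right in zip(row, sums, sums[1:])]
    return sums[0]

t = [[5], [3, 6], [8, 14, 7], [4, 9, 2, 0],
     [9, 11, 5, 2, 9], [1, 3, 8, 5, 3, 2]]
print(max_path_sum(t))  # -> 53
```

Each pass shrinks `sums` by one element, so after the last pass only the best overall total remains.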
2,291,176 | I need to arrange some kind of encrpytion for generating user specific links. Users will be clicking this link and at some other view, related link with the crypted string will be decrypted and result will be returned.
For this, I need some kind of encryption function that consumes a number(or a string) that is the primary key of my selected item that is bound to the user account, also consuming some kind of seed and generating encryption code that will be decrypted at some other page.
so something like this
```
my_items_pk = 36 #primary key of an item
seed = "rsdjk324j23423j4j2" #some string for crypting
encrypted_string = encrypt(my_items_pk,seed)
#generates some crypted string such as "dsaj2j213jasas452k41k"
and at another page:
decrypt_input = decrypt(encrypted_string, seed)
print decrypt_input
#gives 36
```
I want my "seed" to be some kind of primary variable (not some class) for this purpose (ie some number or string).
How can I achieve this under python and django ? | 2010/02/18 | [
"https://Stackoverflow.com/questions/2291176",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/151937/"
] | There are no encryption algorithms, per se, built in to Python. However, you might want to look at the [Python Cryptography Toolkit](http://www.dlitz.net/software/pycrypto/) (PyCrypt). I've only tinkered with it, but it's referenced in Python's documentation on [cryptographic services](http://docs.python.org/library/crypto.html). Here's an example of how you could encrypt a string with AES using PyCrypt:
```
from Crypto.Cipher import AES
from urllib import quote
# Note that for AES the key length must be either 16, 24, or 32 bytes
encryption_obj = AES.new('abcdefghijklmnop')
plain = "Testing"
# The plaintext must be a multiple of 16 bytes (for AES), so here we pad it
# with spaces if necessary.
mismatch = len(plain) % 16
if mismatch != 0:
padding = (16 - mismatch) * ' '
plain += padding
ciph = encryption_obj.encrypt(plain)
# Finally, to make the encrypted string safe to use in a URL we quote it
quoted_ciph = quote(ciph)
```
You would then make this part of your URL, perhaps as part of a GET request.
To decrypt, just reverse the process; assuming that `encryption_obj` is created as above, and that you've retrieved the relevant part of the URL, this would do it:
```
from urllib import unquote
# We've already created encryption_object as shown above
ciph = unquote(quoted_ciph)
plain = encryption_obj.decrypt(ciph)
```
You also might consider a different approach: one simple method would be to hash the primary key (with a salt, if you wish) and store the hash and pk in your database. Give the user the hash as part of their link, and when they return and present the hash, look up the corresponding pk and return the appropriate object. (If you want to go this route, check out the built-in library [hashlib](http://docs.python.org/library/hashlib.html#module-hashlib).)
As an example, you'd have something like this defined in models.py:
```
class Pk_lookup(models.Model):
# since we're using sha256, set the max_length of this field to 32
hashed_pk = models.CharField(primary_key=True, max_length=32)
key = models.IntegerField()
```
And you'd generate the hash in a view using something like the following:
```
import hashlib
from myapp.models import Pk_lookup  # 'myapp' is a placeholder for your app's module path
hash = hashlib.sha256()
hash.update(str(pk)) # pk has been defined previously
pk_digest = hash.digest()
lookup = Pk_lookup(hashed_pk=pk_digest,key=pk)
lookup.save()
```
Note that you'd have to quote this version as well; if you prefer, you can use `hexdigest()` instead of `digest` (you wouldn't have to quote the resulting string), but you'll have to adjust the length of the field to 64. | Django has features for this now. See <https://docs.djangoproject.com/en/dev/topics/signing/>
Quoting that page:
"Django provides both a low-level API for signing values and a high-level API for setting and reading signed cookies, one of the most common uses of signing in Web applications.
You may also find signing useful for the following:
* Generating “recover my account” URLs for sending to users who have lost their password.
* Ensuring data stored in hidden form fields has not been tampered with.
* Generating one-time secret URLs for allowing temporary access to a protected resource, for example a downloadable file that a user has paid for." | 97
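The signed-token idea behind both answers can be sketched with nothing but the standard library. This is only an illustration of the principle (HMAC the pk with a server-side secret, ship pk+signature in the URL, verify on the way back), not Django's actual implementation, and the secret shown is just the seed string from the question:

```
import hmac
import hashlib

SECRET = b"rsdjk324j23423j4j2"  # server-side secret, never sent to the client

def sign_pk(pk):
    # Token format: "<pk>:<hex signature>"
    sig = hmac.new(SECRET, str(pk).encode(), hashlib.sha256).hexdigest()
    return "{}:{}".format(pk, sig)

def verify_token(token):
    pk, _, sig = token.partition(":")
    expected = hmac.new(SECRET, pk.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("tampered token")
    return int(pk)

token = sign_pk(36)         # embed this in the user's link
print(verify_token(token))  # -> 36
```

Note this only protects integrity (a user cannot forge a link for someone else's pk); the pk itself stays visible, which is usually fine for this use case.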
11,632,154 | In python if I have two dictionaries, specifically Counter objects that look like so
```
c1 = Counter({'item1': 4, 'item2':2, 'item3': 5, 'item4': 3})
c2 = Counter({'item1': 6, 'item2':2, 'item3': 1, 'item5': 9})
```
Can I combine these dictionaries so that the results is a dictionary of lists, as follows:
```
c3 = {'item1': [4,6], 'item2':[2,2], 'item3': [5,1], 'item4': [3], 'item5': [9]}
```
where each value is a list of all the values of the preceding dictionaries from the appropriate key, and where there are no matching keys between the two original dictionaries, a new key is added that contains a one-element list.
"https://Stackoverflow.com/questions/11632154",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/801348/"
] | ```
from collections import Counter
c1 = Counter({'item1': 4, 'item2':2, 'item3': 5, 'item4': 3})
c2 = Counter({'item1': 6, 'item2':2, 'item3': 1, 'item5': 9})
c3 = {}
for c in (c1, c2):
for k,v in c.iteritems():
c3.setdefault(k, []).append(v)
```
`c3` is now: `{'item1': [4, 6], 'item2': [2, 2], 'item3': [5, 1], 'item4': [3], 'item5': [9]}` | Or with a list comprehension:
```
from collections import Counter
c1 = Counter({'item1': 4, 'item2':2, 'item3': 5, 'item4': 3})
c2 = Counter({'item1': 6, 'item2':2, 'item3': 1, 'item5': 9})
merged = {}
for k in set().union(c1, c2):
merged[k] = [d[k] for d in [c1, c2] if k in d]
>>> merged
{'item2': [2, 2], 'item3': [5, 1], 'item1': [4, 6], 'item4': [3], 'item5': [9]}
```
Explanation
-----------
1. Throw all keys that exist into an anonymous set. (It's a set => no duplicate keys)
2. For every key, do 3.
3. For every dictionary d in the list of dictionaries `[c1, c2]`
* Check whether the currently being processed key `k` exists
+ If true: include the expression `d[k]` in the resulting list
+ If not: proceed with next iteration
[Here](http://docs.python.org/tutorial/datastructures.html#list-comprehensions) is a detailed introduction to list comprehension with many examples. | 98 |
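A `collections.defaultdict(list)` spelling of the same merge is also common; it avoids both the `setdefault` call and the membership checks:

```
from collections import Counter, defaultdict

c1 = Counter({'item1': 4, 'item2': 2, 'item3': 5, 'item4': 3})
c2 = Counter({'item1': 6, 'item2': 2, 'item3': 1, 'item5': 9})

merged = defaultdict(list)
for counter in (c1, c2):
    for key, value in counter.items():
        merged[key].append(value)  # missing keys get a fresh [] automatically

print(dict(merged))
# {'item1': [4, 6], 'item2': [2, 2], 'item3': [5, 1], 'item4': [3], 'item5': [9]}
```

This also generalizes unchanged to any number of input counters.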
51,745,894 | I am new to using python, and am wanting to be able to install packages for python using pip. I am having trouble running pip on my windows computer. When typing in "pip --version" into command prompt I get:
```
ModuleNotFoundError: No module named 'pip._internal'; 'pip' is not a package
```
I have added the scripts folder to the PATH environment variable as shown on the picture in this link
[Environment variables photo](https://i.stack.imgur.com/lXiFz.png)
(Stack overflow does not allow embedded pictures if you are new)
This is the contents of my scripts directory where pip is present:
```
Directory of C:\Users\....\AppData\Local\Programs\Python\Python37-32\Scripts
[.] [..] easy_install-3.7.exe
easy_install.exe pip-script.py pip.exe
pip.exe.manifest pip3 pip3-script.py
pip3.7-script.py pip3.7.exe pip3.7.exe.manifest
pip3.exe pip3.exe.manifest wheel.exe
```
Any help on this would be appreciated | 2018/08/08 | [
"https://Stackoverflow.com/questions/51745894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6814024/"
] | Force a reinstall of pip:
```
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python3 get-pip.py --force-reinstall
```
For windows you may have to `choco install curl` or set PATH to where python3 is located | In cmd try using
`py -3.6 -m pip install pygame`
Replace 3.6 with your version of Python, and add -32 for the 32-bit version:
```
py -3.6-32 -m pip install pygame
```
Replace pygame with the module you want to install.
This works for most people using Python on Windows. Also reboot your PC after adding the system variable to PATH. | 101
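One more thing worth checking: the `'pip' is not a package` wording often means a stray file named `pip.py` somewhere on your path is shadowing the real package. A self-contained demonstration of how that shadowing works (using a throwaway module name rather than `pip` itself):

```
import importlib.util
import os
import sys
import tempfile

# The directory that comes first on sys.path wins module lookups, so a stray
# file called shadow_demo.py there hides any real package of the same name.
# The same thing happens with a stray pip.py next to your script.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "shadow_demo.py"), "w") as f:
    f.write("value = 'I am the shadow, not the real package'\n")

sys.path.insert(0, tmp)
spec = importlib.util.find_spec("shadow_demo")
print(spec.origin)  # resolves to the temp file, shadowing anything else

# The equivalent quick check for pip itself:
#   python -c "import pip; print(pip.__file__)"
```

If that check prints a path to a lone `pip.py` instead of a `pip/` package directory, rename or delete that file.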
62,713,607 | I deployed an Azure Functions App with Python `3.8`. Later on I tried to use dataclasses and it failed with the exception that the version available does not support dataclasses. I then SSHed to the host of the Function App and by using `python --version` figured out that version `3.6` was actually installed. As dataclasses are available from `3.7` on it makes sense why this module can't be used.
But what can I do to actually have version `3.8` running on the Function App host? | 2020/07/03 | [
"https://Stackoverflow.com/questions/62713607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7009990/"
] | This is a known issue (see e.g. <https://learn.microsoft.com/en-us/answers/questions/39124/azure-functions-always-using-python-36.html>) and hopefully fixed soon.
As workaround you can run the following command e.g. in the Cloud shell:
`az functionapp config set --name <func app name> --resource-group <rg name> --subscription <subscription id> --linux-fx-version "DOCKER|mcr.microsoft.com/azure-functions/python:3.0.13353-python3.8-appservice"`
After that you need to wait for a while so that the function app becomes usable again. Additionally I have made the experience that the installed packages are gone. Therefore you need also to republish your functions (having the necessary packages defined in `requirements.txt`). | For anyone running into this problem downgrading to Python 3.6 is a workaround.
I tried @quervernetzt solution but it didn't work, my pipelines started giving the following error.
```
##[error]Error: Error: Failed to deploy web package to App Service. Conflict (CODE: 409)
``` | 102 |
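A tiny guard dropped into the function app makes this kind of mismatch obvious at startup; the sketch below just logs the interpreter version and fails fast when it is too old for `dataclasses`, instead of producing a confusing ImportError later:

```
import sys

def require_python(minimum=(3, 7)):
    # dataclasses landed in 3.7; refuse to start on anything older.
    if sys.version_info < minimum:
        raise RuntimeError(
            "Python {}.{} found, need >= {}.{}".format(
                sys.version_info[0], sys.version_info[1], *minimum))
    return "{}.{}.{}".format(*sys.version_info[:3])

print("running on Python", require_python())
```

Calling this once at module import time gives you a clear log line on every cold start showing which runtime actually serves the app.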
15,424,895 | I'm new here in the world of coding and I haven't received a very warm welcome. I've been trying to learn python via the online tutorial <http://learnpythonthehardway.org/book/>. I've been able to struggle my way through the book up until exercise 48 & 49. That's where he turns students loose and says "You figure it out." But I simply can't. I understand that I need to create a Lexicon of possible words and that I need to scan the user input to see if it matches anything in the Lexicon but that's about it! From what I can tell, I need to create a list called lexicon:
```
lexicon = [
('directions', 'north'),
('directions', 'south'),
('directions', 'east'),
('directions', 'west'),
('verbs', 'go'),
('verbs', 'stop'),
('verbs', 'look'),
('verbs', 'give'),
('stops', 'the'),
('stops', 'in'),
('stops', 'of'),
('stops', 'from'),
('stops', 'at')
]
```
Is that right? I don't know what to do next? I know that each item in the list is called a tuple, but that doesn't really mean anything to me. How do I take raw input and assign it to the tuple? You know what I mean? So in exercise 49 he imports the lexicon and just inside python prints lexicon.scan("input") and it returns the list of tuples so for example:
```
from ex48 import lexicon
>>> print lexicon.scan("go north")
[('verb', 'go'), ('direction', 'north')]
```
Is 'scan()' a predefined function or did he create the function within the lexicon module? I know that if you use 'split()' it creates a list with all of the words from the input but then how does it assign 'go' to the tuple ('verb', 'go')?
Am I just way off? I know I'm asking a lot but I searched around everywhere for hours and I can't figure this one out on my own. Please help! I will love you forever! | 2013/03/15 | [
"https://Stackoverflow.com/questions/15424895",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2172498/"
] | Based on the ex48 instructions, you could create a few lists for each kind of word. Here's a sample for the first test case. The returned value is a list of tuples, so you can append to that list for each word given.
```
direction = ['north', 'south', 'east', 'west', 'down', 'up', 'left', 'right', 'back']
class Lexicon:
def scan(self, sentence):
self.sentence = sentence
self.words = sentence.split()
stuff = []
for word in self.words:
if word in direction:
stuff.append(('direction', word))
return stuff
lexicon = Lexicon()
```
He notes that numbers and exceptions are handled differently. | Like most people here, I am new to the world of coding, and I thought I'd attach my solution below as it might help other students.
I already saw a few more efficient approaches that I could implement. However, the code handles every use case of the exercise, and since I wrote it on my own with my beginner's mind, it does not take complicated shortcuts and should be very easy to understand for other beginners.
I therefore thought it might be beneficial for someone else learning. Let me know what you think. Cheers!
```
class Lexicon(object):
def __init__(self):
self.sentence = []
self.dictionary = {
'north' : ('direction','north'),
'south' : ('direction','south'),
'east' : ('direction','east'),
'west' : ('direction','west'),
'down' : ('direction','down'),
'up' : ('direction','up'),
'left' : ('direction','left'),
'right' : ('direction','right'),
'back' : ('direction','back'),
'go' : ('verb','go'),
'stop' : ('verb','stop'),
'kill' : ('verb','kill'),
'eat' : ('verb', 'eat'),
'the' : ('stop','the'),
'in' : ('stop','in'),
'of' : ('stop','of'),
'from' : ('stop','from'),
'at' : ('stop','at'),
'it' : ('stop','it'),
'door' : ('noun','door'),
'bear' : ('noun','bear'),
'princess' : ('noun','princess'),
'cabinet' : ('noun','cabinet'),
}
def scan(self, input):
loaded_imput = input.split()
self.sentence.clear()
for item in loaded_imput:
try:
int(item)
number = ('number', int(item))
self.sentence.append(number)
except ValueError:
word = self.dictionary.get(item.lower(), ('error', item))
self.sentence.append(word)
return self.sentence
lexicon = Lexicon()
``` | 103 |
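A quick sanity check of the second solution's behaviour, with the same classification rules (numbers first via `int()`, dictionary lookup with an `('error', word)` fallback) compressed into a standalone function so it is easy to test:

```
def scan(sentence, dictionary):
    result = []
    for item in sentence.split():
        try:
            result.append(('number', int(item)))  # numbers take priority
        except ValueError:
            result.append(dictionary.get(item.lower(), ('error', item)))
    return result

# A trimmed-down lexicon covering one word of each type.
lexicon = {'go': ('verb', 'go'), 'north': ('direction', 'north'),
           'the': ('stop', 'the'), 'bear': ('noun', 'bear')}

print(scan("go north", lexicon))      # [('verb', 'go'), ('direction', 'north')]
print(scan("eat 12 bears", lexicon))  # unknown words come back tagged 'error'
```

The `.lower()` call also gives the case-insensitive matching the exercise's tests expect.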
12,424,351 | I want to run a shell command from python and receive its output with subprocess.Popen. The problem is, when I close the process, sending Ctrl-C, I don't get any output. What am I doing wrong? Code:
```
>>> import subprocess
>>> sub = subprocess.Popen(["xinput", "test", "8"], stdout=subprocess.PIPE) #receive mouse events
>>> output = sub.communicate()[0].read()
^CTraceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.6/subprocess.py", line 693, in communicate
stdout = self.stdout.read()
KeyboardInterrupt
>>> output
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'output' is not defined
```
Inspired by this post by Jett:
[Reading stdout from xinput test in python](https://stackoverflow.com/questions/12420999/reading-stdout-from-xinput-test-in-python) | 2012/09/14 | [
"https://Stackoverflow.com/questions/12424351",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1360544/"
] | The issue here is that the `KeyboardInterrupt` is sent during the call to `communicate`. As a result, `communicate` never returns and so it's output is never stored in the variable `output` and you get the `NameError` when you try to use it. One workaround would be the following:
```
import subprocess
sub = subprocess.Popen(["xinput", "test", "8"], stdout=subprocess.PIPE)
lines = [] #Need someplace to store the data as it comes
try:
for line in sub.stdout: #read one line from standard out, store it in lines
lines.append(line)
except KeyboardInterrupt: #keyboardInterrupt happened. Stop process
sub.terminate()
finally: #Join our lines into a single buffer (like `communicate`)
output = ''.join(lines)
del lines #clean `lines` out of our namespace (just because).
``` | @pythonm already explained the `NameError`.
Furthermore, you're using the output of `Popen.communicate()` conceptually wrong. It returns a 2-tuple of strings: `(stdout, stderr)`. It does not return two file-like objects. That's why your `sub.communicate()[0].read()` would fail if `communicate()` returned.
Until the subprocess returns, `communicate()` aggregates all of its stdout and stderr (considering that you provided `stdout=subprocess.PIPE` and `stderr=subprocess.PIPE` to the constructor). Only after the subprocess has terminated, you have access to what `communicate()` collected during the runtime of the subprocess.
If you would like to monitor a subprocess' output in real time, then `communicate()` is the wrong method. Run the subprocess, monitor it (within for example a loop) and interact with its `Popen.stdout` and `Popen.stderr` attributes (which are file-like objects then). @mgilson's answer shows you one way how to do it :) | 113 |
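The difference between the two patterns is easy to see with a harmless subprocess instead of `xinput`; `communicate()` only hands you anything once the child has exited, while iterating `sub.stdout` yields lines as they arrive. A sketch, using `sys.executable` as the child so it runs anywhere:

```
import subprocess
import sys

# A harmless stand-in for xinput: a child that prints two lines and exits.
child = [sys.executable, "-c", "print('line 1'); print('line 2')"]

# Pattern 1: communicate() blocks until the child exits, then returns everything.
out, _ = subprocess.Popen(child, stdout=subprocess.PIPE).communicate()

# Pattern 2: iterate stdout to get lines as they arrive -- what you want for a
# long-running process you intend to interrupt with Ctrl-C.
sub = subprocess.Popen(child, stdout=subprocess.PIPE)
lines = [line.decode().rstrip() for line in sub.stdout]
sub.wait()
print(lines)  # ['line 1', 'line 2']
```

With pattern 2, everything collected before the `KeyboardInterrupt` is already in `lines`, which is exactly what the accepted answer exploits.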
65,495,956 | I have searched far and wide, and have followed just about everything... I cannot figure out why this keeps happening to my Python package I've created. It's not a simple "install dependency and you're good" as it's my own project I am attempting to create.
Here's my file structure:
```
-jarvis-discord
--jarvis_discord_bot
---__init__.py
---jarvis.py
---config.py
---cogs
----__init__.py
----all the cogs are here
```
The error given:
```
++ PWD
line 3: PWD: command not found
export PYTHONPATH=
PYTHONPATH=
python3 jarvis_discord_bot/jarvis.py
Traceback (most recent call last):
File "/buddy/jarvis-discord/jarvis_discord_bot/jarvis.py", line 40, in <module>
from jarvis_discord_bot.cogs import (
ModuleNotFoundError: No module named 'jarvis_discord_bot'
```
I've tried creating a `pipenv` as well and have had no luck either. Same error as above. There's something wrong with how I'm setting up my Python environment... granted I'm also a newbie.
The weird thing, to top this all off, is that it runs locally on my own machine just fine. So I am at a complete and utter loss for what to do and could use some help and direction on where to go from here.
Thanks! | 2020/12/29 | [
"https://Stackoverflow.com/questions/65495956",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13002900/"
] | If you are using relative file paths, you have to use
`from .cogs import (`
because jarvis.py can't see jarvis\_discord\_bot from one level below.
The . in front of cogs makes the import relative to the current package, so cogs is found alongside jarvis.py. | Figured out what was the issue!
In my run file, I had to set `PYTHONPATH` from `PWD` to the actual folder of the project. Good luck to anyone reading this in the future! | 114 |
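If editing the launch script is not an option, the same effect can be had from inside `jarvis.py` by prepending the project root to `sys.path` before the absolute import. This is a workaround sketch, not the only fix; adjust the number of `.parent`s to your layout:

```
import sys
from pathlib import Path

# Project root = the directory that *contains* jarvis_discord_bot/.
# From jarvis_discord_bot/jarvis.py that is one level above this file;
# fall back to the current directory for interactive sessions.
try:
    project_root = Path(__file__).resolve().parent.parent
except NameError:
    project_root = Path.cwd().resolve()

if str(project_root) not in sys.path:
    sys.path.insert(0, str(project_root))
print("import root:", project_root)
```

After this runs, `from jarvis_discord_bot.cogs import ...` resolves regardless of which directory the process was started from.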
50,151,698 | i have two table like this:
```
table1
id(int) | desc(TEXT)
--------------------
0 | "desc1"
1 | "desc2"
table2
id(int) | table1_id(TEXT)
------------------------
0 | "0"
1 | "0;1"
```
i want to select data into table2 and replace table1\_id by the desc field in table1, when i have string with ';' separator it means i have multiple selections.
im able to do it for single selection like this
```
SELECT table1.desc
FROM table2 LEFT JOIN table1 ON table1.id = CAST(table2.table1_id as integer);
```
Output wanted with a SELECT on table2 where id = 1:
```
"desc"
------
"desc1, desc2"
```
Im using Postgresql10, python3.5 and sqlalchemy
I know how to do it by extracting data and processing it with python then query again but im looking for a way to do it with one SQL query.
PS: I cant modify the table2. | 2018/05/03 | [
"https://Stackoverflow.com/questions/50151698",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5494686/"
] | You can convert the CSV value into an array, then join on that:
```
select string_agg(t1.descr, ',') as descr
from table2 t2
join table1 t1 on t1.id = any (string_to_array(t2.table1_id, ';')::int[])
where t2.id = 1
``` | That is really an abominable data design.
Consequently you will have to write a complicated query to get your desired result:
```
SELECT string_agg(table1."desc", ', ')
FROM table2
CROSS JOIN LATERAL regexp_split_to_table(table2.table1_id, ';') x(d)
JOIN table1 ON x.d::integer = table1.id
WHERE table2.id = 1;
string_agg
--------------
desc1, desc2
(1 row)
``` | 115 |
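To see what either query computes, here is the same lookup done in plain Python over the sample rows: split `table1_id` on `;`, map each id through `table1`, and join the descriptions. This is only an illustration of the logic, not a replacement for doing it in SQL:

```
table1 = {0: "desc1", 1: "desc2"}   # id -> desc
table2 = {0: "0", 1: "0;1"}         # id -> semicolon-separated table1 ids

def descs_for(row_id):
    # string_to_array(..., ';')::int[] followed by the join + string_agg.
    ids = (int(part) for part in table2[row_id].split(";"))
    return ", ".join(table1[i] for i in ids)

print(descs_for(1))  # -> desc1, desc2
print(descs_for(0))  # -> desc1
```

Seeing it spelled out this way also makes the answers' point concrete: the database is doing string parsing that a proper join table would make unnecessary.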
64,791,458 | Here is my docker-compose.yml used to create the database container.
```
version: '3.7'
services:
application:
build:
context: ./app
dockerfile: dockerfile #dockerfile-prod
depends_on:
- database_mongo
- database_neo4j
- etl_pipeline
environment:
- flask_env=dev #flask_env=prod
volumes:
- ./app:/app
ports:
- "8080:8080" #- 8080:8080
database_mongo:
image: "mongo:4.2"
expose:
- 27017
volumes:
- ./data/database/mongo:/data/db
database_neo4j:
image: neo4j:latest
expose:
- 27018
volumes:
- ./data/database/neo4j:/data
ports:
- "7474:7474" # web client
- "7687:7687" # DB default port
environment:
- NEO4J_AUTH=none
etl_pipeline:
depends_on:
- database_mongo
- database_neo4j
build:
context: ./data/etl
dockerfile: dockerfile #dockerfile-prod
volumes:
- ./data/:/data/
- ./data/etl:/app/
```
I'm trying to connect to my neo4j database with python driver. I have already been able to connect to mongoDb with this line:
```
mongo_client = MongoClient(host="database_mongo")
```
I'm trying to do something similar to the mongoDb to connect to my neo4j with the GraphDatabase in neo4j like this:
```
url = "{scheme}://{host_name}:{port}".format(scheme = "bolt", host_name="database_neo4j", port = 7687)
baseNeo4j = GraphDatabase.driver(url, encrypted=False)
```
or with py2neo like this
```
neo_client = Graph(host="database_neo4j")
```
However, none of this has worked yet, so I'm not sure if I'm using the right syntax to use neo4j with Docker. I've tried many things and looked around, but couldn't find the answer...
The whole error message is:
```
etl_pipeline_1 | MongoClient(host=['database_mongo:27017'], document_class=dict, tz_aware=False, connect=True)
etl_pipeline_1 | Traceback (most recent call last):
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 929, in _connect
etl_pipeline_1 | s.connect(resolved_address)
etl_pipeline_1 | ConnectionRefusedError: [Errno 111] Connection refused
etl_pipeline_1 |
etl_pipeline_1 | During handling of the above exception, another exception occurred:
etl_pipeline_1 |
etl_pipeline_1 | Traceback (most recent call last):
etl_pipeline_1 | File "main.py", line 26, in <module>
etl_pipeline_1 | baseNeo4j = GraphDatabase.driver(url, encrypted=False)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 183, in driver
etl_pipeline_1 | return cls.bolt_driver(parsed.netloc, auth=auth, **config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 196, in bolt_driver
etl_pipeline_1 | return BoltDriver.open(target, auth=auth, **config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 359, in open
etl_pipeline_1 | pool = BoltPool.open(address, auth=auth, pool_config=pool_config, workspace_config=default_workspace_config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 531, in open
etl_pipeline_1 | seeds = [pool.acquire() for _ in range(pool_config.init_size)]
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 531, in <listcomp>
etl_pipeline_1 | seeds = [pool.acquire() for _ in range(pool_config.init_size)]
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 545, in acquire
etl_pipeline_1 | return self._acquire(self.address, timeout)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 409, in _acquire
etl_pipeline_1 | connection = self.opener(address, timeout)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 528, in opener
etl_pipeline_1 | return Bolt.open(addr, auth=auth, timeout=timeout, routing_context=routing_context, **pool_config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 198, in open
etl_pipeline_1 | keep_alive=pool_config.keep_alive,
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 1049, in connect
etl_pipeline_1 | raise last_error
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 1039, in connect
etl_pipeline_1 | s = _connect(resolved_address, timeout, keep_alive)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 943, in _connect
etl_pipeline_1 | raise ServiceUnavailable("Failed to establish connection to {!r} (reason {})".format(resolved_address, error))
etl_pipeline_1 | neo4j.exceptions.ServiceUnavailable: Failed to establish connection to IPv4Address(('172.29.0.2', 7687)) (reason [Errno 111] Connection refused)
``` | 2020/11/11 | [
"https://Stackoverflow.com/questions/64791458",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14620901/"
] | Usually languages implement functionalities as simply as possible.
Under the hood, class methods are just plain functions that take an object pointer as an argument; an object is really just a data structure plus functions that can operate on it.
Normally the compiler knows exactly which function should operate on the object.
However, with polymorphism a function may be overridden, and then the compiler doesn't know the concrete class type: the object may be a Derived1 or a Derived2.
In that case the compiler adds a vtable to the object, containing pointers to the functions that could have been overridden.
For those overridable methods, the program does a lookup in this table at call time to see which function should be executed.
You can see how it can be implemented by seeing how polymorphism can be implemented in C:
[How can I simulate OO-style polymorphism in C?](https://stackoverflow.com/questions/524033/how-can-i-simulate-oo-style-polymorphism-in-c) | No, it does not. Functions are class-wide. When you allocate an object in C++ it contains space for all its attributes plus, if the class has virtual functions, a pointer to a per-class vtable holding pointers to its virtual methods, whether declared in its own class or inherited from parent classes.
When you call a virtual method on that object, you essentially perform a lookup in that vtable and the appropriate method is called. | 116
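The mechanics both answers describe can be mimicked in Python: treat each "class" as a table of function pointers, let a derived table copy the base one and overwrite the overridden slot, and route every call through the table. This is a toy model of what a C++ compiler generates, not how CPython itself dispatches methods:

```
def base_speak(obj):
    return "base"

def derived_speak(obj):
    return "derived"

# Each vtable maps method name -> function pointer; the derived table
# copies the base table and overrides exactly one slot.
base_vtable = {"speak": base_speak}
derived_vtable = dict(base_vtable, speak=derived_speak)

def make(vtable):
    return {"vtable": vtable}          # an 'object' carries a vtable pointer

def call(obj, name):
    return obj["vtable"][name](obj)    # dynamic dispatch: look up, then call

objs = [make(base_vtable), make(derived_vtable)]
print([call(o, "speak") for o in objs])  # ['base', 'derived']
```

The key point survives the translation: the call site is identical for both objects, and which function runs is decided by the table the object points at, not by the code doing the calling.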
45,155,336 | I am running Ubuntu Desktop 16.04 on a VM and am trying to run [Volttron](https://github.com/VOLTTRON/volttron) using the standard install instructions, however I keep getting an error after the following steps:
```
sudo apt-get update
sudo apt-get install build-essential python-dev openssl libssl-dev libevent-dev git
git clone https://github.com/VOLTTRON/volttron
cd volttron
python bootstrap.py
```
My problem is with the last step `python bootstrap.py`. As soon as I get to this step, I get the error `bootstrap.py: error: refusing to run as root to prevent potential damage.` from my terminal window.
Has anyone else encountered this problem? Thoughts? | 2017/07/17 | [
"https://Stackoverflow.com/questions/45155336",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8322226/"
] | I would recommend passing in the name of the value you would like to update into the handle change function, for example:
```
import React, { Component } from 'react'
import { Dropdown, Grid } from 'semantic-ui-react'
class DropdownExampleRemote extends Component {
componentWillMount() {
this.setState({
optionsMembers: [
{ key: 1, text: 'DAILY', value: 'DAILY' },
{ key: 2, text: 'MONTHLY', value: 'MONTHLY' },
{ key: 3, text: 'WEEKLY', value: 'WEEKLY' },
],
optionsDays: [
{ key: 1, text: 'SUNDAY', value: 'SUNDAY' },
{ key: 2, text: 'MONDAY', value: 'MONDAY' },
{ key: 3, text: 'TUESDAY', value: 'TUESDAY' },
],
value: '',
member: '',
day: '',
})
}
handleChange = (value, key) => {
this.setState({ [key]: value });
}
render() {
const {optionsMembers, optionsDays, value, member, day } = this.state
return (
<Grid>
<Grid.Column width={6}>
<Dropdown
selection
options={optionsMembers}
value={member}
placeholder='Select Member'
onChange={(e,{value})=>this.handleChange(value, 'member')}
/>
</Grid.Column>
<Grid.Column width={6}>
<Dropdown
selection
options={optionsDays}
value={day}
placeholder='Select Day'
onChange={(e,{value})=>this.handleChange(value, 'day')}
/>
</Grid.Column>
<Grid.Column width={4}>
<div>{member}</div>
<div>{day}</div>
</Grid.Column>
</Grid>
)
}
}
export default DropdownExampleRemote
``` | Something along these lines can maybe work for you.
```
handleChange = (propName, e) => {
let state = Object.assign({}, this.state);
state[propName] = e.target.value;
this.setState(state)
}
```
You can pass in the name of the property you want to update and then use bracket notation to update that part of your state.
Hope this helps. | 117 |
53,435,428 | After reading all the existing post related to this issue, i still did not manage to fix it.
```
ModuleNotFoundError: No module named 'plotly'
```
I have tried all the following:
```
pip3 install plotly
pip3 install plotly --upgrade
```
as well as uninstalling plotly with:
```
pip3 uninstall plotly
```
And reinstalling it again, i get the following on terminal:
```
Requirement already satisfied, skipping upgrade: six in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.11.0)
Requirement already satisfied, skipping upgrade: nbformat>=4.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: retrying>=1.3.3 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from plotly) (1.3.3)
Requirement already satisfied, skipping upgrade: urllib3<1.25,>=1.21.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (1.24.1)
Requirement already satisfied, skipping upgrade: idna<2.8,>=2.5 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2.7)
Requirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (3.0.4)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from requests->plotly) (2018.10.15)
Requirement already satisfied, skipping upgrade: jsonschema!=2.5.0,>=2.4 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (2.6.0)
Requirement already satisfied, skipping upgrade: jupyter-core in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.4.0)
Requirement already satisfied, skipping upgrade: traitlets>=4.1 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (4.3.2)
Requirement already satisfied, skipping upgrade: ipython-genutils in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (from nbformat>=4.2->plotly) (0.2.0)
```
```
import plotly
import plotly.plotly as py
```
yield:
```
ModuleNotFoundError: No module named 'plotly'
```
My versions of pip(3) and python(3) both seem to be fine.
Can somebody please help?
Using Python3 on Atom 1.32.2 x64 | 2018/11/22 | [
"https://Stackoverflow.com/questions/53435428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10438271/"
] | Just run this to uninstall plotly and then build it from source. That should fix the import
```
pip uninstall plotly && python -m pip install plotly
``` | That sounds like a classic dependency issue.
* Check that your pip is tied to the same python version (3.6) as what you launch your script with (i.e. use `python3(.6)` to launch your script, not just `python`)
* Your logs aren't showing plotly as already installed. In fact, you probably dropped a line when pasting, but installing with `pip3.6 install -U plotly` should install the package if not already installed. | 118 |
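Both answers come down to making sure pip and the interpreter match. One way to make that explicit, sketched here with the standard library only, is to drive pip through the interpreter object itself:

```python
import subprocess
import sys

# `<this interpreter> -m pip ...` installs into the site-packages of the
# interpreter that will actually run `import plotly`, sidestepping any
# stray pip3 on PATH that belongs to a different Python.
cmd = [sys.executable, "-m", "pip", "install", "plotly"]
print(" ".join(cmd))
# subprocess.check_call(cmd)  # uncomment to actually run the install
```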
73,646,583 | In short, is there a pythonic way to write `SETTING_A = os.environ['SETTING_A']`?
I want to provide a module `environment.py` from which I can import constants that are read from environment variables.
##### Approach 1:
```
import os
try:
SETTING_A = os.environ['SETTING_A']
SETTING_B = os.environ['SETTING_B']
SETTING_C = os.environ['SETTING_C']
except KeyError as e:
raise EnvironmentError(f'env var {e} is not defined')
```
##### Approach 2:
```
import os
vs = ('SETTING_A', 'SETTING_B', 'SETTING_C')
try:
for v in vs:
locals()[v] = os.environ[v]
except KeyError as e:
raise EnvironmentError(f'env var {e} is not defined')
```
Approach 1 repeats the names of the variables, approach 2 manipulates `locals` and it's harder to see what constants will be importable from the module.
Is there a best practice to this problem? | 2022/09/08 | [
"https://Stackoverflow.com/questions/73646583",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10909217/"
] | You should describe the type of PersonDto:
```js
interface PersonDto {
id: string;
name: string;
country: string;
}
class Person {
private id: string;
private name: string;
private country: string;
constructor(personDto: PersonDto) {
this.id = personDto.id;
this.name = personDto.name;
this.country = personDto.country;
}
}
const data = {
"id": "1234fc8-33aa-4a39-9625-b435479e6328",
"name": "02_Aug 10:00",
"country": "UK"
};
const person = new Person(data);
console.log(person);
```
In case you are sure that all PersonDto properties are strings, you can simplify the type description:
`type PersonDto = { [key: string]: string };` | Try [`Object.assign`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/assign) to not have to type every property.
```typescript
interface PersonDto {
id: string;
name: string;
country: string;
}
class Person {
private id: string;
private name: string;
private country: string;
constructor(personDto: PersonDto) {
Object.assign(this, personDto);
}
}
const data = {
id: "1234fc8-33aa-4a39-9625-b435479e6328",
name: "02_Aug 10:00",
country: "UK"
};
const person = new Person(data);
console.log(person);
``` | 128 |
21,890,220 | tried multiplication of 109221975\*123222821 in python 2.7 prompt in two different ways
```
Python 2.7.3 (default, Sep 26 2013, 20:08:41)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 109221975*123222821
13458639874691475L
>>> 109221975*123222821.0
1.3458639874691476e+16
>>> int(109221975.0*123222821.0)
13458639874691476L
>>> 109221975*123222821 == int(109221975.0*123222821.0)
False
>>>
```
What I suspect here is that some precision inconsistency is causing this problem. Is it possible to predict when an inconsistency like this can happen? | 2014/02/19 | [
"https://Stackoverflow.com/questions/21890220",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1955093/"
Your `int` is 54 bits long, but a `float` holds only 53 significant bits, so effectively the last bit is rounded to even.
Internally, your float is represented as:
>
> (1 + 2225720309975242 / 2^52) \* 2^53 = 13458639874691476
>
>
>
Your `int` and `float` is stored in binary like the following:
```
101111110100001000111111001000111001000001000110010011
0 10000110100 0111111010000100011111100100011100100000100011001010
```
For `float`, the first part is the **sign**, the second is the **exponent**, and the third is the **significand**. Because space is allocated for the exponent, there isn't enough room left over for all 54 significant bits
With the two representations aligned this way, you can see the data is the same, but the `int` needs one extra bit on the right, while the `float` uses more (in this case wasted) space on the left | Because `int` in python has infinite precision, but `float` does not. (`float` is a double precision floating point number, which has 53 bits of precision.) | 129 |
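Both answers can be checked directly from a prompt; a quick sketch (the exact values assume IEEE-754 doubles, which CPython uses on all common platforms):

```python
import sys

assert sys.float_info.mant_dig == 53        # a float carries 53 significant bits
assert float(2**53) == 2**53                # still exactly representable
assert float(2**53 + 1) == float(2**53)     # 54 bits needed: last bit rounds to even
# The product from the question needs 54 bits, so the float version
# rounds up by one unit in the last place:
assert 109221975 * 123222821 == 13458639874691475
assert int(109221975.0 * 123222821.0) == 13458639874691476
```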
66,395,018 | I am new to python. at the moment I am coding a game with a friend. we are currently working on a combat system the only problem is we don't know how to update the the enemy's health once damage has been dealt. The code is as following.
```
enemy1_health = 150
broadsword_attack = 20
rusty_knife = 10.5
attacks = ["broadsword swing " + str(broadsword_attack), "rusty knife jab " + str(rusty_knife) ]
while enemy1_health > 0:
while player_health > 0:
enemy1_health = 150
print(attacks)
attackchoice = input("Choose an attack: ")
if attackchoice == ("broadsword swing"):
print (int(enemy1_health - 20))
if attackchoice == ("rusty knife jab"):
print (int(enemy1_health - 10.5))
print("you died")
quit()
print("you cleared the level")
```
"https://Stackoverflow.com/questions/66395018",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15293735/"
] | You need to change the enemy health outside of the print statement with a statement like this:
```
enemy1_health = enemy1_health - 20
```
or like this, which does the same thing:
```
enemy1_health -= 20
```
You also reset enemy1\_health every time the loop loops, remove that.
You don't define player\_health, define that.
Your loop goes forever until you die.
So your code should end up looking more like this:
```
enemy1_health = 150
broadsword_attack = 20
rusty_knife = 10.5
player_health = 100
attacks = ["broadsword swing " + str(broadsword_attack), "rusty knife jab " + str(rusty_knife)]
while enemy1_health > 0:
print(attacks)
attackchoice = input("Choose an attack: ")
if attackchoice == ("broadsword swing"):
enemy1_health -= 20
if attackchoice == ("rusty knife jab"):
enemy1_health -= 10.5
print(enemy1_health)
if player_health <= 0:
print("you died")
quit()
print("you cleared the level")
```
This still requires quite a bit of tweaking, it'd be a complete working game if it was like this (basically, you win if you spam broadsword attacks because they do more damage):
```
enemy1_health = 150
enemy1_attack = 10
player_health = 100
broadsword_attack = 20
rusty_knife = 10.5
attacks = ["broadsword swing " + str(broadsword_attack), "rusty knife jab " + str(rusty_knife)]
while enemy1_health > 0:
print(attacks)
attackchoice = input("Choose an attack: ")
if attackchoice == ("broadsword swing"):
enemy1_health -= broadsword_attack
if attackchoice == ("rusty knife jab"):
enemy1_health -= rusty_knife
print(f'A hit! The enemy has {enemy1_health} health left.')
if enemy1_health > 0:
player_health -= enemy1_attack
print(f'The enemy attacks and leaves you with {player_health} health.')
if player_health <= 0:
print("you died")
quit()
print("you cleared the level")
``` | You need to change the enemy health outside the print statement.
do:
```
if attackchoice == ("rusty knife jab"):
enemy1_health = enemy1_health - 10.5
print(enemy1_health)
```
and you can do the same for the other attacks.
You also have enemy health defined in the while loop. you need to define it outside of the loop. | 131 |
During development of Pylint, we encountered an [interesting problem related to a non-dependency that may break the `pylint` package](https://github.com/PyCQA/pylint/issues/1318).
The case is the following:
* `python-future` had a conflicting alias to `configparser` package. [Quoting official docs](http://python-future.org/whatsnew.html#what-s-new-in-version-0-16-0-2016-10-27):
>
> This release removes the configparser package as an alias for ConfigParser on Py2 to improve compatibility with Lukasz Langa’s backported configparser package. Previously python-future and the configparser backport clashed, causing various compatibility issues. (Issues #118, #181)
>
>
>
* `python-future` itself **is not** a dependency of Pylint
What would be a standard way to enforce *if python-future is present, force it to 0.16 or later* limitation? I want to avoid defining dependency as `future>=0.16` - by doing this I'd force users to install package that they don't need and won't use in a general case. | 2017/06/20 | [
"https://Stackoverflow.com/questions/44659242",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2912340/"
] | ```
kw = {}
try:
import future
except ImportError:
pass
else:
kw['install_requires'] = ['future>=0.16']
setup(
…
**kw
)
``` | One workaround for this issue is to define this requirement only for the `all` target, so only if someone adds `pylint[all]>=1.2.3` as a requirement they will have futures installed/upgraded.
At this moment I don't know another way to "ignore or upgrade" a dependency.
Also, I would avoid adding Python logic to `setup.py` in order to make it "smart"; that is a well-known distribution anti-pattern ;) | 133 |
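The `pylint[all]`-style install target mentioned above is declared through setuptools' `extras_require`; a minimal sketch (the group name `with-future` is illustrative, not an actual pylint extra):

```python
# Declared in setup.py: a plain `pip install pylint` ignores the group,
# while `pip install "pylint[with-future]"` also pulls in the pinned future.
EXTRAS = {"with-future": ["future>=0.16"]}

# from setuptools import setup
# setup(name="pylint", ..., extras_require=EXTRAS)
print(EXTRAS)
```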
37,369,079 | I have a lab colorspace
[![enter image description here](https://i.stack.imgur.com/3pXgm.png)](https://i.stack.imgur.com/3pXgm.png)
And I want to "bin" the colorspace in a grid of 10x10 squares.
So the first bin might be (-110,-110) to (-100,-100) then the next one might be (-100,-110) to (-90,-100) and so on. These bins could be bin 1 and bin 2
I have seen np.digitize() but it appears that you have to pass it 1-dimensional bins.
A rudimentary approach that I have tried is this:
```
for fn in filenames:
image = color.rgb2lab(io.imread(fn))
ab = image[:,:,1:]
width,height,d = ab.shape
reshaped_ab = np.reshape(ab,(width*height,d))
print reshaped_ab.shape
images.append(reshaped_ab)
all_abs = np.vstack(images)
all_abs = shuffle(all_abs,random_state=0)
sns
df = pd.DataFrame(all_abs[:3000],columns=["a","b"])
top_a,top_b = df.max()
bottom_a,bottom_b = df.min()
range_a = top_a-bottom_a
range_b = top_b-bottom_b
corner_a = bottom_a
corner_b = bottom_b
bins = []
for i in xrange(int(range_a/10)):
for j in xrange(int(range_b/10)):
bins.append([corner_a,corner_b,corner_a+10,corner_b+10])
corner_b = bottom_b+10
corner_a = corner_a+10
```
but the "bins" that result seem kinda sketchy. For one thing there are many empty bins, since the color space doesn't have values in a square arrangement and that code pretty much just boxes off from the max and min values. Additionally, the rounding might cause issues. I am wondering if there is a better way to do this? I have heard of color histograms, which count the values in each "bin". I don't need the counts, but the bins are, I think, what I am looking for here.
Ideally each bin would be an object with a label, so I could do bins.indices[0] and it would return the bounding box I gave it. I could then also bin each observation: if a new color were color = [15.342,-6.534], color.bin would return 15, i.e. the 15th bin.
I realize this is a lot to ask for, but I think it must be a somewhat common need for people working with color spaces. So is there any python module or tool that can accomplish what I'm asking? How would you approach this? thanks! | 2016/05/21 | [
"https://Stackoverflow.com/questions/37369079",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1123905/"
] | The answer is to not use SSTATE\_DUPWHITELIST for this at all. Instead, in the libftdi recipe's do\_install (or do\_install\_append, if the recipe itself doesn't define its own do\_install) you should delete the duplicate files from within ${D} and then they won't get staged and the error won't occur. | I managed to solve this problem by adding the SSTATE\_DUPWHITELIST to the bitbake recipe of the package as follows:
SSTATE\_DUPWHITELIST = "${TMPDIR}/PATH/TO/THE/FILES"
I added the absolute paths of all 6-7 files that had the conflict to the list. I did that because they were basically coming from the same source, so it was safe to do. Correct me if there is a better way, though.
Hope this helps someone! | 135 |
70,008,841 | I was able to follow this example1 and let my ec2 instance read from S3.
In order to write to the same bucket I thought changing line 572 from `grant_read()` to `grant_read_write()`
should work.
```py
...
# Userdata executes script from S3
instance.user_data.add_execute_file_command(
file_path=local_path
)
# asset.grant_read(instance.role)
asset.grant_read_write(instance.role)
...
```
Yet the documented3 function cannot be accessed according to the error message.
```
>> 57: Pyright: Cannot access member "grant_read_write" for type "Asset"
```
What am I missing?
---
1 <https://github.com/aws-samples/aws-cdk-examples/tree/master/python/ec2/instance>
2 <https://github.com/aws-samples/aws-cdk-examples/blob/master/python/ec2/instance/app.py#L57>
3 <https://docs.aws.amazon.com/cdk/latest/guide/permissions.html#permissions_grants> | 2021/11/17 | [
"https://Stackoverflow.com/questions/70008841",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1172907/"
] | This is the [documentation](https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_s3_assets/Asset.html) for Asset:
>
> An asset represents a local file or directory, which is automatically
> uploaded to S3 and then can be referenced within a CDK application.
>
>
>
The method grant\_read\_write isn't provided, as it is pointless. The documentation you've linked doesn't apply here. | an asset is just a Zip file that will be uploded to the bootstraped CDK s3 bucket, then referenced by Cloudformation when deploying.
if you have an script you want ot put into an s3 bucket, you dont want to use any form of asset cause that is a zip file. You would be better suited using a boto3 command to upload it once the bucket already exists, or making it part of a codePipeline to create the bucket with CDK then the next step in the pipeline uploads it.
the grant\_read\_write is for `aws_cdk.aws_s3.Bucket` constructs in this case. | 138 |
2,433,703 | I am running Cygwin Python version 2.5.2.
I have a three-line source file, called import.py:
```
#!/usr/bin/python
import xml.etree.ElementTree as ET
print "Success!"
```
When I execute "python import.py", it works:
```
C:\Temp>python import.py
Success!
```
When I run the python interpreter and type the commands, it works:
```
C:\Temp>python
Python 2.5.2 (r252:60911, Dec 2 2008, 09:26:14)
[GCC 3.4.4 (cygming special, gdc 0.12, using dmd 0.125)] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
>>> #!/usr/bin/python
... import xml.etree.ElementTree as ET
>>> print "Success!"
Success!
>>>
```
But when I execute "import.py', it does not work:
```
C:\Temp>which python
/usr/bin/python
C:\Temp>import.py
Traceback (most recent call last):
File "C:\Temp\import.py", line 2, in ?
import xml.etree.ElementTree as ET
ImportError: No module named etree.ElementTree
```
When I remove the first line (#!/usr/bin/python), I get the same error. I need that line in there, though, for when this script runs on Linux. And it works fine on Linux.
Any ideas?
Thanks. | 2010/03/12 | [
"https://Stackoverflow.com/questions/2433703",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5397/"
] | Probably py extension is connected to some other python interpreter than the one in /usr/bin/python | Try:
```
./import.py
```
Most people don't have "." in their path.
just typing python will call the cygwin python.
import.py will likely call whichever python is associated with .py files under windows.
You are using two different python executables. | 139 |
1,550,226 | I have installed a python package with `python setup.py install`.
How do I uninstall it? | 2009/10/11 | [
"https://Stackoverflow.com/questions/1550226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/63051/"
] | At `{virtualenv}/lib/python2.7/site-packages/` (if not using virtualenv then `{system_dir}/lib/python2.7/dist-packages/`)
* Remove the egg file (e.g. `distribute-0.6.34-py2.7.egg`)
* If there is any from file `easy-install.pth`, remove the corresponding line (it should be a path to the source directory or of an egg file). | **Install from local**
`python setup.py install`
**Uninstall from local**
`pip uninstall mypackage` | 144 |
49,093,290 | I'm trying to install Python 3 alongside 2.7 with Homebrew but am receiving an error message I can't find a resolution to.
When attempting `brew update && brew install python3` I get the following error:
```
Error: python 2.7.12_2 is already installed
To upgrade to 3.6.4_3, run `brew upgrade python`
```
I want to leave the python 2.7 installation alone so I can have both Python 2 & 3 accessible on my machine so I'm nervous that upgrading will overwrite the current 2.7 installation.
I figure I can still perform a clean side-by-side install with the package from python.org, but I want to know why I'm getting this homebrew error
`brew doctor` shows the following Warnings containing python
```
Warning: "config" scripts exist outside your system or Homebrew directories.
`./configure` scripts often look for *-config scripts to determine if
software packages are installed, and what additional flags to use when
compiling and linking.
Having additional scripts in your path can confuse software installed via
Homebrew if the config script overrides a system or Homebrew provided
script of the same name. We found the following "config" scripts:
/Library/Frameworks/Python.framework/Versions/2.7/bin/python-config
/Library/Frameworks/Python.framework/Versions/2.7/bin/python2-config
/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7-config
Warning: Python is installed at /Library/Frameworks/Python.framework
Homebrew only supports building against the System-provided Python or a
brewed Python. In particular, Pythons installed to /Library can interfere
with other software installs.
Warning: Some installed formulae are missing dependencies.
You should `brew install` the missing dependencies:
brew install python@2
``` | 2018/03/04 | [
"https://Stackoverflow.com/questions/49093290",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3673055/"
] | You can press F9 inside [7zip](https://7zipguides.com/), you'll get two panes. In the first, you navigate to the archive you want to extract, and in the second you navigate to the folder where you want your files extracted. This will skip the temp folder step... | you can change **root** value in config/filesystems.php
```
'public' => [
'driver' => 'local',
'root' => storage_path('app/public'),
'url' => env('APP_URL').'/storage',
'visibility' => 'public',
],
``` | 154 |
I was working with two instances of a python class when I realized they were using the same values. I think I have a misunderstanding of what classes are used for.
A much simpler example:
```
class C():
def __init__(self,err = []):
self.err = err
def add(self):
self.err.append(0)
a = C()
print(a.err) # []
a.add()
print(a.err) # [0]
b = C()
print(b.err) # [0]
b.add()
print(a.err) # [0,0]
print(b.err) # [0,0]
```
I don't understand why b.err starts as [0] instead of []. And why adding an element to b affects a too. | 2021/10/07 | [
"https://Stackoverflow.com/questions/69476449",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11934583/"
] | The reason is here:
`def __init__(self,err = []):`
default `err` value is saved inside class `C`. But `err` itself is mutable, so every time you append anything to it, next time it will have stored value and this default `err` value is saved as `a.err` and `b.err`:
```
a = C()
print(a.err) # a.err is err ([])
a.add()
print(a.err) # err is [0]
b = C()
print(b.err) # reused err that is [0]
b.add() # err is [0, 0]
print(a.err) # [0,0]
print(b.err) # [0,0]
```
So basically `err` inside `a` and `b` is the same
Article: <https://florimond.dev/en/posts/2018/08/python-mutable-defaults-are-the-source-of-all-evil/> | I recommend that you check the Python core language *features* first. Check the official FAQs for Python 3, particularly <https://docs.python.org/3/faq/programming.html#why-are-default-values-shared-between-objects> is what you are looking for.
According to the recommendations, you have to change your code like so
```py
from typing import List, Optional
class C():
    def __init__(self, err: Optional[List] = None):
        self.err = [] if err is None else err
def add(self):
self.err.append(0)
a = C()
print(a.err) # []
a.add()
print(a.err) # [0]
b = C()
print(b.err) # []
b.add()
print(a.err) # [0]
print(b.err) # [0]
```
I will also link to the concept of mutability in the docs as it seems that this was the issue for OP: <https://docs.python.org/3/glossary.html#term-mutable> | 155 |
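The sharing both answers describe can be observed directly: the default list is stored once on the `__init__` function object (CPython exposes it as `__defaults__`):

```python
class C:
    def __init__(self, err=[]):
        self.err = err

a, b = C(), C()
assert a.err is b.err                        # the very same list object
assert a.err is C.__init__.__defaults__[0]   # it lives on the function
a.err.append(0)
assert C().err == [0]                        # a fresh instance is already "dirty"
```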
I just started with bash and I have been stuck for some time on a simple if;then statement.
I use bash to run QIIME commands which are written in python. These commands allow me to deal with microbial DNA. From the raw dataset from the sequencing I first have to first check if they match the format that QIIME can deal with before I can proceed to the rest of the commands.
```
module load QIIME/1.9.1-foss-2016a-Python-2.7.11
echo 'checking mapping file and demultiplexing'
validate_mapping_file.py -m $PWD/map.tsv -o $PWD/mapcheck > tmp.txt
n_words=`wc -w tmp.txt`
echo "n_words:"$n_words
if [ n_words = '9 temp.txt' ];then
split_libraries_fastq.py -i $PWD/forward_reads.fastq.gz -b $PWD/barcodes.fastq.gz -m $PWD/map.tsv -o $PWD/demultiplexed
else
echo 'Error(s) in map'
exit 1
fi
```
If the map is good I expect the following output (9 words):
```
No errors or warnings were found in mapping file.
```
If it is bad (16 words):
```
Errors and/or warnings detected in mapping file. Please check the log and html file for details.
```
I want to used this output to condition the following commands split\_libraries\_fastq.py.
I tried many different version of the if;then statement, asked help around but nothing seems to be working.
Does anyone have an idea why the 'then' branch is not run?
Also I run it through a cluster.
Here is the output when my map is good, the second command is not ran:
```
checking mapping file and demultiplexing
n_words:9 tmp.txt
Error(s) in map
```
Thanks | 2017/10/24 | [
"https://Stackoverflow.com/questions/46906854",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8759792/"
] | **Key idea :** You can add `UITapGestureRecognizer` to `UIImageView`. Setting up a `selector` which will be fired for each tap. In the `selector` you can check for the co-ordinate where the tap was done. If the co-ordinate satisfy your condition for firing up an event, you can execute your task then.
**Adding the gesture recognizer:**
```
UITapGestureRecognizer *singleTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleImgViewTap:)];
[singleTap setNumberOfTapsRequired:1];
[yourImgView addGestureRecognizer:singleTap];
```
**Setting up the selector:**
```
-(void)handleImgViewTap:(UITapGestureRecognizer *)gestureRecognizer
{
    // this method is going to fire every time you tap on the
    // image view. you have to check whether the point where
    // the tap happened satisfies your path/area condition.
CGPoint point = [gestureRecognizer locationInView:yourImgView];
// here point.x and point.y is the location of the tap
// inside your image view.
if(/*your condition goes here*/)
{
// execute your staff here.
}
}
```
Hope it helps, Happy ios coding. | Given a view (ora imageview) you should define a UIBezierPath of your shape.
Add a taprecognizer to this view, and set the same view as the recognizer delegate.
In the delegate method use UIBezierPath.contains(\_:) to know if the tap is inside the path or not and decide to fire the tap event or not.
Let me know if you need a code example. | 158 |
Currently, I have an issue with Xcode: the process **IBDesignablesAgentCocoaTouch** freezes Xcode each time I edit a Storyboard.
So, I want to kill this process with a bash or python script, checking every x seconds whether the process is running.
I think I can use this script, but how do I add a timer (checking every x seconds)?
```
pid=$(ps -fe | grep 'IBDesignablesAgentCocoaTouch' | awk '{print $2}')
if [[ -n $pid ]]; then
kill $pid
else
echo "Does not exist"
fi
``` | 2017/03/02 | [
"https://Stackoverflow.com/questions/42553713",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4824110/"
] | Just use a while loop,
```
while sleep 20; do
pid=$(ps -fe | grep 'IBDesignablesAgentCocoaTouch' | awk '{print $2}')
if [[ -n $pid ]]; then
kill $pid
else
echo "Does not exist"
fi
done
```
The syntax `while sleep 20; do <code>` is similar to the one shown in the comments, `while true; do sleep 20 <code>`, except it saves a few keystrokes. | Use this **if the process is named IBDesignablesAgentCocoaTouch**:
```
kill $(pgrep -x IBDesignablesAgentCocoaTouch)
```
If the process exists it will get killed, if not nothing will happen.
`pgrep` will get the PID for you.
```
#!/bin/bash
while sleep 20; do
kill $(pgrep IBDesignablesAgentCocoaTouch)
done
```
If you don't want to use sleep, you can use `cron`. | 159 |
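Since the question allows a python script too, the same poll-and-kill loop can be sketched with the standard library; it shells out to `pgrep`, so the exact-name matching is the same as above:

```python
import os
import signal
import subprocess
import time

def find_pids(name):
    """PIDs whose exact process name matches `name`; [] if none found."""
    try:
        out = subprocess.run(["pgrep", "-x", name],
                             capture_output=True, text=True)
    except FileNotFoundError:   # pgrep not available on this system
        return []
    return [int(p) for p in out.stdout.split()]

def watch(name, interval=20):
    while True:
        for pid in find_pids(name):
            os.kill(pid, signal.SIGTERM)
        time.sleep(interval)

# watch("IBDesignablesAgentCocoaTouch")
```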
44,036,372 | Could anyone tell me what files I should download and which statements I must execute in the command line to install Matplotlib?
I have Python 2.7.13 on Windows 10 64 bit.
These are the files I unzipped:
![enter image description here](https://i.stack.imgur.com/su4yF.jpg)
All downloaded from: <http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy>
Commands I executed:
```
python -m pip install -U pip setuptools
python -m pip install matplotlib
python -m pip install -U pip
```
I am getting these two errors when checking if Numpy and Matplotlib are installed.
```
>>> import numpy
**Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
import numpy
File "numpy\__init__.py", line 142, in <module>
from . import add_newdocs
File "numpy\add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "numpy\lib\__init__.py", line 8, in <module>
from .type_check import *
File "numpy\lib\type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "numpy\core\__init__.py", line 26, in <module>
raise ImportError(msg)
ImportError:
Importing the multiarray numpy extension module failed. Most
likely you are trying to import a failed build of numpy.
If you're working with a numpy git repo, try `git clean -xdf` (removes all
files not under version control). Otherwise reinstall numpy.
Original error was: DLL load failed: %1 no es una aplicación Win32 válida.**
>>> import matplotlib
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
import matplotlib
File "matplotlib\__init__.py", line 122, in <module>
from matplotlib.cbook import is_string_like, mplDeprecation, dedent, get_label
File "matplotlib\cbook.py", line 33, in <module>
import numpy as np
File "numpy\__init__.py", line 142, in <module>
from . import add_newdocs
File "numpy\add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "numpy\lib\__init__.py", line 8, in <module>
from .type_check import *
File "numpy\lib\type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "numpy\core\__init__.py", line 26, in <module>
raise ImportError(msg)
ImportError:
Importing the multiarray numpy extension module failed. Most
likely you are trying to import a failed build of numpy.
If you're working with a numpy git repo, try `git clean -xdf` (removes all
files not under version control). Otherwise reinstall numpy.
Original error was: DLL load failed: %1 no es una aplicación Win32 válida.
``` | 2017/05/17 | [
"https://Stackoverflow.com/questions/44036372",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5513436/"
] | Instead of iterating over a simple list of strings. You can store the section along with it's target element as an object then iterate.
```
<div id="introDiv"></div>
<div id="aboutDiv"></div>
<div id="linksDiv"></div>
var sections = [
{ section: "intro", target: "introDiv" },
{ section: "about", target: "aboutDiv" },
{ section: "links", target: "linksDiv" }
];
$.each(sections, function(index, value) {
$.ajax({
url: "/data/" + index,
method: "get"
})
.then(function(result) {
$("#" + value.target).html(result);
});
});
```
You can dynamically create the element too but I made them static to illustrate the mapping. You'll also need a delegate to find the dynamically created element.
If you want the numeric id... you don't even need to know the target since it's all being created on-the-fly.
```
var sections = ["intro", "about", "links"];
$.each(sections, function(index, value) {
$.ajax({
url: "/data/" + index,
method: "get"
})
.then(function(result) {
var div = $("<div></div>").attr({ id: "id_" + index });
div.html(result);
$("#page").append(div);
});
});
```
But you can't guarantee the order of the responses -- that's the nature of the asynchronous requests. | Just set the Ajax to run synchronously, so the each loop will wait for your Ajax to finish before incrementing `counter`.
```
var counter = 1;
$.each(["intro", "about", "links"], function (index, value) {
    var frag = '<div id="id_' + counter + '"></div>\n';
    $("#page").append(frag);
    $.ajax({
        url: "/data/" + value,
        success: function (response) {
            $("#id_" + counter).html(response);
        },
        async: false
    });
    counter++;
});
``` | 161 |
21,123,963 | I am trying to write a primes module in python. One thing I would like to be able to write is
```
>>> primes.primesLessThan(12)
[2, 3, 5, 7, 11]
```
However, I would also like to be able to write
```
>>> primes.primesLessThan.Sundaram(12)
[2, 3, 5, 7, 11]
```
to force it to use the Sieve of Sundaram. My original idea was to make primesLessThan a class with several static methods, but since \_\_init\_\_ can't return anything, this didn't let me achieve the first example. Would this be better done as a separate module that primes imports, or is there something else I missed? | 2014/01/14 | [
"https://Stackoverflow.com/questions/21123963",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3195702/"
] | As a rule of thumb, if you have a class without any instance variables, an empty init method and just a bunch of static methods, then it's probably going to be simpler to organize it as a module instead.
```
#sieves module
def Sundaram(n):
return [2,3,5,7]
def Eratosthenes(n):
return [2,3,5,7]
```
And then you can use the functions from the module
```
import primes.sieves
primes.sieves.Sundaram(12)
```
Finally, python functions are first class and can be passed around as function parameters or stored in data structures. This means that if you ever need to write some code that depends on an algorithm choice, you can just pass that as a parameter.
```
def test_first_primes(algorithm):
return algorithm(10) == [2,3,5,7]
print (test_first_primes(Sundaram))
print (test_first_primes(Eratosthenes))
``` | Two ways I can think of to get these kinds of semantics.
* Make primes a class, and then make primesLessThan a property. It would also be a class, which implements `__iter__` etc. to simulate a list, while also having some subfunctions. primesLessThan would be a constructor to that class, with the argument having a default to allow passing through.
* Make primes itself support `__getitem__`/`__iter__`/etc. You can still use properties (with default), but make primesLessThan just set some internal variable in the class, and then return self. This lets you do them in any order i.e. primes.Sundaram.primesLessThan(12) would work the same way as primes.primesLessThan.Sundaram(12), though, that looks strange to me.
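A simpler variant of the first idea is to make primesLessThan a callable instance whose static methods select the algorithm. This is my own illustrative sketch (the `_PrimesLessThan` name and the sieve bodies are not from the question):

```python
class _PrimesLessThan(object):
    """Callable namespace: primesLessThan(n) uses a default sieve,
    primesLessThan.Sundaram(n) forces a specific algorithm."""

    @staticmethod
    def Eratosthenes(n):
        # Classic sieve: mark multiples of each prime up to sqrt(n)
        sieve = [True] * max(n, 2)
        sieve[0] = sieve[1] = False
        for i in range(2, int(n ** 0.5) + 1):
            if sieve[i]:
                for j in range(i * i, n, i):
                    sieve[j] = False
        return [i for i, is_prime in enumerate(sieve) if is_prime]

    @staticmethod
    def Sundaram(n):
        # Sieve of Sundaram: unmarked k in 1..(n-2)//2 give the odd primes 2*k + 1
        if n <= 2:
            return []
        k = (n - 2) // 2
        marked = [False] * (k + 1)
        for i in range(1, k + 1):
            j = i
            while i + j + 2 * i * j <= k:
                marked[i + j + 2 * i * j] = True
                j += 1
        return [2] + [2 * i + 1 for i in range(1, k + 1) if not marked[i]]

    def __call__(self, n):
        return self.Eratosthenes(n)


primesLessThan = _PrimesLessThan()

print(primesLessThan(12))           # [2, 3, 5, 7, 11]
print(primesLessThan.Sundaram(12))  # [2, 3, 5, 7, 11]
```

This returns plain lists (which the question asks for) rather than simulating one; the list-like `__iter__` design described above would work the same way for the method lookup.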
Either one of these are going to be a bit weird on the return values... you can create something that acts like a list, but it obviously won't be. You can have repr show it like a list, and you'll be able to iterate over like a list (i.e. `for prime in primes.Sundaram(12)`), but it can't return an actual list for obvious reasons.... | 164 |
42,230,269 | Searching for an alternative as OpenCV would not provide timestamps for **live** camera stream *(on Windows)*, which are required in my computer vision algorithm, I found ffmpeg and this excellent article <https://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/>
The solution uses ffmpeg, accessing its standard output (stdout) stream. I extended it to read the standard error (stderr) stream as well.
While working up the Python code on Windows, I received the video frames from ffmpeg stdout, but the stderr freezes after delivering the showinfo video filter details (timestamp) for the first frame.
I recollected seeing on ffmpeg forum somewhere that the video filters like showinfo are bypassed when redirected. Is this why the following code does not work as expected?
*Expected: It should write video frames to disk as well as print timestamp details.
Actual: It writes video files but does not get the timestamp (showinfo) details.*
Here's the code I tried:
```
import subprocess as sp
import numpy
import cv2
command = [ 'ffmpeg',
'-i', 'e:\sample.wmv',
'-pix_fmt', 'rgb24',
'-vcodec', 'rawvideo',
'-vf', 'showinfo', # video filter - showinfo will provide frame timestamps
'-an','-sn', #-an, -sn disables audio and sub-title processing respectively
'-f', 'image2pipe', '-'] # we need to output to a pipe
pipe = sp.Popen(command, stdout = sp.PIPE, stderr = sp.PIPE) # TODO someone on ffmpeg forum said video filters (e.g. showinfo) are bypassed when stdout is redirected to pipes???
for i in range(10):
raw_image = pipe.stdout.read(1280*720*3)
img_info = pipe.stderr.read(244) # 244 characters is the current output of showinfo video filter
print "showinfo output", img_info
image1 = numpy.fromstring(raw_image, dtype='uint8')
image2 = image1.reshape((720,1280,3))
# write video frame to file just to verify
videoFrameName = 'Video_Frame{0}.png'.format(i)
cv2.imwrite(videoFrameName,image2)
# throw away the data in the pipe's buffer.
pipe.stdout.flush()
pipe.stderr.flush()
```
So how to still get the frame timestamps from ffmpeg into python code so that it can be used in my computer vision algorithm... | 2017/02/14 | [
"https://Stackoverflow.com/questions/42230269",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/468716/"
] | Redirecting stderr works in python.
So instead of this `pipe = sp.Popen(command, stdout = sp.PIPE, stderr = sp.PIPE)`
do this `pipe = sp.Popen(command, stdout = sp.PIPE, stderr = sp.STDOUT)`
We could avoid redirection by adding an asynchronous call to read both the standard streams (stdout and stderr) of ffmpeg. This would avoid any mixing of the video frame and timestamp and thus the error-prone separation.
So modifying the original code to use `threading` module would look like this:
```
# Python script to read video frames and timestamps using ffmpeg
import subprocess as sp
import threading
import matplotlib.pyplot as plt
import numpy
import cv2
ffmpeg_command = [ 'ffmpeg',
'-nostats', # do not print extra statistics
#'-debug_ts', # -debug_ts could provide timestamps avoiding showinfo filter (-vcodec copy). Need to check by providing expected fps TODO
'-r', '30', # output 30 frames per second
'-i', 'e:\sample.wmv',
'-an','-sn', #-an, -sn disables audio and sub-title processing respectively
'-pix_fmt', 'rgb24',
'-vcodec', 'rawvideo',
#'-vcodec', 'copy', # very fast!, direct copy - Note: No Filters, No Decode/Encode, no quality loss
#'-vframes', '20', # process n video frames only. For Debugging
'-vf', 'showinfo', # showinfo videofilter provides frame timestamps as pts_time
'-f', 'image2pipe', 'pipe:1' ] # outputs to stdout pipe. can also use '-' which is redirected to pipe
# separate method to read images on stdout asynchronously
def AppendProcStdout(proc, nbytes, AppendList):
while proc.poll() is None: # continue while the process is alive
AppendList.append(proc.stdout.read(nbytes)) # read image bytes at a time
# separate method to read image info on stderr asynchronously
def AppendProcStderr(proc, AppendList):
while proc.poll() is None: # continue while the process is alive
try: AppendList.append(proc.stderr.next()) # read stderr until empty
except StopIteration: continue # ignore stderr empty exception and continue
if __name__ == '__main__':
# run ffmpeg command
pipe = sp.Popen(ffmpeg_command, stdout=sp.PIPE, stderr=sp.PIPE)
# 2 threads to talk with ffmpeg stdout and stderr pipes
framesList = [];
frameDetailsList = []
appendFramesThread = threading.Thread(group=None, target=AppendProcStdout, name='FramesThread', args=(pipe, 1280*720*3, framesList), kwargs=None, verbose=None) # assuming rgb video frame with size 1280*720
appendInfoThread = threading.Thread(group=None, target=AppendProcStderr, name='InfoThread', args=(pipe, frameDetailsList), kwargs=None, verbose=None)
# start threads to capture ffmpeg frames and info.
appendFramesThread.start()
appendInfoThread.start()
# wait for few seconds and close - simulating cancel
import time; time.sleep(2)
pipe.terminate()
# check if threads finished and close
appendFramesThread.join()
appendInfoThread.join()
# save an image per 30 frames to disk
savedList = []
for cnt,raw_image in enumerate(framesList):
if (cnt%30 != 0): continue
image1 = numpy.fromstring(raw_image, dtype='uint8')
image2 = image1.reshape((720,1280,3)) # assuming rgb image with size 1280 X 720
# write video frame to file just to verify
videoFrameName = 'video_frame{0}.png'.format(cnt)
cv2.imwrite(videoFrameName,image2)
savedList.append('{} {}'.format(videoFrameName, image2.shape))
print '### Results ###'
print 'Images captured: ({}) \nImages saved to disk:{}\n'.format(len(framesList), savedList) # framesList contains all the video frames got from the ffmpeg
print 'Images info captured: \n', ''.join(frameDetailsList) # this contains all the timestamp details got from the ffmpeg showinfo videofilter and some initial noise text which can be easily removed while parsing
``` | You can use [MoviePy](http://zulko.github.io/moviepy/index.html):
```
import moviepy.editor as mpy
vid = mpy.VideoFileClip('e:\\sample.wmv')
for timestamp, raw_img in vid.iter_frames(with_times=True):
# do stuff
``` | 165 |
14,981,069 | How can I use [Brython](https://www.brython.info/) to compile Python to Javascript? I want to do this on my computer, so I can then run the Javascript with nodejs, e.g.
```
$ python hello.py
Hello world
$ brython hello.py -o hello.js
$ node hello.js
Hello world
```
The examples on the Brython website only explain how do this in the browser <http://www.brython.info/index_en.html> | 2013/02/20 | [
"https://Stackoverflow.com/questions/14981069",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/284795/"
] | It seems they are very browser oriented, there is no command line option out of the box.
You can try to use their code yourself from node.js; perhaps it will work easily. It seems the `$py2js(src, module)` function does the actual conversion, so maybe you can just run it with the python code string as the first parameter.
Another option is to use pyjs: <http://pyjs.org/>, which does something similar and has a command line tool to do the conversion. | Brython has a console that runs in the browser, but not a compiler. It is meant for you to either import your python scripts into the html file, or write your python code into the html file. See pyjs if you want a conversion tool before the page loads. | 168
41,460,013 | ```
#!/usr/bin/env python2.7
import vobject
abfile='/foo/bar/directory/file.vcf' #ab stands for address book
ablist = []
with open(abfile) as source_file:
for vcard in vobject.readComponents(source_file):
ablist.append(vcard)
print ablist[0]==ablist[1]
```
The above code should return True but it does not because the vcards are considered different even though they are the same. One of the ultimate objectives is to find a way to remove duplicates from the vcard file. Bonus points: Is there a way to make the comparison compatible with using one of the fast ways to uniqify a list in Python such as:
```
set(ablist)
```
to remove duplicates? (e.g. convert the vcards to strings somehow...). In the code above len(set(ablist)) returns 2 and not 1 as expected...
In contrast, if instead of comparing the whole vcard we compare one component of it as in:
```
print ablist[0].fn==ablist[1].fn
```
then we do see the expected behavior and receive True as response...
Here is the file contents used in the test (with only two identical vcards):
```
BEGIN:VCARD
VERSION:3.0
FN:Foo_bar1
N:;Foo_bar1;;;
EMAIL;TYPE=INTERNET:foobar1@foo.bar.com
END:VCARD
BEGIN:VCARD
VERSION:3.0
FN:Foo_bar1
N:;Foo_bar1;;;
EMAIL;TYPE=INTERNET:foobar1@foo.bar.com
END:VCARD
``` | 2017/01/04 | [
"https://Stackoverflow.com/questions/41460013",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5965670/"
] | @Brian Barcelona, concerning your answer, just to let you know, instead of:
```
ablist = []
with open(abfile) as source_file:
for vcard in vobject.readComponents(source_file):
ablist.append(vcard)
```
You could do:
```
with open(abfile) as source_file:
ablist = list(vobject.readComponents(source_file))
```
By the way, I have looked in the source code of this module and your solution is not guaranteed to work because different components of a vcard could be the same but not in the same order. I think the best way is for you to check each relevant component yourself. | I have found the following will work - the insight is to "serialize()" the vcard:
```
#!/usr/bin/env python2.7
import vobject
abfile='/foo/bar/directory/file.vcf' #ab stands for address book
ablist = []
with open(abfile) as source_file:
for vcard in vobject.readComponents(source_file):
ablist.append(vcard)
print ablist[0].serialize()==ablist[1].serialize()
```
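Building on the serialize() insight, a small order-preserving helper (my own sketch) turns it into the set-style dedupe the question asks about; for vcards the key would be `lambda card: card.serialize()`:

```python
def dedupe(items, key=lambda item: item):
    """Return items with duplicates removed, keeping first occurrences."""
    seen = set()
    unique = []
    for item in items:
        k = key(item)
        if k not in seen:
            seen.add(k)
            unique.append(item)
    return unique

# For the address book:
# unique_cards = dedupe(ablist, key=lambda card: card.serialize())
```

The key only needs to be hashable, which the serialized string is.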
However, there should be a better way to do this... any help would be most welcomed! | 171 |
32,652,485 | I'm trying to convert a string date to a date object in python.
I did this so far
```
old_date = '01 April 1986'
new_date = datetime.strptime(old_date,'%d %M %Y')
print new_date
```
But I get the following error.
>
> ValueError: time data '01 April 1986' does not match format '%d %M %Y'
>
>
>
Any guess? | 2015/09/18 | [
"https://Stackoverflow.com/questions/32652485",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2728494/"
] | `%M` parses *minutes*, a numeric value, not a month. Your date specifies the month as `'April'`, so use `%B` to parse a *named* month:
```
>>> from datetime import datetime
>>> old_date = '01 April 1986'
>>> datetime.strptime(old_date,'%d %B %Y')
datetime.datetime(1986, 4, 1, 0, 0)
```
From the [*`strftime()` and `strptime()` Behavior* section](https://docs.python.org/2/library/datetime.html#strftime-strptime-behavior):
>
> `%B`
>
> Month as locale’s full name.
>
> January, February, ..., December (en\_US);
>
> Januar, Februar, ..., Dezember (de\_DE)
>
>
> `%M`
>
> Minute as a zero-padded decimal number.
>
> 00, 01, ..., 59
>
>
> | You can first guess the type of date format the string is using and then convert to the same system recognised date format.
I wrote a simple date_tools utility that you can find here: <https://github.com/henin/date_tools/>
### Installation: pip install date-tools
### Usage:
>
> from date_tools import date_guesser
> from datetime import datetime
>
> old_date = '01 April 1986'
> date_format = date_guesser.guess_date_format(old_date)
> new_date = datetime.strptime(old_date, date_format)
> print(new_date)
>
> | 172 |
64,934,782 | I am trying to read a JSON file but it gives the error below
*Data reference:
<https://github.com/ankitgoel1602/data-science/blob/master/json-data/level_1.json>
<https://github.com/ankitgoel1602/data-science/blob/master/json-data/multiple_levels.json>*
Code
```
with open("multiple_levels.json", 'r') as j:
contents = json.loads(j.read())
```
Error:
```
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
<ipython-input-14-0fce326c8851> in <module>
1 with open("multiple_levels.json", 'r') as j:
----> 2 contents = json.loads(j.read())
~\AppData\Local\Continuum\anaconda3\lib\json\__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
346 parse_int is None and parse_float is None and
347 parse_constant is None and object_pairs_hook is None and not kw):
--> 348 return _default_decoder.decode(s)
349 if cls is None:
350 cls = JSONDecoder
~\AppData\Local\Continuum\anaconda3\lib\json\decoder.py in decode(self, s, _w)
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
339 if end != len(s):
~\AppData\Local\Continuum\anaconda3\lib\json\decoder.py in raw_decode(self, s, idx)
353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
--> 355 raise JSONDecodeError("Expecting value", s, err.value) from None
356 return obj, end
JSONDecodeError: Expecting value: line 7 column 1 (char 6)
``` | 2020/11/20 | [
"https://Stackoverflow.com/questions/64934782",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5866905/"
] | You will want to use something like `map` instead
This is a simple change to your code:
```
const formattedCharacters = data.results.map(character => {
``` | I am not sure that I completely understand your question, but here is one way you could achieve the result you are probably looking for. I have kept the forEach loop in case there is a specific reason for keeping it:
```
// Json data example
function getCharacters() {
const data = {
info: {
count: 671,
pages: 34,
next: 'https://rickandmortyapi.com/api/character?page=2',
prev: null,
},
results: [
{
id: 1,
name: 'Rick Sanchez',
status: 'Alive',
species: 'Human',
type: '',
gender: 'Male',
origin: {
name: 'Earth (C-137)',
url: 'https://rickandmortyapi.com/api/location/1',
},
location: {
name: 'Earth (Replacement Dimension)',
url: 'https://rickandmortyapi.com/api/location/20',
},
image: 'https://rickandmortyapi.com/api/character/avatar/1.jpeg',
episode: [
'https://rickandmortyapi.com/api/episode/1',
'https://rickandmortyapi.com/api/episode/2',
],
url: 'https://rickandmortyapi.com/api/character/1',
created: '2017-11-04T18:48:46.250Z'
},
{
id: 2,
name: 'second name',
status: 'Alive',
species: 'Human',
type: '',
gender: 'Female',
origin: {
name: 'Mars???',
url: 'sample-url.com/sample/example',
},
location: {
name: 'Mars??? (Replacement Dimension)',
url: 'sample-url.com/sample/example',
},
image: 'sample-url.com/sample/example',
episode: [
'sample-url.com/sample/example',
'sample-url.com/sample/example',
],
url: 'sample-url.com/sample/example',
created: '2019-12-04T11:48:46.250Z'
}
]
}
// here is the problem
const formattedCharacters = data.results;
const character_array = [];
formattedCharacters.forEach(character=>{
//here instead of returning multiple times, just push value into an array
character_array.push({
id: character.id,
name: character.name,
status: character.status,
species: character.species,
gender: character.gender,
location: character.location.name,
image: character.image
});
})
return character_array;
}
const characters = getCharacters();
// therefore:
const character_1 = characters[0];
console.log(character_1);
```
The above would produce an array of all the elements inside of `data.results` with the values you need.
Hope that helped, AlphaHowl. | 173 |
10,049,054 | I want it to give 'Number of donuts: many' when the argument is more than 10 and 'Number of donuts: ' when the argument is count<10. But it always gives me 'Number of donuts: many'
```
#!/usr/bin/python2.6 -tt
import sys
def donuts(count):
if count < 10:
x = str(count)
else:
x = 'many'
print 'Number of donuts: ' + x
def main():
donuts(sys.argv[1])
if __name__ == "__main__":
main()
``` | 2012/04/06 | [
"https://Stackoverflow.com/questions/10049054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1318217/"
] | `count` is a string. Try `count = int(count)` | You should convert count to an int before comparing it to 10:
```
#!/usr/bin/python2.6 -tt
import sys
def donuts(count):
if int(count) < 10:
x = count
else:
x = 'many'
print 'Number of donuts: %s' % x
def main():
donuts(sys.argv[1])
if __name__ == "__main__":
main()
``` | 175 |
28,656,559 | I am trying to build the \_pjsua C extension in windows, using Visual studio 2012.
I downloaded the source code from here - <http://www.pjsip.org/download.htm>.
I have Python27 installed, and have added the **C:\Python27\include** and the **C:\Python27\libs** directories to the project **include** and **library** directories.
I followed the instructions here - <https://trac.pjsip.org/repos/wiki/Python_SIP/Build_Install>.
In the **Microsoft Windows with Visual Studio** under **Step 1: Building the C Extension** its says:
```
Visual Studio 2005:
1. Open pjproject-vs8.sln from the PJSIP distribution directory.
2. Select either Debug or Release from the build configuration
Note: the Python module does not support other build configurations.
3. In Visual Studio, right click python_pjsua project from the Solution Explorer panel, and select Build from the pop-up menu.
Note: the python_pjsua project is not built by default if you build the solution, hence it needs to be built manually by right-clicking and select Build from the pop-up menu.
4. The _pjsua.pyd Python module will be placed in pjsip-apps\lib directory.
or in case of debug, it will be _pjsua_d.pyd
```
In step 3 (building the python_pjsua project) I get the error
```
pjsua error lnk1181 cannot open input file python24.lib
```
in **C:/Python27/libs** I have the file **python27.lib**.
Does this C extension work only with Python 2.4 (python24)?
thanks in advance | 2015/02/22 | [
"https://Stackoverflow.com/questions/28656559",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1662033/"
] | No, it is not so.
You can use a simple hack:
**Copy python27.lib**, rename it to **python24.lib**, and place it in the **C:/Python27/libs** folder. Now you can build your extension, then run the **python setup-vc.py install** command in cmd. | The right solution for this is:
1. Open the python_pjsua property pages (right click->Properties);
2. Linker->Input->Additional Dependencies.
3. Change python24.lib to python27.lib (or python24_d.lib to python27_d.lib if debugging).
It should work and compile with no problem. | 182 |
33,326,193 | I need help finding a way to calculate the total cost of items when the price changes once the item count goes above a certain number, in Python 3.5.
For example,
First 6 items cost $8 each and after that, it costs $5 per item.
How can I achieve this without using an `if` statement and loop? | 2015/10/25 | [
"https://Stackoverflow.com/questions/33326193",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5426865/"
] | I would agree with the replies to this post concerning MCVE.
As for an answer to the question (to get the grader to accept your answer), remember that when inheriting the (Parent) `class Person` for (child) `class USResident`, (Parent) `class Person` will need to be initialized in (child) `class USResident` with:
`Person.__init__(self, name)`
So the code that gave me a correct answer was:
```
class USResident(Person):
"""
A Person who resides in the US.
"""
def __init__(self, name, status):
"""
Initializes a Person object. A USResident object inherits
from Person and has one additional attribute:
status: a string, one of "citizen", "legal_resident", "illegal_resident"
Raises a ValueError if status is not one of those 3 strings
"""
Person.__init__(self, name)
if status != 'citizen' and status != 'legal_resident' and \
status != 'illegal_resident':
raise ValueError()
else:
self.status = status
def getStatus(self):
"""
Returns the status
"""
return self.status
```
The final exam is over but you can go to Final Exam Code Graders in the sidebar of the course to check this code.
I started this course late so I just got to this question and I too was perplexed as to why I wasn't getting the "correct" output as well (for upwards of an hour!).
For those of you not in the course, here's a picture:
[![correct output for final exam problem 5](https://i.stack.imgur.com/lq9Fy.png)](https://i.stack.imgur.com/lq9Fy.png)
The course, for those who are interested, is "Introduction to Computer Science and Programming Using Python", or 6.00.1x, from [edX.org](https://www.edx.org/course/introduction-computer-science-mitx-6-00-1x-6) .
Unfortunately, only enrolled persons can access the code grader.
Cheers! | Actually it is very simple; it just tests whether you can use a constant in the class.
Something like `STATUS = ("c", "i", "l")`, and then raise the `ValueError` if the condition fails. | 183
42,689,852 | I'm trying to using the [Azure Python SDK](https://github.com/Azure/azure-sdk-for-python) to drive some server configuration management, but I'm having difficulty working out how I'm supposed to use the API to upload and configure SSL certificates.
I can successfully interrogate my Azure account to discovering the App Services that are available with the `WebSiteManagementClient`, and I can interrogate and manipulate DNS configurations using the `DnsManagementClient`.
I am also able to manually add an SSL certificate to an Azure App Service using [the instructions on the Azure website](https://learn.microsoft.com/en-us/azure/app-service-web/web-sites-configure-ssl-certificate).
However, it isn't at all clear to me what API endpoints I should be using to install a custom SSL certificate.
If I've got a `WebSiteManagementClient` named `client`, then I can see that:
* `client.certificates.get_certificate()` allows me to get a specific certificate by name - but `client.certificates` doesn't appear to have an API to list all available certificates.
* `client.certificates.create_or_update_certificate()` allows me to presumably idempotently create/update a certificate - but it requires a `CertificateEnvelope` argument, and I can't see where that object should be created.
* Assuming I manually upload a certificate, I can't work out what API endpoint I would use to install that certificate on a site. There are calls to `get_site_host_name_bindings` and `delete_site_host_name_binding`, but no obvious API to *create* the binding; there are dozens of calls to `configure_...` and `create_or_update_...`, but neither the naming of the API endpoints nor the API documentation is in any way illuminating as to which calls should be used.
Can anyone point me in the right direction? What Python API calls do I need to make to upload a certificate obtained from a third party, and install that certificate on an AppService under a specific domain?
Addendum
========
Here's some sample code, based on suggestions from @peter-pan-msft:
```
creds = ServicePrincipalCredentials(
client_id=UUID('<client>'),
secret='<secret>',
tenant=UUID('<tenant>'),
resource='https://vault.azure.net'
)
kv = KeyVaultClient(
credentials=creds
)
KEY_VAULT_URI = 'https://<vault>.vault.azure.net/'
with open('example.pfx', 'rb') as f:
data = f.read()
# Try to get the certificates
for cert in kv.get_certificates(KEY_VAULT_URI):
print(cert)
# or...
kv.import_certificate(KEY_VAULT_URI, 'cert name', data, 'password')
```
This code raises:
```
KeyVaultErrorException: Operation returned an invalid status code 'Forbidden'
```
The values for the credentials have worked for other operations, including getting and creating keys in the key store. If I modify the credentials to be known bad values, I get:
```
KeyVaultErrorException: Operation returned an invalid status code 'Unauthorized'
``` | 2017/03/09 | [
"https://Stackoverflow.com/questions/42689852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/218383/"
] | If you follow the [App Service walkthrough for importing certificates from Key Vault](https://learn.microsoft.com/azure/app-service/configure-ssl-certificate#import-a-certificate-from-key-vault), it'll tell you that your app needs read permissions to access certificates from the vault. But to initially import your certificate to Key Vault as you're doing, you'll need to grant your service principal certificate import permissions as well. Trying to import a certificate without import permissions will yield a "Forbidden" error like the one you're seeing.
There are also new packages for working with Key Vault in Python that replace `azure-keyvault`:
* [azure-keyvault-certificates](https://pypi.org/project/azure-keyvault-certificates/) [(Migration guide)](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/keyvault/azure-keyvault-certificates/migration_guide.md)
* [azure-keyvault-keys](https://pypi.org/project/azure-keyvault-keys/) [(Migration guide)](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/keyvault/azure-keyvault-keys/migration_guide.md)
* [azure-keyvault-secrets](https://pypi.org/project/azure-keyvault-secrets/) [(Migration guide)](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/keyvault/azure-keyvault-secrets/migration_guide.md)
[azure-identity](https://pypi.org/project/azure-identity/) is the package that should be used with these for authentication.
Here's an example of importing a certificate using `azure-keyvault-certificates`:
```
from azure.identity import DefaultAzureCredential
from azure.keyvault.certificates import CertificateClient
KEY_VAULT_URI = 'https://<vault>.vault.azure.net/'
credential = DefaultAzureCredential()
client = CertificateClient(KEY_VAULT_URI, credential)
with open('example.pfx', 'rb') as f:
data = f.read()
client.import_certificate("cert-name", data.encode(), password="password")
```
You can provide the same credentials that you used for `ServicePrincipalCredentials` by setting environment variables corresponding to the `client_id`, `secret`, and `tenant`:
```
export AZURE_CLIENT_ID="<client>"
export AZURE_CLIENT_SECRET="<secret>"
export AZURE_TENANT_ID="<tenant>"
```
(I work on the Azure SDK in Python) | According to your description, based on my understanding, I think you want to upload a certificate and use it on Azure App Service.
Per my experience for Azure Python SDK, there seems not to be any Python API for directly uploading a certificate to Azure App Service. However, there is a workaround way for doing it via import a certificate into Azure Key Vault and use it from Azure App Service. And for more details, please see the docuemtn list below.
1. The [`Import Certificate`](https://learn.microsoft.com/en-us/rest/api/keyvault/importcertificate) REST API of `Key Valut`. And the related Azure Python API is the method `import_certificate` from [here](https://github.com/Azure/azure-sdk-for-python/blob/61d49db3e3cc3d4821e823ce811f82b44a734b2a/azure-keyvault/azure/keyvault/key_vault_client.py#L173), you can refer to the [reference](http://azure-sdk-for-python.readthedocs.io/en/latest/sample_azure-keyvault.html) for key Vault to know how to use it.
2. There are two documents about using Key Vault certificate from Azue WebApp: [Use Azure Key Vault from a Web Application](https://learn.microsoft.com/en-us/azure/key-vault/key-vault-use-from-web-application) & [Deploying Azure Web App Certificate through Key Vault](https://blogs.msdn.microsoft.com/appserviceteam/2016/05/24/deploying-azure-web-app-certificate-through-key-vault/). The [`Create Or Update`](https://learn.microsoft.com/en-us/rest/api/appservice/certificates#Certificates_CreateOrUpdate) REST API of Certificates on Azure App Service is used for deploying, and the related Python API is [`create_or_update`](https://github.com/Azure/azure-sdk-for-python/blob/00678eb1cff3053077374dd527b6f564fd0fbb34/azure-mgmt-web/azure/mgmt/web/operations/certificates_operations.py#L236), for which usage, please refer to [here](http://azure-sdk-for-python.readthedocs.io/en/latest/resourcemanagementapps.html).
Hope it helps.
---
As Azure Python SDK reference of KeyVault for [`Access Policy`](http://azure-sdk-for-python.readthedocs.io/en/latest/sample_azure-keyvault.html#access-policies) said, as below.
>
> **Access policies**
>
>
> Some operations require the correct access policies for your credentials.
>
>
> If you get an “Unauthorized” error, please add the correct access policies to this credentials using the Azure Portal, the Azure CLI or the Key Vault Management SDK itself
>
>
>
Here are the steps for setting access policies for certificate operations via [Azure CLI](https://learn.microsoft.com/en-us/azure/xplat-cli-install).
1. Get Azure AD service principals for your application, command `azure ad sp show --search <your-application-display-name>`, then copy the `Service Principal Names`(spn) like `xxxx-xxxx-xxxxx-xxxx-xxxx`.
2. Set policy for certificate operations, command `azure keyvault set-policy brucechen --spn <your-application-spn> --perms-to-certificates <perms-to-certificates, such as [\"all\"]>`. The explanation for `<perms-to-certificates>` is below.
>
> JSON-encoded array of strings representing certificate operations; each string can be one of [all, get, list, delete, create, import, update, managecontacts, getissuers, listissuers, setissuers, deleteissuer
>
>
> | 184 |
63,482,435 | From the table below, I want to pull records with ID 1 and ID 3.
```
ID Status assigned
1 low yes
1 High no
2 low no
3 high yes
3 low yes
```
Please let me know how this can be done in python. | 2020/08/19 | [
"https://Stackoverflow.com/questions/63482435",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10214628/"
] | You can target the `lang` attribute on the blockquote tag and add a direction rule:
```
blockquote[lang="ar"] {
direction: rtl;
}
```
```css
blockquote {
background-color: #f4f7fc;
font-size: 20px;
color: #191514;
line-height: 1.7;
position: relative;
padding: 50px 30px 30px 115px;
font-family: 'Poppins', sans-serif;
clear: both;
margin: 40px 0;
overflow: hidden;
}
blockquote[lang="ar"] {
direction: rtl;
}
blockquote p {
margin-bottom: 0 !important;
}
blockquote cite {
font-style: normal;
display: block;
color: #9b6f45;
font-weight: 700;
font-size: 16px;
margin-top: 11px;
}
blockquote:before {
content: '\f10d';
font-family: "FontAwesome";
color: #d5aa6d;
font-size: 28px;
position: absolute;
left: 22px;
top: 10px;
font-style: normal;
background-image: -webkit-gradient(linear, left top, left bottom, from(#d5aa6d), to(#9b6f45));
background-image: -webkit-linear-gradient(top, #d5aa6d, #9b6f45);
background-image: -moz-linear-gradient(top, #d5aa6d, #9b6f45);
background-image: -ms-linear-gradient(top, #d5aa6d, #9b6f45);
background-image: -o-linear-gradient(top, #d5aa6d, #9b6f45);
background-image: linear-gradient(top, #d5aa6d, #9b6f45);
filter: progid:DXImageTransform.Microsoft.gradient(startColorStr='#d5aa6d', endColorStr='#9b6f45');
background-color: transparent;
background-clip: text;
-moz-background-clip: text;
-webkit-background-clip: text;
text-fill-color: transparent;
-moz-text-fill-color: transparent;
-webkit-text-fill-color: transparent;
z-index: 2;
}
blockquote[lang="ar"]:before {
content: '\f10e';
right: 22px;
left: auto;
}
blockquote:after {
content: '\f10e';
font-family: "FontAwesome";
color: #d5aa6d;
font-size: 28px;
position: absolute;
right: 22px;
bottom: 10px;
font-style: normal;
background-image: -webkit-gradient(linear, left top, left bottom, from(#d5aa6d), to(#9b6f45));
background-image: -webkit-linear-gradient(top, #d5aa6d, #9b6f45);
background-image: -moz-linear-gradient(top, #d5aa6d, #9b6f45);
background-image: -ms-linear-gradient(top, #d5aa6d, #9b6f45);
background-image: -o-linear-gradient(top, #d5aa6d, #9b6f45);
background-image: linear-gradient(top, #d5aa6d, #9b6f45);
filter: progid:DXImageTransform.Microsoft.gradient(startColorStr='#d5aa6d', endColorStr='#9b6f45');
background-color: transparent;
background-clip: text;
-moz-background-clip: text;
-webkit-background-clip: text;
text-fill-color: transparent;
-moz-text-fill-color: transparent;
-webkit-text-fill-color: transparent;
z-index: 2;
}
blockquote[lang="ar"]:after {
content: '\f10d';
right: auto;
left: 22px;
}
```
```html
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.14.0/css/all.min.css" integrity="sha512-1PKOgIY59xJ8Co8+NE6FZ+LOAZKjy+KY8iq0G4B3CyeY6wYHN3yt9PW0XpSriVlkMXe40PTKnXrLnZ9+fkDaog==" crossorigin="anonymous" />
<blockquote lang="en">
<ul>
<li>This is in english</li>
</ul>
</blockquote>
<blockquote lang="ar">
<ul>
<li>هذا باللغة العربية</li>
</ul>
</blockquote>
``` | add class to blockquote element, and set the class styling direction attribute to rtl | 185 |
73,581,339 | I want to show status every second in a very slow loop in python code, e.g.
```
for i in range(100):
    sleep(1000000)  # think there is a very slow job
    # I want to show status in console every second
    # to know if the job stops or not
```
The output image is, e.g.
```
$ python somejob.py
> 2022-09-02 13:04:10 | Status: running...
```
and the output updates every second, e.g.
```
$ python somejob.py
> 2022-09-02 13:04:11 | Status: running...
```
```
$ python somejob.py
> 2022-09-02 13:04:12 | Status: running...
```
```
$ python somejob.py
> 2022-09-02 13:04:13 | Status: running...
```
Any idea will be helpful. Thx!!! | 2022/09/02 | [
"https://Stackoverflow.com/questions/73581339",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6766052/"
] | I think what you're looking for is something like the tqdm library: [github repo](https://github.com/tqdm/tqdm)
for example
```
from tqdm import tqdm
for i in tqdm(range(1000)):
    continue  # do something complex here
``` | You may use the [rich module](https://pypi.org/project/rich/) to display a progress bar:
```
import time
from rich.progress import track
for i in track(range(100)):
    time.sleep(0.5)
```
Here's a screenshot within the run:
[![enter image description here](https://i.stack.imgur.com/hrEhp.png)](https://i.stack.imgur.com/hrEhp.png) | 186 |
64,327,172 | I am running a django app with a postgreSQL database and I am trying to send a very large dictionary (consisting of time-series data) to the database.
My goal is to write my data into the DB as fast as possible. I am using the library requests to send the data via an API-call (built with django REST):
My API-view is simple:
```
@api_view(["POST"])
def CreateDummy(request):
    for elem, ts in request.data['time_series'].items():
        TimeSeries.objects.create(data_json=ts)
    msg = {"detail": "Created successfully"}
    return Response(msg, status=status.HTTP_201_CREATED)
```
`request.data['time_series']` is a huge dictionary structured like this:
```
{Building1: {1:123, 2: 345, 4:567 .... 31536000: 2345}, .... Building30: {..... }}
```
That means I am having **30 keys with 30 values, whereas the values are each a dict with 31536000 elements.**
My API request looks like this (where data is my dictionary described above):
```
payload = {
    "time_series": data,
}
requests.request(
    "post", url=endpoint, json=payload
)
```
The code saves the time-series data to a jsonb-field in the backend. Now that works if I only loop over the first 4 elements of the dictionary. I can get that data in in about 1minute. But when I loop over the whole dict, my development server shuts down. I guess it's because the memory is insufficient. I get a `requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))`. Is the whole dict saved to memory before it starts iterating? I doubt it because I read that in python3 looping with `.items()` returns an iterator and is the preferred way to do this.
Is there a better way to deal with massive dicts in django/python? Should I loop through half of it and then through the other half? Or is there a faster way? Maybe using `pandas`? Or maybe sending the data differently? I guess I am looking for the most performant way to do this.
Happy to provide more code if needed.
Any help, hints or guides are very much appreciated! Thanks in advance
EDIT2: I think it is not my RAM usage or the size of the dict. I still have 5GiB of RAM left when the server shuts down. ~~And the size of the dict is 1176bytes~~ *Dict is much larger, see comments*
EDIT3: I can't even print the huge dict. It also shuts down then
EDIT4: When split the data up and send it not all at once the server can handle it. But when I try to query it back the server breaks again. It breaks on my production server (nginx AWS RDS setup) and it breaks on my local dev server. I am pretty sure it's because django can't handle queries that big with my current setup. But how could I solve this?
EDIT5: So what I am looking for is a two part solution. One for the creation of the data and one for the querying of the data. The creation of the data I described above. But even if I get all that data into the database, I will still have problems getting it out again.
I tried this by creating the data not all together but every time-series on its own. So let's assume I have this huge data in my DB and I try to query it back. All time-series objects belong to a network so I tried this like so:
```
class TimeSeriesByTypeAndCreationMethod(ListAPIView):
    """Query time-series in specific network."""

    serializer_class = TimeSeriesSerializer

    def get_queryset(self):
        """Query time-series

        Query by name of network, type of data, creation method and
        source.
        """
        network = self.kwargs["name_network"]
        if TimeSeries.objects.filter(
            network_element__network__name=network,
        ).exists():
            time_series = TimeSeries.objects.filter(
                network_element__network__name=network,
            )
            return time_series
        else:
            raise NotFound()
```
But the query breaks the server like the data creation before. I think also this is too much data load. I thought I could use raw sql avoid breaking the server... Or is there also a better way?
EDIT6: Relevant models:
```
class TimeSeries(models.Model):
    TYPE_DATA_CHOICES = [
        ....many choices...
    ]
    CREATION_METHOD_CHOICES = [
        ....many choices...
    ]
    description = models.CharField(
        max_length=120,
        null=True,
        blank=True,
    )
    network_element = models.ForeignKey(
        Building,
        on_delete=models.CASCADE,
        null=True,
        blank=True,
    )
    type_data = models.CharField(
        null=True,
        blank=True,
        max_length=30,
        choices=TYPE_DATA_CHOICES,
    )
    creation_method = models.CharField(
        null=True,
        blank=True,
        max_length=30,
        choices=CREATION_METHOD_CHOICES,
    )
    source = models.CharField(
        null=True,
        blank=True,
        max_length=300
    )
    data_json = JSONField(
        help_text="Data for time series in JSON format. Valid JSON expected."
    )
    creation_date = models.DateTimeField(auto_now=True, null=True, blank=True)

    def __str__(self):
        return f"{self.creation_method}:{self.type_data}"
class Building(models.Model):
    USAGE_CHOICES = [
        ...
    ]
    name = models.CharField(
        max_length=120,
        null=True,
        blank=True,
    )
    street = models.CharField(
        max_length=120,
        null=True,
        blank=True,
    )
    house_number = models.CharField(
        max_length=20,
        null=True,
        blank=True,
    )
    zip_code = models.CharField(
        max_length=5,
        null=True,
        blank=True,
    )
    city = models.CharField(
        max_length=120,
        null=True,
        blank=True,
    )
    usage = models.CharField(
        max_length=120,
        choices=USAGE_CHOICES,
        null=True,
        blank=True,
    )
    .....many more fields....
``` | 2020/10/13 | [
"https://Stackoverflow.com/questions/64327172",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9893391/"
] | You can solve your issues using two techniques.
Data Creation
-------------
Use bulk\_create to insert a large number of records; if an SQL error happens due to a large query size etc., then provide the `batch_size` in `bulk_create`.
```
records = []
for elem, ts in request.data['time_series'].items():
    records.append(
        TimeSeries(data_json=ts)
    )
# setting batch size to 1000
TimeSeries.objects.bulk_create(records, batch_size=1000)
```
There are some caveats with bulk\_create, e.g. it will not send signals; see more in the [Doc](https://docs.djangoproject.com/en/3.1/ref/models/querysets/#bulk-create)
Data Retrieval
--------------
Configure REST framework to use pagination; the **default configuration** is:
```
REST_FRAMEWORK = {
    'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.LimitOffsetPagination',
    'PAGE_SIZE': 100
}
```
For custom configuration use
```
class TimeSeriesResultsSetPagination(PageNumberPagination):
    page_size = 50
    page_size_query_param = 'page_size'
    max_page_size = 10000

class BillingRecordsView(generics.ListAPIView):
    serializer_class = TimeSeriesSerializer
    pagination_class = TimeSeriesResultsSetPagination

    def get_queryset(self):
        """Query time-series

        Query by name of network, type of data, creation method and
        source.
        """
        network = self.kwargs["name_network"]
        if TimeSeries.objects.filter(
            network_element__network__name=network,
        ).exists():
            time_series = TimeSeries.objects.filter(
                network_element__network__name=network,
            )
            return time_series
        else:
            raise NotFound()
```
See other techniques for pagination at <https://www.django-rest-framework.org/api-guide/pagination/> | @micromegas While your solution is theoretically correct, calling create() many times in a loop is, I believe, what causes the ConnectionError exception.
try to refactor to something like:
```
big_data_holder = []
for elem, ts in request.data['time_series'].items():
    big_data_holder.append(
        TimeSeries(data_json=ts)
    )
# examine the structure
print(big_data_holder)
TimeSeries.objects.bulk_create(big_data_holder)
```
please check the docs for some downsides of this method:
[Django Docs bulk\_create](https://docs.djangoproject.com/en/3.1/ref/models/querysets/#bulk-create) | 187 |
46,145,221 | what is different between `os.path.getsize(path)` and `os.stat`? which one is best to used in python 3? and when do we use them? and why we have two same solution?
I found [this](https://stackoverflow.com/questions/18962166/python-os-statfile-name-st-size-versus-os-path-getsizefile-name) answer but I couldn't understand what this quote means:
>
> From this, it seems pretty clear that there is no reason to expect the two approaches to behave differently (except perhaps due to the different structures of the loops in your code)
>
>
>
specifically why we have two approach and what is there different? | 2017/09/10 | [
"https://Stackoverflow.com/questions/46145221",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4958447/"
] | `stat` is a POSIX system call (available on Linux, Unix and even Windows) which returns a bunch of information (size, type, protection bits...)
Python has to call it at some point to get the size ([and it does](https://stackoverflow.com/questions/18962166/python-os-statfile-name-st-size-versus-os-path-getsizefile-name)), but there's no system call to get *only* the size.
So they're the same performance-wise (maybe faster with `stat` but that's only 1 more function call so not I/O related). It's just that `os.path.getsize` is simpler to write.
That said, to be able to call `os.path.getsize` you have to make sure that the path is actually a *file*. When called on a directory, `getsize` returns some value (tested on Windows) which is probably related to the size of the node, so you have to use `os.path.isfile` first: another call to `os.stat`.
In the end, if you want to maximize performance, you have to use `os.stat`, check infos to see if path is a file, then use the `st_size` information. That way you're calling `stat` only once.
If you're using `os.walk` to scan the directory, you're exposed to more hidden `stat` calls, so look into `os.scandir` (Python 3.5).
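A minimal sketch of the single-`stat` approach described above (the helper name is illustrative, not from the original answer):

```python
import os
import stat

def regular_file_size(path):
    """Return the size of a regular file using one os.stat call, or None for non-files."""
    st = os.stat(path)
    return st.st_size if stat.S_ISREG(st.st_mode) else None
```

This avoids the second `stat` that a separate `os.path.isfile` check would trigger.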
Related:
* [Faster way to find large files with Python?](https://stackoverflow.com/questions/46144952/faster-way-to-find-large-files-with-python/46145070#46145070)
* [Python os.stat(file\_name).st\_size versus os.path.getsize(file\_name)](https://stackoverflow.com/questions/18962166/python-os-statfile-name-st-size-versus-os-path-getsizefile-name) looks like a duplicate but the question (and answer) is different | The answer you are linking to shows that one simply calls the other:
```
def getsize(filename):
"""Return the size of a file, reported by os.stat()."""
return os.stat(filename).st_size
```
so fundamentally, both functions are using `os.stat`.
Why? probably because they had similar needs in two different packages, `path` and `stat`, and didn't want to duplicate code. | 188 |
68,856,582 | Is there a similar substitute to `.exit()` and `sys.exit()` that stops the program from running **but without terminating python entirely**?
Here's something similar to what I want to achieve:
```
import random
my_num = random.uniform(0, 1)
if my_num > 0.9:
# stop the code here
# some other huge blocks of codes
```
Here's why I think I need to find such a command/function:
1. I want the code to run automatically so definitely not "Ctrl+C"
2. I don't want python to terminate because I want to check other previously defined variables
3. I think `else` does not work well because there will be a huge amount of other codes after the condition check and there will be other .py to be running by `os.system()`
4. Of course, force-triggering an error message would do, but is that the only way? | 2021/08/20 | [
"https://Stackoverflow.com/questions/68856582",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14610650/"
] | When you run your script, use the `-i` option. Then call `sys.exit()` where you want to stop.
```
python3 -i myscript.py
```
```py
if my_num > 0.9:
    sys.exit()
```
Python won't actually exit when the `-i` option is used. It will instead place you in the REPL prompt.
---
The next best method, if you can't use the `-i` option, is to enter an emulated REPL provided by the `code` module.
```py
import sys
import code
import random
import readline
while True:
    my_num = random.uniform(0, 1)
    if my_num > 0.9:
        console = code.InteractiveConsole(globals())
        console.interact(banner="You are now in Python REPL. ^D exits.",
                         exitmsg="Bye!")
        break
```
That will start a REPL that is not the built-in one, but one written in Python itself. | If you don't want to terminate the code, you can tell python to "sleep":
```
import random
import time
my_num = random.uniform(0, 1)
if my_num > 0.9:
    time.sleep(50)  # 50 seconds. Use any number.
``` | 189 |
15,661,841 | Is there any video tutorial or book from which I can learn Python web programming on the Django platform in Eclipse (PyDev)? Please help | 2013/03/27 | [
"https://Stackoverflow.com/questions/15661841",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2183898/"
] | Try [The Django Book](http://www.djangobook.com/en/2.0/index.html), or start with the [tutorial](https://docs.djangoproject.com/en/1.5/intro/tutorial01/). | <http://pydev.org/manual_adv_django.html> should get you started. If you're new to eclipse, I would find a tutorial on that first as they have a lot of their own lingo. | 190 |
38,219,216 | I'm using `python` to crawl a webpage and save it. And the code works properly. But when I open the web page it just shows the website name i.e., **<http://www.indiabix.com>** and not the actual content.
You can just go to the website and save one of its pages, **NOT** the homepage but another page like **<http://www.indiabix.com/database/questions-and-answers/>**. And when you open it, the page just shows this
[![enter image description here](https://i.stack.imgur.com/iw4w7.png)](https://i.stack.imgur.com/iw4w7.png)
and not this
[![enter image description here](https://i.stack.imgur.com/xPspu.png)](https://i.stack.imgur.com/xPspu.png)
The code I've written is simple
```
def writeToFile(link, name, title):
    response = urllib2.urlopen(link)
    webContent = response.read()
    f = open(name + '/' + title, 'w')
    f.write(webContent)
    f.close()
```
You just pass the link, directory name and title of file.
I have checked in Chrome, Firefox and Safari and all show the same output. How can I resolve this issue to display the entire saved page fully?
Thank you. | 2016/07/06 | [
"https://Stackoverflow.com/questions/38219216",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3620992/"
] | in this version
```
implementation 'com.github.PhilJay:MPAndroidChart:v3.0.3'
```
try it
```
public class MainActivity extends AppCompatActivity {
private LineChart lc;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
initView();
initData();
}
public int ran() {
Random ran = new Random();
int i = ran.nextInt(199);
return i;
}
public int ran2() {
Random ran = new Random();
int i = ran.nextInt(49);
return i;
}
public void initData() {
lc.setExtraOffsets(12,50,24,0); //padding
setDescription("two lines example");
lc.animateXY(500, 0);
setLegend();
setYAxis();
setXAxis();
setChartData();
}
public void setLegend() {
Legend legend = lc.getLegend();
legend.setForm(Legend.LegendForm.LINE);
legend.setFormSize(20);
legend.setTextSize(20f);
legend.setFormLineWidth(1);
legend.setHorizontalAlignment(Legend.LegendHorizontalAlignment.CENTER);
legend.setTextColor(Color.BLACK);
}
public void setDescription(String descriptionStr) {
Description description = new Description();
description.setText(descriptionStr);
WindowManager wm = (WindowManager) getSystemService(Context.WINDOW_SERVICE);
DisplayMetrics outMetrics = new DisplayMetrics();
wm.getDefaultDisplay().getMetrics(outMetrics);
Paint paint = new Paint();
paint.setTextSize(20);
float x = outMetrics.widthPixels - Utils.convertDpToPixel(12);
float y = Utils.calcTextHeight(paint, descriptionStr) + Utils.convertDpToPixel(12);
description.setPosition(x, y);
lc.setDescription(description);
}
public void setYAxis() {
final YAxis yAxisLeft = lc.getAxisLeft();
yAxisLeft.setAxisMaximum(200);
yAxisLeft.setAxisMinimum(0);
yAxisLeft.setGranularity(10);
yAxisLeft.setTextSize(12f);
yAxisLeft.setTextColor(Color.BLACK);
yAxisLeft.setValueFormatter(new IAxisValueFormatter() {
@Override
public String getFormattedValue(float value, AxisBase axis) {
return value == yAxisLeft.getAxisMinimum() ? (int) value + "" : (int) value +"";
}
});
lc.getAxisRight().setEnabled(false);
}
public void setXAxis() {
XAxis xAxis = lc.getXAxis();
xAxis.setPosition(XAxis.XAxisPosition.BOTTOM);
xAxis.setDrawGridLines(false);
xAxis.setLabelCount(20);
xAxis.setTextColor(Color.BLACK);
xAxis.setTextSize(12f);
xAxis.setGranularity(1);
xAxis.setAxisMinimum(0);
xAxis.setAxisMaximum(100);
xAxis.setValueFormatter(new IAxisValueFormatter() {
@Override
public String getFormattedValue(float value, AxisBase axis) {
return value == 0 ? "example" : (int) value + "";
}
});
}
public void setChartData() {
List<Entry> yVals1 = new ArrayList<>();
for (int i = 0; i < 100; i++) {
int j = ran();
yVals1.add(new Entry(1 + i,j));
}
List<Entry> yVals2 = new ArrayList<>();
for (int i = 0; i < 100; i++) {
int j = ran2();
yVals2.add(new Entry(1 + i,j));
}
LineDataSet lineDataSet1 = new LineDataSet(yVals1, "ex1");
lineDataSet1.setValueTextSize(20);
lineDataSet1.setDrawCircleHole(true);
lineDataSet1.setColor(Color.MAGENTA);
lineDataSet1.setMode(LineDataSet.Mode.LINEAR);
lineDataSet1.setDrawCircles(true);
lineDataSet1.setCubicIntensity(0.15f);
lineDataSet1.setCircleColor(Color.MAGENTA);
lineDataSet1.setLineWidth(1);
LineDataSet lineDataSet2 = new LineDataSet(yVals2, "ex2");
lineDataSet2.setValueTextSize(20);
lineDataSet2.setDrawCircleHole(true);
lineDataSet2.setColor(Color.BLUE);
lineDataSet2.setMode(LineDataSet.Mode.LINEAR);
lineDataSet2.setDrawCircles(true);
lineDataSet2.setCubicIntensity(0.15f);
lineDataSet2.setCircleColor(Color.BLUE);
lineDataSet2.setLineWidth(1);
.
.
.
ArrayList<ILineDataSet> dataSets = new ArrayList<ILineDataSet>();
dataSets.add(lineDataSet1);
dataSets.add(lineDataSet2);
LineData lineData = new LineData(dataSets);
lc.setVisibleXRangeMaximum(5);
lc.setScaleXEnabled(true);
lc.setData(lineData);
}
```
and like this.
![Image](https://i.stack.imgur.com/P3ben.jpg) | Version 3.0 is initialized like so:
```
LineChart lineChart = new LineChart(context);
lineChart.setMinimumHeight(ToolBox.dpToPixels(context, 300));
lineChart.setMinimumWidth(ToolBox.getScreenWidth());
ArrayList<Entry> yVals = new ArrayList<>();
for(int i = 0; i < frigbot.getEquipment().getTemperatures().size(); i++)
{
Temperature temperature = frigbot.getEquipment().getTemperatures().get(i);
yVals.add(new Entry(
i, temperature.getValue().floatValue()
));
}
LineDataSet dataSet = new LineDataSet(yVals, "graph name");
dataSet.setMode(LineDataSet.Mode.CUBIC_BEZIER);
dataSet.setCubicIntensity(0.2f);
LineData data = new LineData(dataSet);
lineChart.setData(data);
```
It appears we can't specify custom horizontal labels; LineChart itself will automatically generate the horizontal and vertical axis labelling. | 192 |
71,153,492 | I'm having multiple errors while running this VGG training code (code and errors shown below). I don't know if it's because of my dataset or something else.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.metrics.pairwise import cosine_similarity
import os
import scipy

train_directory = 'sign_data/train' #To be changed
test_directory = 'sign_data/test' #To be changed

train_datagen = ImageDataGenerator(
    rescale = 1./255,
    rotation_range = 0.1,
    width_shift_range = 0.2,
    height_shift_range = 0.2,
    shear_range = 0.1
)
train_generator = train_datagen.flow_from_directory(
    train_directory,
    target_size = (224, 224),
    color_mode = 'rgb',
    shuffle = True,
    batch_size=32
)
test_datagen = ImageDataGenerator(
    rescale = 1./255,
)
test_generator = test_datagen.flow_from_directory(
    test_directory,
    target_size = (224, 224),
    color_mode = 'rgb',
    shuffle = True,
    batch_size=32
)

from tensorflow.keras.applications.vgg16 import VGG16
vgg_basemodel = VGG16(include_top=True)

from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping
early_stopping = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=5)

vgg_model = tf.keras.Sequential(vgg_basemodel.layers[:-1])
vgg_model.add(tf.keras.layers.Dense(10, activation = 'softmax'))

# Freezing original layers
for layer in vgg_model.layers[:-1]:
    layer.trainable = False

vgg_model.compile(loss='categorical_crossentropy',
                  optimizer=tf.keras.optimizers.SGD(momentum=0.9, learning_rate=0.001, decay=0.01),
                  metrics=['accuracy'])

history = vgg_model.fit(train_generator,
                        epochs=30,
                        batch_size=64,
                        validation_data=test_generator,
                        callbacks=[early_stopping])

# finetuning with all layers set trainable
for layer in vgg_model.layers:
    layer.trainable = True

vgg_model.compile(loss='categorical_crossentropy',
                  optimizer=tf.keras.optimizers.SGD(momentum=0.9, lr=0.0001),
                  metrics=['accuracy'])

history2 = vgg_model.fit(train_generator,
                         epochs=5,
                         batch_size=64,
                         validation_data=test_generator,
                         callbacks=[early_stopping])

vgg_model.save('saved_models/vgg_finetuned_model')
```
First error: Invalid Argument Error
```
InvalidArgumentError Traceback (most recent call last)
<ipython-input-13-292bf57ef59f> in <module>()
14 batch_size=64,
15 validation_data=test_generator,
---> 16 callbacks=[early_stopping])
17
18 # finetuning with all layers set trainable
/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
53 ctx.ensure_initialized()
54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 55 inputs, attrs, num_outputs)
56 except core._NotOkStatusException as e:
57 if name is not None:
```
Second Error: Graph Execution Error
```
InvalidArgumentError: Graph execution error:
Detected at node 'categorical_crossentropy/softmax_cross_entropy_with_logits' defined at (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "/usr/local/lib/python3.7/dist-packages/traitlets/config/application.py", line 846, in launch_instance
app.start()
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelapp.py", line 499, in start
self.io_loop.start()
File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 132, in start
self.asyncio_loop.run_forever()
File "/usr/lib/python3.7/asyncio/base_events.py", line 541, in run_forever
self._run_once()
File "/usr/lib/python3.7/asyncio/base_events.py", line 1786, in _run_once
handle._run()
File "/usr/lib/python3.7/asyncio/events.py", line 88, in _run
self._context.run(self._callback, *self._args)
File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 122, in _handle_events
handler_func(fileobj, events)
File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 452, in _handle_events
self._handle_recv()
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 481, in _handle_recv
self._run_callback(callback, msg)
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 431, in _run_callback
callback(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 283, in dispatcher
return self.dispatch_shell(stream, msg)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 233, in dispatch_shell
handler(stream, idents, msg)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 399, in execute_request
user_expressions, allow_stdin)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/ipkernel.py", line 208, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/zmqshell.py", line 537, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2718, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2822, in run_ast_nodes
if self.run_code(code, result):
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-13-292bf57ef59f>", line 16, in <module>
callbacks=[early_stopping])
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1384, in fit
tmp_logs = self.train_function(iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 860, in train_step
loss = self.compute_loss(x, y, y_pred, sample_weight)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 919, in compute_loss
y, y_pred, sample_weight, regularization_losses=self.losses)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/compile_utils.py", line 201, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 141, in __call__
losses = call_fn(y_true, y_pred)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 245, in call
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 1790, in categorical_crossentropy
y_true, y_pred, from_logits=from_logits, axis=axis)
File "/usr/local/lib/python3.7/dist-packages/keras/backend.py", line 5099, in categorical_crossentropy
labels=target, logits=output, axis=axis)
Node: 'categorical_crossentropy/softmax_cross_entropy_with_logits'
logits and labels must be broadcastable: logits_size=[32,10] labels_size=[32,128]
[[{{node categorical_crossentropy/softmax_cross_entropy_with_logits}}]] [Op:__inference_train_function_11227]
```
I'm running this on google colaboratory. Is there a module that I should install? Or is it purely an error on the code itself? | 2022/02/17 | [
"https://Stackoverflow.com/questions/71153492",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15336528/"
] | I faced the same error and tried everything, to no avail, until I heard that you have to make the number of **folders** in the **dataset** the SAME as the number of units in `Dense`.
I don't know if this will solve your specific bug or not but try this with your code:
```
vgg_model.add(tf.keras.layers.Dense(10, activation = 'softmax'))
```
Replace `10` with the number of training dataset folders, i.e. the number of classes. | Check the image size. The image size defined in model.add(.., input_shape=(100,100,3)) should be the same as the **target_size=(100,100) in train_generator.**
And also check whether the number of neurons in the last dense layer is equal to the number of output classes.
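The checks from both answers can be expressed as a small framework-free sanity check (the helper and its names are illustrative assumptions, not from the original answers):

```python
def check_classifier_config(input_shape, target_size, dense_units, class_folders):
    """Verify that image sizes match and the output layer fits the number of classes."""
    if tuple(input_shape[:2]) != tuple(target_size):
        raise ValueError(f"input_shape {input_shape[:2]} does not match target_size {target_size}")
    if dense_units != len(class_folders):
        raise ValueError(f"Dense({dense_units}) does not match {len(class_folders)} class folders")
    return True
```

For the error in the question (`logits_size=[32,10] labels_size=[32,128]`), such a check would flag a `Dense(10)` head trained against a generator that found 128 class folders.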
By the way, there isn't any need to install any other module. It is some error in code. | 193 |
20,893,752 | I started trying to make a script to send emails using python, but nothing worked. I eventually got to the point where I just started copying and pasting email scripts and filling in my info. Still nothing worked. So I eventually just got rid of everything except this:
```
#!/usr/bin/python
import smtplib
```
This still did not work. Can someone explain to me why this doesn't work? I'm sure it's really simple. I'm using Mac OS X 10.9, if that makes a difference. Here is my error:
```
Traceback (most recent call last):
File "the_email.py", line 2, in <module>
import smtplib
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/smtplib.py", line 46, in <module>
import email.utils
ImportError: No module named utils
``` | 2014/01/02 | [
"https://Stackoverflow.com/questions/20893752",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2402862/"
] | Change the name of your script from `email.py` to something else. It is interfering with the Python standard library module of the same name, `email`. | Read this: [Syntax: python smtplib not working in script](https://stackoverflow.com/questions/14102113/syntax-python-smtplib-not-working-in-script)
A user there says that you have to remove (or rename) the `email.py` file from the folder. |
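Either way, a quick check to confirm this kind of shadowing from the interpreter (the project path mentioned in the comment is hypothetical):

```python
import email

# If this prints a path inside your own project instead of the standard
# library (e.g. .../my_project/email.py), your script is shadowing the
# stdlib module: rename it and delete any leftover email.pyc.
print(email.__file__)
```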
28,262,400 | I am changing the original post to a memory leak, as I have observed that the Cassandra Python driver does not release sessions from memory. During heavy inserts it eats up all the memory (thus crashing Cassandra, as there is not enough room left for GC).
This was raised earlier, but I see the issue in the latest drivers as well.
<https://github.com/datastax/python-driver/pull/131>
```
In [2]: cassandra.__version__
Out[2]: '2.1.4'

class SimpleClient(object):
    session = None

    def connect(self, nodes):
        cluster = Cluster(nodes)
        metadata = cluster.metadata
        self.session = cluster.connect()
        logging.info('Connected to cluster: ' + metadata.cluster_name)
        for host in metadata.all_hosts():
            logging.info('Datacenter: %s; Host: %s; Rack: %s', host.datacenter, host.address, host.rack)
            print("Datacenter: %s; Host: %s; Rack: %s" % (host.datacenter, host.address, host.rack))

    def close(self):
        self.session.cluster.shutdown()
        logging.info('Connection closed.')

def main():
    logging.basicConfig()
    client = SimpleClient()
    client.connect(['127.0.0.1'])
    client.close()

if __name__ == "__main__":
    count = 0
    while count != 1:
        main()
        time.sleep(1)
```
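The loop above tears down and rebuilds the whole cluster connection on every iteration. As a generic illustration (plain Python, no Cassandra required; `make_session` is a hypothetical stand-in for `Cluster(...).connect()`), reusing one long-lived instance looks like this:

```python
class ConnectionHolder(object):
    """Create an expensive resource once and hand out the same instance."""
    _instance = None

    @classmethod
    def get(cls, factory):
        if cls._instance is None:
            cls._instance = factory()
        return cls._instance

calls = []

def make_session():
    calls.append(1)          # count how many times we really "connect"
    return object()

s1 = ConnectionHolder.get(make_session)
s2 = ConnectionHolder.get(make_session)
assert s1 is s2 and len(calls) == 1   # the factory ran only once
```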
If anyone has found a solution to this, please share. | 2015/02/01 | [
"https://Stackoverflow.com/questions/28262400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4460263/"
] | Calling `id.Hex()` will return a string representation of the `bson.ObjectId`.
This is also the default behavior if you try to marshal a `bson.ObjectId` to a JSON string. | Something like this seems to work: [playground](https://play.golang.org/p/1LG1NlFEK-)
Just define the dot `.` context for your template:
```
{{ .Name }} {{ .Food }}
<a href="/remove/{{ .Id }}">Remove me</a>
``` | 197 |
33,426,483 | I have created my Rails app on OpenShift. It uses Python and a package installed from pip. How do I upgrade to a newer Python version (currently it is 2.6)?
Visible cartridges:
```
user@debian:~$ rhc cartridges
jbossas-7 JBoss Application Server 7 web
jboss-dv-6.1.0 (!) JBoss Data Virtualization 6 web
jbosseap-6 (*) JBoss Enterprise Application Platform 6 web
jboss-unified-push-1 (!) JBoss Unified Push Server 1.0.0.Beta1 web
jboss-unified-push-2 (!) JBoss Unified Push Server 1.0.0.Beta2 web
jenkins-1 Jenkins Server web
nodejs-0.10 Node.js 0.10 web
perl-5.10 Perl 5.10 web
php-5.3 PHP 5.3 web
php-5.4 PHP 5.4 web
zend-6.1 PHP 5.4 with Zend Server 6.1 web
python-2.6 Python 2.6 web
python-2.7 Python 2.7 web
python-3.3 Python 3.3 web
ruby-1.8 Ruby 1.8 web
ruby-1.9 Ruby 1.9 web
ruby-2.0 Ruby 2.0 web
jbossews-1.0 Tomcat 6 (JBoss EWS 1.0) web
jbossews-2.0 Tomcat 7 (JBoss EWS 2.0) web
jboss-vertx-2.1 (!) Vert.x 2.1 web
jboss-wildfly-8 (!) WildFly Application Server 8.2.1.Final web
jboss-wildfly-9 (!) WildFly Application Server 9 web
diy-0.1 Do-It-Yourself 0.1 web
cron-1.4 Cron 1.4 addon
jenkins-client-1 Jenkins Client addon
mongodb-2.4 MongoDB 2.4 addon
mysql-5.1 MySQL 5.1 addon
mysql-5.5 MySQL 5.5 addon
phpmyadmin-4 phpMyAdmin 4.0 addon
postgresql-8.4 PostgreSQL 8.4 addon
postgresql-9.2 PostgreSQL 9.2 addon
rockmongo-1.1 RockMongo 1.1 addon
switchyard-0 SwitchYard 0.8.0 addon
haproxy-1.4 Web Load Balancer addon
Note: Web cartridges can only be added to new applications.
(*) denotes a cartridge with additional usage costs.
(!) denotes a cartridge that will not receive automatic security updates.
```
And then trying to install a newer Python ...
```
user@debian:~$ rhc add-cartridge --app myappname python-3.3
Short Name Full name
========== =========
cron-1.4 Cron 1.4
jenkins-client-1 Jenkins Client
mongodb-2.4 MongoDB 2.4
mysql-5.1 MySQL 5.1
mysql-5.5 MySQL 5.5
phpmyadmin-4 phpMyAdmin 4.0
postgresql-8.4 PostgreSQL 8.4
postgresql-9.2 PostgreSQL 9.2
rockmongo-1.1 RockMongo 1.1
switchyard-0 SwitchYard 0.8.0
haproxy-1.4 Web Load Balancer
There are no cartridges that match 'python-3.3'.
```
If it's possible to install a newer version of Python, how do I install PIP? | 2015/10/29 | [
"https://Stackoverflow.com/questions/33426483",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1906809/"
] | If you have installed phpMyAdmin on your Linux server (CentOS/RHEL/Debian) and tried to access it, in most cases you will get this 403 Forbidden error. I have seen this issue very often when phpMyAdmin is installed using yum or apt-get. By default the phpMyAdmin install path is **/usr/share/phpmyadmin** and the Apache configuration file is located at **/etc/httpd/conf.d/phpmyadmin.conf**.
Forbidden
You don't have permission to access **/phpmyadmin/** on this server.
To fix:
```
nano /etc/httpd/conf.d/phpmyadmin.conf
```
Remove or comment out the first two lines below.
>
>
> ```
> #Order Allow,Deny
> #Deny from all
>
> ```
>
>
```
Allow from 127.0.0.1
```
Restart the apache server.
```
service httpd restart
``` | I was running into the same issue with a new install of Fedora 25, Apache, MariaDB and PHP.
The router is on 192.168.1.1 and the Fedora 25 server is sitting at 192.168.1.100, which is a static address handed out by the router. The laptop was getting a random IP in the range of 192.168.1.101 to 150.
The change I made to /etc/httpd/conf.d/phpMyAdmin.conf was to change instances of
```
Require ip 127.0.0.1
```
to
```
Require ip 127.0.0.1 192.168.1.1/24
```
This worked for me. The idea came from the process of inserting the ip address of the laptop into the .conf file behind the reference to 127.0.0.1 and I was able to get access.
So instead of doing the more secure thing of handing out a static IP address to the laptop, I left the phpMyAdmin.conf file open to a range of IP addresses on the local subnet, if that is the right terminology.
If there are drawbacks to doing this let me know so that I can make the appropriate changes. | 200 |
19,223,676 | I'm using passenger with apache to run my ruby application. I've noticed that passenger crashes from time to time (apache is still working), and I need to manually restart apache to make it work again.
A look at the log makes me think it occurs when apache rotates the log file (archives the current one and creates a new one). This is what a `tail -F` on the apache error log file looks like:
```
tail: ‘/var/log/apache2/error.log’ has become inaccessible: No such file or directory
tail: ‘/var/log/apache2/error.log’ has appeared; following end of new file
[ 2013-10-06 05:05:27.2678 10498/7f3f0cf82740 agents/Watchdog/Main.cpp:459 ]: Options: { 'analytics_log_user' => 'nobody', 'default_group' => 'nogroup', 'default_python' => 'python', 'default_ruby' => '/usr/bin/ruby1.9.1', 'default_user' => 'nobody', 'log_level' => '0', 'max_instances_per_app' => '0', 'max_pool_size' => '6', 'passenger_root' => '/var/lib/gems/1.9.1/gems/passenger-4.0.14', 'pool_idle_time' => '300', 'temp_dir' => '/tmp', 'union_station_gateway_address' => 'gateway.unionstationapp.com', 'union_station_gateway_port' => '443', 'user_switching' => 'true', 'web_server_pid' => '18659', 'web_server_type' => 'apache', 'web_server_worker_gid' => '1000', 'web_server_worker_uid' => '1001' }
[Sun Oct 06 05:05:27 2013] [error] *** Passenger could not be initialized because of this error: Unable to start the Phusion Passenger watchdog because it encountered the following error during startup: Tried to reuse existing server instance directory /tmp/passenger.1.0.18659, but it has wrong permissions
[Sun Oct 06 05:05:27 2013] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.9-4ubuntu2.2 Phusion_Passenger/4.0.14 configured -- resuming normal operations
```
* The message mentions a file in `/tmp` with wrong permissions. Why are they wrong? What should they be? How do I make them right?
* The last message "*resuming normal operations*" seems wrong too since passenger is down. Is it a bug? What does it mean?
* What should I do to prevent this from happening? | 2013/10/07 | [
"https://Stackoverflow.com/questions/19223676",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/149237/"
] | Ah, I see that you are on version 4.0.14. Please upgrade to the latest version, which is 4.0.20. Versions prior to 4.0.17 or so didn't properly support /tmp directories with the setgid flag. | In my case, restarting Apache solved this problem.
```
$ /etc/init.d/httpd stop
$ /etc/init.d/httpd start
``` | 201 |
4,658,008 | I have a rather long setup, then three questions at the end.
On OS X, the System Python framework contains three executables (let me give them short names):
```
> F=/System/Library/Frameworks/Python.framework/Versions/2.6
> A=$F/bin/python2.6
> B=$F/Resources/Python.app/Contents/MacOS/Python
> C=$F/Python
```
$A and $B are clearly too small to be Python itself.
```
> ls -s $A; ls -s $B; ls -s $C
16 /System/Library/Frameworks/Python.framework/Versions/2.6/bin/python2.6
16 /System/Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/MacOS/Python
3152 /System/Library/Frameworks/Python.framework/Versions/2.6/Python
> $A
Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49)
> $B
Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49)
> $C
-bash: /System/Library/Frameworks/Python.framework/Versions/2.6/Python: cannot execute binary file
```
Despite equal size and, apparently, equal effect, the first two are different, e.g.:
```
> cmp -lc $A $B
```
Also, in /usr/bin, python2.6 is a symlink to $C, but there is also:
```
> D=/usr/bin/python
> ls -s $D
48 /usr/bin/python
```
I want to sort out how these are connected; the command `which` doesn't help.
```
> export DYLD_PRINT_LIBRARIES=1
> $A
..
dyld: loaded: /System/Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/MacOS/Python
dyld: loaded: /System/Library/Frameworks/Python.framework/Versions/2.6/Python
```
Summary: $A loads $B followed by $C; $B loads $C; $D loads $B followed by $C
So my questions are:
1. Is this documented anywhere?
2. What roles do these play?
3. Most important, what tools would be useful in tracing connections like this? | 2011/01/11 | [
"https://Stackoverflow.com/questions/4658008",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/215679/"
] | The Apple-supplied Pythons in OS X 10.6 are built and installed using the standard Python *framework* build option, with a few customization tweaks. It is not in Apple's documentation because the specific layout is not an Apple invention; it has evolved over the years by the Python project using other OS X framework layouts as a starting point. If you install a version of Python on OS X using one of the python.org installers, say from [here](http://www.python.org/download/releases/2.6.6/), you will see the same pattern, with the framework rooted at `/Library/Frameworks/` rather than `/System/Library/Frameworks`. So, if you are really curious, you can download the source and look at the `configure` script and `Makefile` templates. It can be heavy reading, though. Apple also makes available [here](http://www.opensource.apple.com/) the source used to build open source components, including Python, in each OS X release along with the customization patches so, in theory, you can see exactly how Apple built what it released.
That said, to address your questions, in Python 2.6:
`$A` is the pythonw wrapper that ensures Python is recognized as a GUI application by OS X (see the source of `pythonw.c` [here](http://svn.python.org/view/python/branches/release26-maint/Mac/Tools/pythonw.c?view=markup)). Note, the Apple version of pythonw has been customized to add the preferred execution modes (see Apple's `man 1 python`). A somewhat different approach to this is provided in the upstream source of newer versions of Python (2.7 and 3.2).
`$B` is the actual executable of the Python interpreter. It is what is `exec`ed by the pythonw executable, `$A`. You should be able to easily verify that by actually running Python and looking at the value of `sys.executable` but there is a bug with the Apple-supplied Python 2.6 (probably due to the added feature mentioned above) that causes the wrong value to be assigned to it. The python.org Python 2.6.6 shows the correct value:
```
$ cd /Library/Frameworks/Python.framework/Versions/2.6
$ ./bin/python2.6 -c 'import sys;print(sys.executable)'
/Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/MacOS/Python`
```
`$C` is the shared library containing all of the interpreter's loadable modules. You can see that by using `otool` on `$B`:
```
$ cd /System/Library/Frameworks/Python.framework/Versions/2.6
$ cd Resources/Python.app/Contents/MacOS/
$ otool -L ./Python
Python:
/System/Library/Frameworks/Python.framework/Versions/2.6/Python (compatibility version 2.6.0, current version 2.6.1)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0)
``` | The tools to use are ls and file.
`ls -l` will show what the symbolic link points to. The size of a symbolic link is the number of characters in the path it points to.
`file x` will give the type of the file
e.g.
```
file /System/Library/Frameworks/Python.framework/Versions/2.6/Python
/System/Library/Frameworks/Python.framework/Versions/2.6/Python: Mach-O universal binary with 3 architectures
/System/Library/Frameworks/Python.framework/Versions/2.6/Python (for architecture x86_64): Mach-O 64-bit dynamically linked shared library x86_64
/System/Library/Frameworks/Python.framework/Versions/2.6/Python (for architecture i386): Mach-O dynamically linked shared library i386
/System/Library/Frameworks/Python.framework/Versions/2.6/Python (for architecture ppc7400): Mach-O dynamically linked shared library ppc
```
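The claim that a symlink's size equals the number of characters in its target path is easy to verify from Python on a POSIX system (the temporary file names below are made up on the fly):

```python
import os
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "real.txt")
open(target, "w").close()
link = os.path.join(d, "link.txt")
os.symlink(target, link)  # POSIX only; needs extra privileges on Windows

# lstat examines the link itself, not what it points to; the reported
# size is the byte length of the stored target path.
assert os.lstat(link).st_size == len(target)
print(os.path.realpath(link))  # resolves the chain, like `ls -l` shows
```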
OSX Frameworks are described in [Apple developer docs](http://developer.apple.com/library/mac/#documentation/MacOSX/Conceptual/BPFrameworks/Frameworks.html%23//apple_ref/doc/uid/10000183i)
/Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/MacOS/Python and
/System/Library/Frameworks/Python.framework/Versions/2.6/bin/python2.6 are the actual python interpreter, I think they are hard links to the same binary.
/usr/bin/python is the python on the path - I think it is hard linked to /usr/bin/pythonw. These are wrappers that call exec to the real python interpreter in /System/Library/Frameworks/Python.framework/Versions/2.6/bin/python2.6 see [python bug tracker](http://bugs.python.org/issue6834)
/System/Library/Frameworks/Python.framework/Versions/Current is a symlink to System/Library/Frameworks/Python.framework/Versions/2.6 using the standard OSX Framework versioning
/System/Library/Frameworks/Python.framework/Versions/2.6/Python is the shared library that does all the work - set up as a library so that you can write programs in other languages that can embed a python interpreter.
For other details look at [Python docs](http://docs.python.org/using/mac.html) but I suspect you would have to search the [apple python mailing list](http://www.python.org/community/sigs/current/pythonmac-sig/) | 202 |
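Alongside these shell tools, Python itself can report which interpreter binary and framework prefix a given session is using (the printed values depend on the install, so none are assumed here):

```python
import sys

# sys.executable is the interpreter binary that was actually exec'd,
# and sys.prefix points into the framework's Versions/X.Y directory
# for a framework build.
print(sys.executable)
print(sys.prefix)
```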
49,577,050 | I am trying to interact with a database stored in back4app using python. After sending my GET request, I get "{'message': 'Not Found', 'error': {}}". My python code is as follows:
```
import json, http.client, urllib.parse
# create a connection to the server
url = "parseapi.back4app.com"
connection = http.client.HTTPSConnection(url)
connection.connect()
# define parameter for GET request
params = urllib.parse.urlencode({"where":json.dumps({"Name": "Dru Love"})})
# perform GET request
connection.request('GET', '/parse/classes/PGA?%s' % params, '', {
"X-Parse-Application-Id": "app_id",
"X-Parse-REST-API-Key": "api_key"
})
# store response in result variable
result = json.loads(connection.getresponse().read())
connection.close()
print(result)
```
Response:
```
{'message': 'Not Found', 'error': {}}
``` | 2018/03/30 | [
"https://Stackoverflow.com/questions/49577050",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5983936/"
] | Your CPP file doesn't include the .h file and it doesn't have `extern "C"` declarations of its own. The methods are therefore compiled with C++-mangled signatures, so they cannot be found by the JVM, which expects the `extern "C"` signatures from the .h file.
The easy fix is to include the .h file. | Solution!!!! I fixed it by doing some research, and through trial and error I figured out that my imports were messing up the DLL.
Cpp file:
```
/* Replace "dll.h" with the name of your header */
#include "IGNORE.h"
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
JNIEXPORT jint JNICALL Java_NativeRandom_next__I
(JNIEnv *env, jclass clazz, jint i){
srand(time(NULL));
int n = (rand()%i)+1;
return n;
}
JNIEXPORT jint JNICALL Java_NativeRandom_next__II
(JNIEnv *env, jclass clazz, jint seed, jint i){
srand(seed);
int n =(rand()%i)+1;
return n;
}
```
Header file:
```
/* DO NOT EDIT THIS FILE - it is machine generated */
#include <jni.h>
/* Header for class NativeRandom */
#ifndef _Included_NativeRandom
#define _Included_NativeRandom
#ifdef __cplusplus
extern "C" {
#endif
/*
* Class: NativeRandom
* Method: next
* Signature: (I)I
*/
JNIEXPORT jint JNICALL Java_NativeRandom_next__I
(JNIEnv *, jclass, jint);
/*
* Class: NativeRandom
* Method: next
* Signature: (II)I
*/
JNIEXPORT jint JNICALL Java_NativeRandom_next__II
(JNIEnv *, jclass, jint, jint);
#ifdef __cplusplus
}
#endif
#endif
``` | 203 |
65,573,140 | I was making a virtual assistant in python, but I see the following error.
```
ImportError: No system module 'pywintypes' (pywintypes39.dll)
```
I am using Windows 10 and Python 3.9
Here is the code
```
import speech_recognition as sr
import pyttsx3

listner = sr.Recognizer()
engine = pyttsx3.init()
engine.say('Hello Vishal. I am Cisco')
engine.say('What do you want me to do?')
engine.runAndWait()

try:
    with sr.Microphone() as source:
        print('listening...')
        voice = listner.listen(source)
        command = listner.recognize_google(voice)
        command = command.lower()
        if "cisco" in command:
            print(command)
except:
    print('Something went wrong')
```
Also, when I run this program the console prints this:
```
Traceback (most recent call last):
File "C:\Users\visha\AppData\Roaming\Python\Python39\site-packages\pyttsx3\__init__.py", line 20, in init
eng = _activeEngines[driverName]
File "C:\Program Files (x86)\Python\lib\weakref.py", line 134, in __getitem__
o = self.data[key]()
KeyError: None
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Users\visha\Documents\Python\Basic.py", line 4, in <module>
engine=pyttsx3.init()
File "C:\Users\visha\AppData\Roaming\Python\Python39\site-packages\pyttsx3\__init__.py", line 22, in init
eng = Engine(driverName, debug)
File "C:\Users\visha\AppData\Roaming\Python\Python39\site-packages\pyttsx3\engine.py", line 30, in __init__
self.proxy = driver.DriverProxy(weakref.proxy(self), driverName, debug)
File "C:\Users\visha\AppData\Roaming\Python\Python39\site-packages\pyttsx3\driver.py", line 50, in __init__
self._module = importlib.import_module(name)
File "C:\Program Files (x86)\Python\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 790, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "C:\Users\visha\AppData\Roaming\Python\Python39\site-packages\pyttsx3\drivers\sapi5.py", line 10, in <module>
import pythoncom
File "C:\Users\visha\AppData\Roaming\Python\Python39\site-packages\pythoncom.py", line 2, in <module>
import pywintypes
File "C:\Users\visha\AppData\Roaming\Python\Python39\site-packages\win32\lib\pywintypes.py", line 105, in <module>
__import_pywin32_system_module__("pywintypes", globals())
File "C:\Users\visha\AppData\Roaming\Python\Python39\site-packages\win32\lib\pywintypes.py", line 87, in __import_pywin32_system_module__
raise ImportError("No system module '%s' (%s)" % (modname, filename))
ImportError: No system module 'pywintypes' (pywintypes39.dll)
PS C:\Users\visha\Documents\Python>
```
I am a beginner, so I don't have much of an idea.
Thanks in advance for your help | 2021/01/05 | [
"https://Stackoverflow.com/questions/65573140",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14856292/"
] | Try importing win32api at the top,
```
import win32api
import speech_recognition as sr
import pyttsx3
``` | I did the following when I experienced the same problem:
When we look at these lines in the error,
```
File "C:\Users\visha\AppData\Roaming\Python\Python39\site-packages\win32\lib\pywintypes.py", line 87, in __import_pywin32_system_module__
raise ImportError("No system module '%s' (%s)" % (modname, filename))
ImportError: No system module 'pywintypes' (pywintypes39.dll)
```
`pywintypes.py` is searching for the `pywintypes39.dll` file in the `C:\Users\visha\AppData\Roaming\Python\Python39\site-packages\win32\lib` directory, so I copied the two files 'pythoncom39.dll' and 'pywintypes39.dll' from the 'pywin32\_system32' folder to the `C:\Users\visha\AppData\Roaming\Python\Python39\site-packages\win32\lib` directory. That solved the problem for me. |
73,678,506 | I want to get and parse the python (python2) version. This way (which works):
```
python2 -V 2>&1 | sed 's/.* \([0-9]\).\([0-9]\).*/\1\2/'
```
For some reason, python2 prints its version (with the -V argument) on its error output, because this does nothing:
```
python2 -V | sed 's/.* \([0-9]\).\([0-9]\).*/\1\2/'
```
So it needs to be redirected with `2>&1` (stderr to stdout) to get parsed. OK, but I'd like to avoid the error being shown if the user launching this command has no python2 installed. The desired output on screen for a user who does not have python2 installed is nothing. How can I do that, given that I need the error output in order to parse the version?
I already have a working workaround: a conditional `if` statement beforehand using the `hash` command to know whether the python2 command is present, which avoids launching python2 at all when it is missing... but just out of curiosity, forget about python2. Suppose it is any other command that redirects stderr to stdout. Is there a bash trick to parse its output without showing it if there is an error?
Any idea? | 2022/09/11 | [
"https://Stackoverflow.com/questions/73678506",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5300329/"
] | Print output only if the line starts with `Python 2`:
```
python2 -V 2>&1 | sed -n 's/^Python 2\.\([0-9]*\).*/2\1/p'
```
or,
```
command -v python2 >/dev/null && python2 -V 2>&1 | sed ...
``` | Include the next line in your script
```
command -v python2 >/dev/null 2>&1 || { echo "python2 not installed or in PATH"; exit 1; }
```
EDITED: Changed `which` into `command` | 214 |
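For reference, the capture performed by the `sed` answer above can be mirrored in Python; the sample version string here is made up for illustration:

```python
import re

sample = "Python 2.7.18"                     # what `python2 -V` would print
m = re.search(r"^Python 2\.(\d+)", sample)   # only match a 2.x line
version = "2" + m.group(1)                   # mirrors the sed replacement
assert version == "27"
print(version)
```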
28,434,920 | I've been following the DjangoGirls tutorial here <http://tutorial.djangogirls.org/en/deploy/README.html> on deploying a django app on Heroku. I am a complete newbie at this so a lot of the stuff just seems like black magic to me, and I have a very fuzzy idea of what is going on. However, I seem to have been able to get everything going smoothly from creating my app to pushing it onto a remote repository and running the web process.
```
>heroku create
>git push heroku master
>heroku ps:scale web=1
```
When I open the url of the app after this, I get a 'requested url not found on this server' error page, which the tutorial says is expected since I have not filled up the empty database. So it says to run
```
>heroku run python manage.py migrate
>heroku run python manage.py createsuperuser
```
When I ran them, both commands seemed to execute fine. I tried running `manage.py migrate` again just to be sure, but it simply said that there were no more migrations to apply. I can log in fine to the admin page, but trying to open the app url itself still gives me a 'requested url was not found on this server' error page, even after I applied migrations.
Like I said, I'm a real newbie, so I'm at a loss as to how I should troubleshoot this. I've been following the tutorial step-by-step and have no clue where I've gone wrong. Help is much appreciated.
**EDIT:**
Here is the output from `heroku info -s`:
```
addons=heroku-postgresql:hobby-dev
archived_at=
buildpack_provided_description=Python
create_status=complete
created_at=2015/02/10 05:02:00 -0800
domain_name=aqiblog.herokuapp.com
dynos=1
git_url=https://git.heroku.com/aqiblog.git
id=33873439
name=aqiblog
owner_delinquent=false
owner_email=***
owner_name=***
region=us
released_at=2015/02/10 05:33:50 -0800
repo_migrate_status=complete
repo_size=9458
requested_stack=
slug_size=53579300
stack=cedar-14
updated_at=2015/02/10 06:04:30 -0800
web_url=https://aqiblog.herokuapp.com/
workers=0
```
Here is the output from `heroku logs`. To my untrained eye nothing seems out of the ordinary:
```
2015-02-10T14:04:49.315067+00:00 heroku[router]: at=info method=GET path="/" hos
t=aqiblog.herokuapp.com request_id=15e74a24-32e9-4a74-ab2f-696c63853b72 fwd="183
.90.125.206" dyno=web.1 connect=1ms service=4ms status=404 bytes=267
2015-02-10T14:06:59.516339+00:00 heroku[router]: at=info method=GET path="/" hos
t=aqiblog.herokuapp.com request_id=12b8b705-4820-46c4-9195-a04740ad138a fwd="183
.90.125.206" dyno=web.1 connect=6ms service=7ms status=404 bytes=267
2015-02-10T14:20:16.661117+00:00 heroku[router]: at=info method=GET path="/" hos
t=aqiblog.herokuapp.com request_id=599be313-368a-4f66-900f-a459ef42b9ff fwd="183
.90.125.206" dyno=web.1 connect=1ms service=5ms status=404 bytes=267
2015-02-10T14:30:13.477370+00:00 heroku[router]: at=info method=GET path="/admin
/" host=aqiblog.herokuapp.com request_id=91f04bfd-e7b9-41a4-b905-4b7d26861cc5 fw
d="183.90.125.206" dyno=web.1 connect=2ms service=43ms status=302 bytes=391
2015-02-10T14:30:13.800060+00:00 heroku[router]: at=info method=GET path="/admin
/login/?next=/admin/" host=aqiblog.herokuapp.com request_id=301b7aee-0f48-432f-a
9a9-6696b14fe6be fwd="183.90.125.206" dyno=web.1 connect=1ms service=55ms status
=200 bytes=2368
2015-02-10T14:30:14.193976+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/css/base.css" host=aqiblog.herokuapp.com request_id=d04f60ed-eef6-4678-a
3f0-e133b55df175 fwd="183.90.125.206" dyno=web.1 connect=1ms service=3ms status=
304 bytes=136
2015-02-10T14:30:14.463950+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/css/login.css" host=aqiblog.herokuapp.com request_id=35ba7b7f-8ca2-40e5-
b274-5a33138a7478 fwd="183.90.125.206" dyno=web.1 connect=3ms service=6ms status
=304 bytes=136
2015-02-10T14:30:14.845040+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/img/nav-bg.gif" host=aqiblog.herokuapp.com request_id=6b9ba7e5-b520-46d2
-ae47-776340e6eac4 fwd="183.90.125.206" dyno=web.1 connect=3ms service=4ms statu
s=304 bytes=136
2015-02-10T14:30:31.563636+00:00 heroku[router]: at=info method=POST path="/admi
n/login/?next=/admin/" host=aqiblog.herokuapp.com request_id=8cb9b343-f93d-4f14-
9803-592614b17b2f fwd="183.90.125.206" dyno=web.1 connect=2ms service=132ms stat
us=302 bytes=625
2015-02-10T14:30:31.910829+00:00 heroku[router]: at=info method=GET path="/admin
/" host=aqiblog.herokuapp.com request_id=12df9c3c-4b60-4446-9094-ef567912c19f fw
d="183.90.125.206" dyno=web.1 connect=1ms service=68ms status=200 bytes=3837
2015-02-10T14:30:32.260761+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/css/dashboard.css" host=aqiblog.herokuapp.com request_id=194ab120-f85f-4
78f-bd05-2bd5b424177d fwd="183.90.125.206" dyno=web.1 connect=2ms service=4ms st
atus=304 bytes=136
2015-02-10T14:30:32.558809+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/img/icon_addlink.gif" host=aqiblog.herokuapp.com request_id=b1867188-e93
5-42eb-a12e-3342d6b09fd2 fwd="183.90.125.206" dyno=web.1 connect=2ms service=3ms
status=304 bytes=136
2015-02-10T14:30:32.556672+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/img/default-bg.gif" host=aqiblog.herokuapp.com request_id=c9cd25cf-5bd9-
49ca-acb8-78c14d1cdffd fwd="183.90.125.206" dyno=web.1 connect=2ms service=4ms s
tatus=304 bytes=136
2015-02-10T14:30:32.958179+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/img/icon_changelink.gif" host=aqiblog.herokuapp.com request_id=46672886-
026d-4e44-bbda-0a46c738688f fwd="183.90.125.206" dyno=web.1 connect=3ms service=
4ms status=304 bytes=136
2015-02-10T14:30:37.016120+00:00 heroku[router]: at=info method=GET path="/admin
/blog/post/" host=aqiblog.herokuapp.com request_id=024f5a16-2afa-46ae-901f-d9c95
d7ffac8 fwd="183.90.125.206" dyno=web.1 connect=2ms service=89ms status=200 byte
s=3466
2015-02-10T14:30:37.392657+00:00 heroku[router]: at=info method=GET path="/admin
/jsi18n/" host=aqiblog.herokuapp.com request_id=175746a8-6269-499c-9539-e94442df
3740 fwd="183.90.125.206" dyno=web.1 connect=3ms service=45ms status=200 bytes=2
551
2015-02-10T14:30:37.353918+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/css/changelists.css" host=aqiblog.herokuapp.com request_id=4b1968b8-5977
-49d2-92c7-7f8e6f83bcce fwd="183.90.125.206" dyno=web.1 connect=2ms service=5ms
status=200 bytes=5523
2015-02-10T14:30:37.353954+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/js/core.js" host=aqiblog.herokuapp.com request_id=2765ef40-cf94-4e80-af2
3-df2bd41c29fd fwd="183.90.125.206" dyno=web.1 connect=1ms service=3ms status=20
0 bytes=7182
2015-02-10T14:30:37.664695+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/js/admin/RelatedObjectLookups.js" host=aqiblog.herokuapp.com request_id=
ab90b0a5-c743-4a36-8cf8-a45bcdb26c10 fwd="183.90.125.206" dyno=web.1 connect=2ms
service=3ms status=200 bytes=3515
2015-02-10T14:30:37.676047+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/js/jquery.min.js" host=aqiblog.herokuapp.com request_id=d793e5e6-f05c-41
7a-bc14-c726b217a873 fwd="183.90.125.206" dyno=web.1 connect=2ms service=11ms st
atus=200 bytes=92913
2015-02-10T14:30:37.672663+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/js/jquery.init.js" host=aqiblog.herokuapp.com request_id=7494223b-6a43-4
861-93c6-8a902e8f0b1f fwd="183.90.125.206" dyno=web.1 connect=3ms service=3ms st
atus=200 bytes=608
2015-02-10T14:30:37.922666+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/js/actions.min.js" host=aqiblog.herokuapp.com request_id=4481557e-e808-4
5c7-9948-2856409b9f9c fwd="183.90.125.206" dyno=web.1 connect=3ms service=4ms st
atus=200 bytes=3320
2015-02-10T14:30:39.773435+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/img/tooltag-add.png" host=aqiblog.herokuapp.com request_id=d1e06eca-e54e
-44a4-b4c6-b834751c52b8 fwd="183.90.125.206" dyno=web.1 connect=3ms service=3ms
status=200 bytes=371
2015-02-10T14:30:39.795367+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/img/nav-bg-reverse.gif" host=aqiblog.herokuapp.com request_id=7df78871-e
7a8-4a47-9260-7b1f715b2293 fwd="183.90.125.206" dyno=web.1 connect=2ms service=2
ms status=304 bytes=136
2015-02-10T14:30:43.715783+00:00 heroku[router]: at=info method=GET path="/admin
/blog/" host=aqiblog.herokuapp.com request_id=c8c26fb6-7c06-424a-96a3-ed7d7acf52
15 fwd="183.90.125.206" dyno=web.1 connect=2ms service=62ms status=200 bytes=260
6
2015-02-10T14:30:44.051139+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/css/base.css" host=aqiblog.herokuapp.com request_id=a9525efc-ca5b-40a2-b
79c-0d1c7ea8f323 fwd="183.90.125.206" dyno=web.1 connect=3ms service=2ms status=
304 bytes=136
2015-02-10T14:30:46.623963+00:00 heroku[router]: at=info method=GET path="/admin
/" host=aqiblog.herokuapp.com request_id=6e2f037e-1156-4ecb-ab9a-c629746701bc fw
d="183.90.125.206" dyno=web.1 connect=3ms service=67ms status=200 bytes=3837
2015-02-10T14:33:37.383948+00:00 heroku[router]: at=info method=GET path="/admin
/auth/user/" host=aqiblog.herokuapp.com request_id=2e7aca9b-bf1a-4bca-a64e-4958c
50d2bc3 fwd="183.90.125.206" dyno=web.1 connect=1ms service=107ms status=200 byt
es=6954
2015-02-10T14:33:37.795450+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/css/base.css" host=aqiblog.herokuapp.com request_id=427f5971-4d88-4876-b
d34-3d61219d0ee0 fwd="183.90.125.206" dyno=web.1 connect=1ms service=3ms status=
304 bytes=136
2015-02-10T14:33:38.115038+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/css/changelists.css" host=aqiblog.herokuapp.com request_id=1011a5b1-f3e3
-4ccd-bb79-decbb9f04bab fwd="183.90.125.206" dyno=web.1 connect=0ms service=1ms
status=304 bytes=136
2015-02-10T14:33:38.144837+00:00 heroku[router]: at=info method=GET path="/admin
/jsi18n/" host=aqiblog.herokuapp.com request_id=713139f7-188f-4dbb-8c06-f187256f
7f69 fwd="183.90.125.206" dyno=web.1 connect=1ms service=45ms status=200 bytes=2
551
2015-02-10T14:33:38.404063+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/js/jquery.init.js" host=aqiblog.herokuapp.com request_id=5135b182-dc2e-4
a74-b8a2-5d6c0ab12db4 fwd="183.90.125.206" dyno=web.1 connect=0ms service=1ms st
atus=304 bytes=136
2015-02-10T14:33:38.400444+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/js/core.js" host=aqiblog.herokuapp.com request_id=de86c5af-1a29-430f-a65
1-dc6cf1fd48bf fwd="183.90.125.206" dyno=web.1 connect=0ms service=1ms status=30
4 bytes=136
2015-02-10T14:33:38.415586+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/js/jquery.min.js" host=aqiblog.herokuapp.com request_id=19c89d3d-5d23-41
09-979e-b8bb3f42199a fwd="183.90.125.206" dyno=web.1 connect=4ms service=2ms sta
tus=304 bytes=136
2015-02-10T14:33:38.408607+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/img/icon_searchbox.png" host=aqiblog.herokuapp.com request_id=c5a70735-1
712-4040-9eb5-6b7dee13abd2 fwd="183.90.125.206" dyno=web.1 connect=0ms service=1
ms status=200 bytes=620
2015-02-10T14:33:38.406036+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/js/admin/RelatedObjectLookups.js" host=aqiblog.herokuapp.com request_id=
893d6f5c-af8b-4532-8aa3-6934250950c4 fwd="183.90.125.206" dyno=web.1 connect=0ms
service=2ms status=304 bytes=136
2015-02-10T14:33:38.412860+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/js/actions.min.js" host=aqiblog.herokuapp.com request_id=2dcac6b1-bb61-4
ccd-b2ec-25ecacf72228 fwd="183.90.125.206" dyno=web.1 connect=1ms service=4ms st
atus=304 bytes=136
2015-02-10T14:33:38.772975+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/img/changelist-bg.gif" host=aqiblog.herokuapp.com request_id=4e6c781c-37
35-43f8-9e00-6b553737b3a6 fwd="183.90.125.206" dyno=web.1 connect=0ms service=1m
s status=200 bytes=301
2015-02-10T14:33:38.674202+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/img/icon-yes.gif" host=aqiblog.herokuapp.com request_id=e9851dfa-c05f-49
09-a4c4-367907fb285c fwd="183.90.125.206" dyno=web.1 connect=0ms service=1ms sta
tus=200 bytes=551
2015-02-10T14:33:38.777309+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/img/nav-bg.gif" host=aqiblog.herokuapp.com request_id=e1f74192-01f4-4036
-9a9d-79526979186a fwd="183.90.125.206" dyno=web.1 connect=0ms service=2ms statu
s=304 bytes=136
2015-02-10T14:33:38.790954+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/img/nav-bg-selected.gif" host=aqiblog.herokuapp.com request_id=1fa8c87c-
67f7-473d-b783-bdff4120a7ce fwd="183.90.125.206" dyno=web.1 connect=2ms service=
2ms status=200 bytes=517
2015-02-10T14:33:38.784201+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/img/sorting-icons.gif" host=aqiblog.herokuapp.com request_id=c712ebb1-65
e6-45ab-96b5-7a993386ce77 fwd="183.90.125.206" dyno=web.1 connect=0ms service=2m
s status=200 bytes=621
2015-02-10T14:33:43.504537+00:00 heroku[router]: at=info method=GET path="/admin
/" host=aqiblog.herokuapp.com request_id=4de8523a-32a3-4038-baca-1da937d8bcc2 fw
d="183.90.125.206" dyno=web.1 connect=1ms service=63ms status=200 bytes=3837
2015-02-10T14:33:43.831076+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/css/dashboard.css" host=aqiblog.herokuapp.com request_id=e0bb91fd-3202-4
003-adb3-c56c968cace9 fwd="183.90.125.206" dyno=web.1 connect=0ms service=1ms st
atus=304 bytes=136
2015-02-10T14:33:47.175588+00:00 heroku[router]: at=info method=GET path="/admin
/logout/" host=aqiblog.herokuapp.com request_id=9262fa61-6ca5-471d-b640-267022e8
51a7 fwd="183.90.125.206" dyno=web.1 connect=1ms service=76ms status=200 bytes=1
695
2015-02-10T14:33:54.172429+00:00 heroku[router]: at=info method=GET path="/" hos
t=aqiblog.herokuapp.com request_id=8cf14925-254e-464e-9989-97663a447176 fwd="183
.90.125.206" dyno=web.1 connect=0ms service=5ms status=404 bytes=267
2015-02-10T14:57:48.546469+00:00 heroku[api]: Starting process with command `pyt
hon manage.py migrate` by aquietimmanence@gmail.com
2015-02-10T14:57:54.603274+00:00 heroku[run.5781]: Awaiting client
2015-02-10T14:57:54.648369+00:00 heroku[run.5781]: Starting process with command
`python manage.py migrate`
2015-02-10T14:57:54.901926+00:00 heroku[run.5781]: State changed from starting t
o up
2015-02-10T14:57:57.659853+00:00 heroku[run.5781]: State changed from up to comp
lete
2015-02-10T14:57:57.650205+00:00 heroku[run.5781]: Process exited with status 0
2015-02-10T15:35:07.305786+00:00 heroku[web.1]: Stopping all processes with SIGT
ERM
2015-02-10T15:35:08.786505+00:00 heroku[web.1]: Process exited with status 143
2015-02-10T15:35:05.251964+00:00 heroku[web.1]: Idling
2015-02-10T15:35:05.253030+00:00 heroku[web.1]: State changed from up to down
2015-02-10T15:48:14.796557+00:00 heroku[web.1]: Unidling
2015-02-10T15:48:20.976179+00:00 app[web.1]: serving on http://0.0.0.0:22300
2015-02-10T15:48:14.800364+00:00 heroku[web.1]: State changed from down to start
ing
2015-02-10T15:48:19.187658+00:00 heroku[web.1]: Starting process with command `w
aitress-serve --port=22300 mysite.wsgi:application`
2015-02-10T15:48:21.486158+00:00 heroku[web.1]: State changed from starting to u
p
2015-02-10T15:48:31.912164+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/css/base.css" host=aqiblog.herokuapp.com request_id=8db52772-ce7d-4c5d-a
b49-07cbe01ebaf7 fwd="183.90.125.206" dyno=web.1 connect=1ms service=2ms status=
200 bytes=14265
2015-02-10T15:48:22.898514+00:00 heroku[router]: at=info method=GET path="/" hos
t=aqiblog.herokuapp.com request_id=df78cf63-b28a-4986-947b-82b7229b6c22 fwd="183
.90.125.206" dyno=web.1 connect=1ms service=22ms status=404 bytes=267
2015-02-10T15:48:32.904690+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/css/login.css" host=aqiblog.herokuapp.com request_id=f01a7d1e-d3da-4ce2-
a596-49c502c148a9 fwd="183.90.125.206" dyno=web.1 connect=6ms service=6ms status
=200 bytes=1208
2015-02-10T15:48:31.526354+00:00 heroku[router]: at=info method=GET path="/admin
/login/?next=/admin/" host=aqiblog.herokuapp.com request_id=28cd2f09-49fe-4327-a
b6c-4fadd62b32db fwd="183.90.125.206" dyno=web.1 connect=2ms service=32ms status
=200 bytes=2368
2015-02-10T15:48:33.516041+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/img/nav-bg.gif" host=aqiblog.herokuapp.com request_id=ad4a834a-814a-4a95
-926f-9356f0ee8939 fwd="183.90.125.206" dyno=web.1 connect=1ms service=3ms statu
s=200 bytes=517
2015-02-10T15:48:31.127574+00:00 heroku[router]: at=info method=GET path="/admin
/" host=aqiblog.herokuapp.com request_id=1a5c7e5a-e4b7-4a9d-a141-df959928fe81 fw
d="183.90.125.206" dyno=web.1 connect=1ms service=16ms status=302 bytes=390
2015-02-10T15:49:01.840359+00:00 heroku[router]: at=info method=POST path="/admi
n/login/?next=/admin/" host=aqiblog.herokuapp.com request_id=4287d109-c80e-4e43-
acc1-b047f63af3bc fwd="183.90.125.206" dyno=web.1 connect=2ms service=140ms stat
us=200 bytes=2540
2015-02-10T15:49:02.187422+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/img/icon_error.gif" host=aqiblog.herokuapp.com request_id=feb59db0-4c11-
495b-80f2-a5923fc73c6f fwd="183.90.125.206" dyno=web.1 connect=2ms service=3ms s
tatus=200 bytes=571
2015-02-10T15:49:21.789377+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/img/nav-bg-reverse.gif" host=aqiblog.herokuapp.com request_id=36deb932-f
4b7-433b-99dc-5926493a7c89 fwd="183.90.125.206" dyno=web.1 connect=2ms service=3
ms status=200 bytes=430
2015-02-10T15:49:22.319427+00:00 heroku[router]: at=info method=POST path="/admi
n/login/?next=/admin/" host=aqiblog.herokuapp.com request_id=5fdd060f-1513-4816-
a6eb-203c999af731 fwd="183.90.125.206" dyno=web.1 connect=3ms service=134ms stat
us=200 bytes=2540
2015-02-10T15:49:56.826400+00:00 heroku[router]: at=info method=POST path="/admi
n/login/?next=/admin/" host=aqiblog.herokuapp.com request_id=f3d1e006-6e9c-436b-
bf1a-c4d4f05532d4 fwd="183.90.125.206" dyno=web.1 connect=10ms service=187ms sta
tus=200 bytes=2540
2015-02-10T15:49:57.955395+00:00 heroku[router]: at=info method=GET path="/static/admi
n/css/login.css" host=aqiblog.herokuapp.com request_id=a918b235-0698-4ce4-80d0-a
96d8dfc0610 fwd="183.90.125.206" dyno=web.1 connect=1ms service=6ms status=304 b
ytes=136
2015-02-10T15:49:59.906654+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/css/base.css" host=aqiblog.herokuapp.com request_id=15773995-4c99-459a-9
029-ee456371f894 fwd="183.90.125.206" dyno=web.1 connect=0ms service=1ms status=
304 bytes=136
2015-02-10T15:50:01.261517+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/img/icon_error.gif" host=aqiblog.herokuapp.com request_id=3b313279-e294-
42c0-8aa5-f41debcb637e fwd="183.90.125.206" dyno=web.1 connect=4ms service=5ms s
tatus=304 bytes=136
2015-02-10T15:50:05.242912+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/img/nav-bg.gif" host=aqiblog.herokuapp.com request_id=6a0162e3-ec3b-445c
-b8df-82d5ac6febb2 fwd="183.90.125.206" dyno=web.1 connect=1ms service=2ms statu
s=304 bytes=136
2015-02-10T15:57:25.175840+00:00 heroku[router]: at=info method=GET path="/" hos
t=aqiblog.herokuapp.com request_id=0d15d2cb-81e6-4482-bb62-b6182a994ad7 fwd="183
.90.125.206" dyno=web.1 connect=1ms service=5ms status=404 bytes=267
2015-02-10T15:59:35.740276+00:00 heroku[router]: at=info method=GET path="/" hos
t=aqiblog.herokuapp.com request_id=7b4581c2-94d7-4835-a46c-632690169036 fwd="180
.255.248.125" dyno=web.1 connect=1ms service=5ms status=404 bytes=267
2015-02-10T15:59:36.759656+00:00 heroku[router]: at=info method=GET path="/favic
on.ico" host=aqiblog.herokuapp.com request_id=2f3cc9e1-c086-4725-b9ab-1ec6973c41
16 fwd="180.255.248.125" dyno=web.1 connect=1ms service=5ms status=404 bytes=278
2015-02-10T17:00:33.167613+00:00 heroku[web.1]: Idling
2015-02-10T17:00:33.168670+00:00 heroku[web.1]: State changed from up to down
2015-02-10T17:00:36.185466+00:00 heroku[web.1]: Stopping all processes with SIGT
ERM
2015-02-10T17:00:37.952348+00:00 heroku[web.1]: Process exited with status 143
2015-02-10T17:34:28.848841+00:00 heroku[web.1]: Unidling
2015-02-10T17:34:28.849165+00:00 heroku[web.1]: State changed from down to start
ing
2015-02-10T17:34:36.106173+00:00 heroku[web.1]: Starting process with command `w
aitress-serve --port=31126 mysite.wsgi:application`
2015-02-10T17:34:38.613422+00:00 app[web.1]: serving on http://0.0.0.0:31126
2015-02-10T17:34:39.197559+00:00 heroku[web.1]: State changed from starting to u
p
2015-02-10T17:34:40.974208+00:00 heroku[router]: at=info method=GET path="/admin
" host=aqiblog.herokuapp.com request_id=42a56ba0-c220-4c9c-916f-0eede44bfee9 fwd
="79.199.237.241" dyno=web.1 connect=3ms service=22ms status=301 bytes=257
2015-02-10T17:34:41.125316+00:00 heroku[router]: at=info method=GET path="/admin
/" host=aqiblog.herokuapp.com request_id=3388a9c4-f5ee-40a5-9ac8-a05d7568854e fw
d="79.199.237.241" dyno=web.1 connect=3ms service=20ms status=302 bytes=390
2015-02-10T17:34:41.293835+00:00 heroku[router]: at=info method=GET path="/admin
/login/?next=/admin/" host=aqiblog.herokuapp.com request_id=3ae35f86-8510-4cf3-a
0e5-f2a0ce41248b fwd="79.199.237.241" dyno=web.1 connect=2ms service=43ms status
=200 bytes=2368
2015-02-10T17:34:41.985024+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/css/base.css" host=aqiblog.herokuapp.com request_id=19153867-5c26-41ca-a
8aa-d62e8094a02b fwd="79.199.237.241" dyno=web.1 connect=3ms service=5ms status=
200 bytes=14265
2015-02-10T17:34:42.139780+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/css/login.css" host=aqiblog.herokuapp.com request_id=7b798b7b-7850-43b4-
927b-2343eacc5582 fwd="79.199.237.241" dyno=web.1 connect=5ms service=5ms status
=200 bytes=1208
2015-02-10T17:34:43.024133+00:00 heroku[router]: at=info method=GET path="/stati
c/admin/img/nav-bg.gif" host=aqiblog.herokuapp.com request_id=20fa5217-01eb-4615
-bd4f-b53aaee0421c fwd="79.199.237.241" dyno=web.1 connect=3ms service=5ms statu
s=200 bytes=517
2015-02-10T17:34:44.289787+00:00 heroku[router]: at=info method=GET path="/favic
on.ico" host=aqiblog.herokuapp.com request_id=4bfc9795-d61c-402c-afd5-08bfe0564b
38 fwd="79.199.237.241" dyno=web.1 connect=4ms service=11ms status=404 bytes=278
2015-02-10T17:34:44.446558+00:00 heroku[router]: at=info method=GET path="/favic
on.ico" host=aqiblog.herokuapp.com request_id=131a1b9d-cac0-4410-ae02-0ab562e6b2
2b fwd="79.199.237.241" dyno=web.1 connect=2ms service=7ms status=404 bytes=278
2015-02-10T17:35:08.338710+00:00 heroku[router]: at=info method=GET path="/" hos
t=aqiblog.herokuapp.com request_id=4fd6af83-32f4-4ba2-8117-1e0914ac8882 fwd="79.
199.237.241" dyno=web.1 connect=4ms service=9ms status=404 bytes=267
2015-02-10T18:41:59.962983+00:00 heroku[web.1]: Idling
2015-02-10T18:41:59.963483+00:00 heroku[web.1]: State changed from up to down
2015-02-10T18:42:04.360383+00:00 heroku[web.1]: Stopping all processes with SIGT
ERM
2015-02-10T18:42:06.591219+00:00 heroku[web.1]: Process exited with status 143
```
Also, I'm working on Windows 8, and I'm still new to working with the command line, so if this requires any command-line troubleshooting I'd greatly appreciate it if you could give me Windows commands (though I'd gladly search up the Windows equivalents if you give me Unix commands). | 2015/02/10 | [
"https://Stackoverflow.com/questions/28434920",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | If a method does not return anything, then that method must have some side-effect such as changing a property of the class. Test that side-effect, e.g. test the value of said property. | In the strictest sense, if you were testing only the ReadCities method, your mock shouldn't be testing that engine.ReadFile actually did something (you would have another unit test for that). You should isolate this method by mocking the call to engine.ReadFile (which I think you've done, but I'm not completely familiar with Moq).
In any case, what you'll likely need to do is create a public accessor for \_geoDbCities that you can check in your Assert. | 215 |
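A minimal Python sketch of the same idea, for illustration only — mock the collaborator, then assert on the side effect through a public attribute (all class and method names here are invented, not taken from the question):

```python
from unittest.mock import Mock

class CityRepository:
    """Toy analogue: read_cities stores its result instead of returning it."""
    def __init__(self, engine):
        self._engine = engine
        self.cities = []  # public accessor for the side effect

    def read_cities(self):
        # Side effect: populate self.cities from the engine
        self.cities = self._engine.read_file()

# Mock the collaborator so only read_cities is under test
engine = Mock()
engine.read_file.return_value = ["Oslo", "Lima"]

repo = CityRepository(engine)
repo.read_cities()

# Assert on the side effect, not on a return value
assert repo.cities == ["Oslo", "Lima"]
engine.read_file.assert_called_once()
```

In Moq/C# the shape is the same: set up the mock's return value, call the method under test, then assert on the exposed state.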
73,880,813 | I'm a beginner in the Python language. I want to develop an Android app. I've written some code and a few days ago I wanted to see how my app looks on mobile before continuing.
I've tried all methods to convert .py to .apk but failed. I've tried Google Colab and I've installed a VM... but nothing worked. If I use Google Colab, I do receive an .apk, but when I install it on my phone, it doesn't work... The app opens, but closes immediately.
If I use the VM I receive this error: [error message](https://i.stack.imgur.com/hIYxo.png)
This is a picture of all my components: [components](https://i.stack.imgur.com/uCYJl.png)
For Google Colab I'm using these commands:
!pip install buildozer
!pip install cython==0.29.19
!sudo apt-get install -y
python3-pip
build-essential
git
python3
python3-dev
ffmpeg
libsdl2-dev
libsdl2-image-dev
libsdl2-mixer-dev
libsdl2-ttf-dev
libportmidi-dev
libswscale-dev
libavformat-dev
libavcodec-dev
zlib1g-dev
!sudo apt-get install -y
libgstreamer1.0
gstreamer1.0-plugins-base
gstreamer1.0-plugins-good
!sudo apt-get install build-essential libsqlite3-dev sqlite3 bzip2 libbz2-dev zlib1g-dev libssl-dev openssl libgdbm-dev libgdbm-compat-dev liblzma-dev libreadline-dev libncursesw5-dev libffi-dev uuid-dev libffi6
!sudo apt-get install libffi-dev
!buildozer init
!buildozer -v android debug
!buildozer android clean
This is a picture with my google colab: [google colab & buildozer.spec](https://i.stack.imgur.com/67S5k.png)
I've tried all the tutorials I've found on the internet but nothing worked.
The code works perfectly on PC!
Please, help me! | 2022/09/28 | [
"https://Stackoverflow.com/questions/73880813",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20109886/"
] | Create an interface that describes the data you want to store in the context:
```
interface AuthContextType {
currentUser: IUser;
login: (email: string, password: string) => ......,
signup: (email: string, password: string) => ....,
logout: () => void,
recoverPassword: (email: string) => ....,
update: (data: any) => ....
}
```
Create an object that describes the initial state:
```
const initialState = {
currentUser: null,
login: (email: string, ....) => console.error('No AuthProvider supplied. Wrap this component with a AuthProvider to use this functionality.'),
...
};
```
Then create the context:
```
const AuthContext = createContext<AuthContextType>(initialState);
``` | You can either type `createContext` with `YourInterface | null` as in
```js
const AuthContext = createContext<YourInterface|null>(null);
```
or type cast an empty object as in
```js
const AuthContext = createContext({} as YourInterface)
``` | 218 |
60,715,443 | I've created a pretty standard linked list in Python with a Node class and a LinkedList class. I've also added methods for LinkedList as follows:
1. add(newNode): Adds an element to the linked list
2. addBefore(valueToFind, newNode): Adds a new node before an element with the value specified.
3. printClean: Prints the linked list
I'm trying to use the addBefore method to perform an insertion; however, it will not work if the insertion isn't at the head. I'm not sure why.
```
class Node:
def __init__(self, dataval =None):
self.dataval = dataval
self.nextval = None
class LinkedList:
def __init__(self, headval =None):
self.headval = headval
def add(self, newNode):
# The linked list is empty
if(self.headval is None):
self.headval = newNode
else:
# Add to the end of the linked list
currentNode = self.headval
while currentNode is not None:
# Found the last element
if(currentNode.nextval is None):
currentNode.nextval = newNode
break
else:
currentNode = currentNode.nextval
def addBefore(self, valueToFind, newNode):
currentNode = self.headval
previousNode = None
while currentNode is not None:
# We found the element we will insert before
if (currentNode.dataval == valueToFind):
# Set our new node's next value to the current element
newNode.nextval = currentNode
# If we are inserting at the head position
if (previousNode is None):
self.headval = newNode
else:
# Change previous node's next to our new node
previousNode.nexval = newNode
return 0
# Update loop variables
previousNode = currentNode
currentNode = currentNode.nextval
return -1
def printClean(self):
currentNode = self.headval
while currentNode is not None:
print(currentNode.dataval, end='')
if(currentNode.nextval != None):
print("->", end='')
currentNode = currentNode.nextval
else:
return
testLinkedList = LinkedList()
testLinkedList.add(Node("Monday"))
testLinkedList.add(Node("Wednesday"))
testLinkedList.addBefore("Wednesday", Node("Tuesday"))
testLinkedList.printClean()
```
>
> Monday->Wednesday
>
>
> | 2020/03/17 | [
"https://Stackoverflow.com/questions/60715443",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1607450/"
] | Mapping each block to its text with `map()` and then joining with `\n` would be fine:
```
this.setState({ body: value.blocks.map(x => x.text).join("\n") });
```
```
import React from "react";
import Body from "./Body";
class App extends React.Component {
constructor(props) {
super(props);
this.state = {
body: ""
};
}
changeBodyHandler = value => {
this.setState({ body: value.blocks.map(x => x.text).join("\n") });
};
render() {
console.log(this.state.body);
return (
<div>
<Body
label="Body"
name="body"
value={this.state.body}
onChange={this.changeBodyHandler}
/>
</div>
);
}
}
export default App;
```
---
[![enter image description here](https://i.stack.imgur.com/bFa3H.jpg)](https://i.stack.imgur.com/bFa3H.jpg)
Try it online:
[![Edit awesome-hamilton-l3h7k](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/awesome-hamilton-l3h7k?fontsize=14&hidenavigation=1&theme=dark) | * If you want line breaks like in the editor, add a `<p>` tag while concatenating:
```
changeBodyHandler = value => {
  // value.blocks holds the editor's content blocks
  let data = value.blocks;
  let text = "";
  data.forEach(block => {
    text = text + "<p>" + block.text + "</p>";
  });
this.setState({
body: text
});
};
```
* And if you want to display the data in the same way somewhere, use `dangerouslySetInnerHTML`:
```
<div dangerouslySetInnerHTML={{__html: this.state.body}} />
``` | 219 |
67,168,199 | I'm trying to build an executable from a simple Python script using pyvisa-py, but I'm running into an error when I run the executable generated by PyInstaller.
Here is what my small Python code looks like:
```
import pyvisa as visa
import tkinter as tk
root = tk.Tk()
root.title("SCPI test")
canvas1 = tk.Canvas(root, width=200, height=100, bg='lightsteelblue2', relief='raised')
canvas1.pack()
def test_1():
rm = visa.ResourceManager("@py")
res_list = rm.list_resources()
print('res_list :', res_list)
len_list = len(res_list)
print('len_list :', len_list)
if not len_list:
print("No equipment found.")
try:
inst = rm.open_resource('USB0::0x1AB1::0x0588::DS1K00005888::INSTR')
print(inst.query("*IDN?"))
except ValueError:
print("No device found.")
Launch_prgm = tk.Button(text="Device detect", command=test_1, bg='green', fg='white', font=('helvetica', 12, 'bold'))
canvas1.create_window(100, 50, window=Launch_prgm)
root.mainloop()
```
When I run this code in PyCharm, or by directly running the .py file from a terminal outside PyCharm, it works well. But when I build the executable using PyInstaller I get the following error.
```
Exception in Tkinter callback
Traceback (most recent call last):
File "pyvisa/highlevel.py", line 2833, in get_wrapper_class
File "importlib/__init__.py", line 126, in import_module
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'pyvisa_py'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "pyvisa/highlevel.py", line 2838, in get_wrapper_class
File "importlib/__init__.py", line 126, in import_module
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'pyvisa-py'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "tkinter/__init__.py", line 1705, in __call__
File "test_ea_psu.py", line 17, in test_1
File "pyvisa/highlevel.py", line 3015, in __new__
File "pyvisa/highlevel.py", line 2926, in open_visa_library
File "pyvisa/highlevel.py", line 2849, in get_wrapper_class
ValueError: Wrapper not found: No package named pyvisa_py
```
Apparently I'm not the only one having problems with pyvisa and pyinstaller. Many people on github had this issue as well.
<https://github.com/pyvisa/pyvisa-py/issues/216>
I'm using Python 3.6 on Zorin OS (an Ubuntu-like OS).
When I run `python3 -m visa info` I get the following output:
```
python3 -m visa info
/usr/local/lib/python3.6/dist-packages/visa.py:23: FutureWarning: The visa module provided by PyVISA is being deprecated. You can replace `import visa` by `import pyvisa as visa` to achieve the same effect.
The reason for the deprecation is the possible conflict with the visa package provided by the https://github.com/visa-sdk/visa-python which can result in hard to debug situations.
FutureWarning,
Machine Details:
Platform ID: Linux-5.4.0-67-generic-x86_64-with-Zorin-15-bionic
Processor: x86_64
Python:
Implementation: CPython
Executable: /usr/bin/python3
Version: 3.6.9
Compiler: GCC 8.4.0
Bits: 64bit
Build: Jan 26 2021 15:33:00 (#default)
Unicode: UCS4
PyVISA Version: 1.11.3
Backends:
ivi:
Version: 1.11.3 (bundled with PyVISA)
Binary library: Not found
py:
Version: 0.5.2
ASRL INSTR: Available via PySerial (3.5)
USB INSTR: Available via PyUSB (1.1.1). Backend: libusb1
USB RAW: Available via PyUSB (1.1.1). Backend: libusb1
TCPIP INSTR: Available
TCPIP SOCKET: Available
GPIB INSTR:
Please install linux-gpib (Linux) or gpib-ctypes (Windows, Linux) to use this resource type. Note that installing gpib-ctypes will give you access to a broader range of funcionality.
No module named 'gpib'
```
Also, I installed pyvisa and pyvisa-py using PyCharm's built-in package installer (which works the same as pip3).
If I run `pip3 list` I get:
```
Package Version
------------------------- ----------------------
altgraph 0.17
apturl 0.5.2
asn1crypto 0.24.0
Brlapi 0.6.6
certifi 2018.1.18
chardet 3.0.4
chrome-gnome-shell 0.0.0
colorama 0.4.4
command-not-found 0.3
configparser 5.0.2
crayons 0.4.0
cryptography 2.1.4
cupshelpers 1.0
cycler 0.10.0
dataclasses 0.8
defer 1.0.6
defusedxml 0.7.1
distro-info 0.18ubuntu0.18.04.1
ea-psu-controller 1.1.0
et-xmlfile 1.0.1
httplib2 0.9.2
idna 2.6
importlib-metadata 4.0.0
iso8601 0.1.14
keyring 10.6.0
keyrings.alt 3.0
language-selector 0.1
launchpadlib 1.10.6
lazr.restfulclient 0.13.5
lazr.uri 1.0.3
louis 3.5.0
lxml 4.6.2
m3u8 0.8.0
macaroonbakery 1.1.3
Mako 1.0.7
MarkupSafe 1.0
matplotlib 2.1.1
netifaces 0.10.4
numpy 1.13.3
oauth 1.0.1
olefile 0.45.1
openpyxl 3.0.7
pexpect 4.2.1
Pillow 5.1.0
pip 21.0.1
power 1.4
protobuf 3.0.0
psutil 5.4.2
pycairo 1.16.2
pycrypto 2.6.1
pycups 1.9.73
pygobject 3.26.1
pyinstaller 4.3
pyinstaller-hooks-contrib 2021.1
pymacaroons 0.13.0
PyNaCl 1.1.2
pyparsing 2.2.0
pyRFC3339 1.0
pyserial 3.5
python-apt 1.6.5-ubuntu0.5-zorin1
python-dateutil 2.6.1
python-debian 0.1.32
pytz 2018.3
pyusb 1.1.1
PyVISA 1.11.3
PyVISA-py 0.5.2
pyxdg 0.25
PyYAML 3.12
reportlab 3.4.0
requests 2.18.4
requests-unixsocket 0.1.5
SecretStorage 2.3.1
selenium 3.141.0
setuptools 56.0.0
simplejson 3.13.2
six 1.11.0
system-service 0.3
typing-extensions 3.7.4.3
ubuntu-drivers-common 0.0.0
ufw 0.36
urllib3 1.22
wadllib 1.3.2
webdriver-manager 3.3.0
wheel 0.30.0
xkit 0.0.0
zipp 3.4.1
zope.interface 4.3.2
zorin-appearance 3.0
zorin-connect 1.0
zorin-exec-guard 1.0
```
I have experience using PyInstaller with other Python code I have written in the past, but I'm a beginner with pyvisa. I spent my whole last night trying to figure out what the problem was but I couldn't, so that's why I'm asking here for help. Sorry if the same question has been posted before; I searched everywhere for a solution but didn't find one. | 2021/04/19 | [
"https://Stackoverflow.com/questions/67168199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13849963/"
] | You can always add missing site-package(s) to your list of hidden imports in your `.spec` file. Specifically, for the missing '`pyvisa_py`' module you can write the following `test.spec` file:
```
# -*- mode: python ; coding: utf-8 -*-
block_cipher = None
a = Analysis(['test.py'],
pathex=['/home/user/test/source'],
binaries=[],
datas=[('/path/to/python/site-packages/pyvisa_py','pyvisa_py')],
hiddenimports=['pyvisa_py'],
hookspath=[],
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False)
pyz = PYZ(a.pure, a.zipped_data,
cipher=block_cipher)
exe = EXE(pyz,
a.scripts,
[],
exclude_binaries=True,
name='test',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
console=True )
coll = COLLECT(exe,
a.binaries,
a.zipfiles,
a.datas,
strip=False,
upx=True,
upx_exclude=[],
name='test')
```
Note: you can get your '`/path/to/python/site-packages/`' path using the command `python -m site` | Recently I was searching for a similar issue with another library. What I understood is that in such cases:
1. Make sure that the packages are installed via pip.
2. If it still has a problem, try to copy the entire library folder from *"<python\_env\_path>/lib/site-packages/"* to the *"dist"* folder created by pyinstaller.
3. In rare cases, see if renaming the import to match the folder name helps.
Ref: <https://github.com/pm4py/pm4py-ws/blob/master/WINDOWS_COMPILING.txt>
Note: python\_env\_path is the path to env folder inside the python installation or the actual python installation folder if you don't have a virtual environment. | 220 |
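As a small illustration of the `python -m site` tip, the same paths can be obtained from inside Python (a sketch using only the standard library):

```python
import site
import sysconfig

# Site-packages directories known to this interpreter
print(site.getsitepackages())

# "purelib" is the site-packages directory of the active environment
print(sysconfig.get_paths()["purelib"])
```

Either path can then be pasted into the `datas` entry of the spec file.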
69,583,271 | Here is a toy example of my pandas dataframe:
```
country_market language_market
0 United States English
1 United States French
2 Not used Not used
3 Canada OR United States English
4 Germany English
5 United Kingdom French
6 United States German
7 United Kingdom English
8 United Kingdom English
9 Not used Not used
10 United States French
11 United States English
12 United Kingdom English
13 United States French
14 Not used English
15 Not used English
16 United States French
17 United States Not used
18 Not used English
19 United States German
```
I want to add a column `top_country` that shows whether the value in `country_market` is one of the top two most commonly seen countries in the data. If it is, I want the new `top_country` column to show the value in `country_market` and if not, then I want it to show "Other". I want to repeat this process for `language_market` (and a whole load of other market columns I don't show here).
This is how I'd like the data to look after processing:
```
country_market language_market top_country top_language
0 United States English United States English
1 United States French United States French
2 Not used Not used Not used Other
3 Canada OR United States English Other English
4 Germany English Other English
5 United Kingdom French Other French
6 United States German United States Other
7 United Kingdom English Other English
8 United Kingdom English Other English
9 Not used Not used Not used Other
10 United States French United States French
11 United States English United States English
12 United Kingdom English Other English
13 United States French United States French
14 Not used English Not used English
15 Not used English Not used English
16 United States French United States French
17 United States Not used United States Other
18 Not used English Not used English
19 United States German United States Other
```
I made a function `original_top_markets_function` to do this, but I couldn't figure out how to pass the `value_counts` part of my function to pandas `apply`. I kept getting `AttributeError: 'str' object has no attribute 'value_counts'`.
```
def original_top_markets_function(x):
top2 = x.value_counts().nlargest(2).index
for i in x:
if i in top2:
return i
else:
return 'Other'
```
I know this is because `apply` is looking at each element in my target column, but I also need the function to consider the whole column at once, so that I can use `value_counts`. I don't know how to do that.
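To illustrate, here's a minimal reproduction of the error, separate from my real code: `apply` on a Series hands the function one scalar element at a time, so inside the function there is no Series to call `value_counts` on.

```python
import pandas as pd

s = pd.Series(['a', 'b', 'a'])
msg = None
try:
    # apply passes each scalar element, not the whole Series
    s.apply(lambda v: v.value_counts())
except AttributeError as err:
    msg = str(err)
print(msg)  # 'str' object has no attribute 'value_counts'
```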
So I have come up with this `top_markets` function as a solution, using a list, which does what I want, but isn't very efficient. I'll need to apply this function to lots of different market columns, so I'd like something more pythonic.
```
def top_markets(x):
top2 = x.value_counts().nlargest(2).index
results = []
for i in x:
if i in top2:
results.append(i)
else:
results.append('Other')
return results
```
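For comparison, the same keep-the-top-two-else-"Other" logic can be written column-at-once with `isin` and `where` (the name `top_markets_vec` is just illustrative, and the sample data below is made up):

```python
import pandas as pd

def top_markets_vec(x):
    # keep values that are among the two most frequent; replace everything else
    top2 = x.value_counts().nlargest(2).index
    return x.where(x.isin(top2), 'Other')

s = pd.Series(['US', 'US', 'US', 'UK', 'UK', 'DE'])
print(top_markets_vec(s).tolist())  # ['US', 'US', 'US', 'UK', 'UK', 'Other']
```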
Here's a reproducible example. Please can someone help me fix my `top_markets` function so I can use it with `apply`?
```
import pandas as pd
d = {0: {'country_market': 'United States', 'language_market': 'English'},
1: {'country_market': 'United States', 'language_market': 'French'},
2: {'country_market': 'Not used', 'language_market': 'Not used'},
3: {'country_market': 'Canada OR United States',
'language_market': 'English'},
4: {'country_market': 'Germany', 'language_market': 'English'},
5: {'country_market': 'United Kingdom', 'language_market': 'French'},
6: {'country_market': 'United States', 'language_market': 'German'},
7: {'country_market': 'United Kingdom', 'language_market': 'English'},
8: {'country_market': 'United Kingdom', 'language_market': 'English'},
9: {'country_market': 'Not used', 'language_market': 'Not used'},
10: {'country_market': 'United States', 'language_market': 'French'},
11: {'country_market': 'United States', 'language_market': 'English'},
12: {'country_market': 'United Kingdom', 'language_market': 'English'},
13: {'country_market': 'United States', 'language_market': 'French'},
14: {'country_market': 'Not used', 'language_market': 'English'},
15: {'country_market': 'Not used', 'language_market': 'English'},
16: {'country_market': 'United States', 'language_market': 'French'},
17: {'country_market': 'United States', 'language_market': 'Not used'},
18: {'country_market': 'Not used', 'language_market': 'English'},
19: {'country_market': 'United States', 'language_market': 'German'}}
df = pd.DataFrame.from_dict(d, orient='index')
def top_markets(x):
top2 = x.value_counts().nlargest(2).index
results = []
for i in x:
if i in top2:
results.append(i)
else:
results.append('Other')
return results
df['top_country'] = top_markets(df['country_market'])
df['top_language'] = top_markets(df['language_market'])
df
``` | 2021/10/15 | [
"https://Stackoverflow.com/questions/69583271",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5269252/"
] | There is no plugin right now that does exactly this, but as a workaround we do have [volume\_watcher: ^2.0.1](https://pub.dev/packages/volume_watcher), which gives a callback when the volume is changed.
```
VolumeWatcher.addListener((volume) {
print("Current Volume :" + volume.toString());
})!;
```
***Note:*** Volume ranges from 0 to 1, where 0 means no volume and 1 means max volume. | I needed the same functionality (listen to volume down, don't change volume when listening) and it didn't exist yet in Flutter, so I made a plugin for it myself; you can find it here: <https://pub.dev/packages/flutter_android_volume_keydown>
It only works on Android because overriding iOS hardware buttons is not allowed by the app store guidelines. | 221 |
46,736,529 | How can I compute the time differential between two time zones in Python? That is, I don't want to compare TZ-aware `datetime` objects and get a `timedelta`; I want to compare two `TimeZone` objects and get an `offset_hours`. Nothing in the `datetime` library handles this, and neither does [`pytz`](https://pypi.python.org/pypi/pytz). | 2017/10/13 | [
"https://Stackoverflow.com/questions/46736529",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/504550/"
] | Here's another solution:
```
from datetime import datetime
from pytz import timezone
from dateutil.relativedelta import relativedelta
utcnow = timezone('utc').localize(datetime.utcnow()) # generic time
here = utcnow.astimezone(timezone('US/Eastern')).replace(tzinfo=None)
there = utcnow.astimezone(timezone('Asia/Ho_Chi_Minh')).replace(tzinfo=None)
offset = relativedelta(here, there)
offset.hours
```
What we're doing here is converting one moment in time to two different time zones. Then we remove the time zone information, so that when we calculate the difference between the two using `relativedelta`, it is tricked into thinking these are two different moments in time instead of the same moment in different time zones.
The above will return -11; however, this amount can change throughout the year, since US/Eastern observes DST and Asia/Ho\_Chi\_Minh does not. | ```
from datetime import datetime
from zoneinfo import ZoneInfo
dt = datetime.now() # 2020-09-13
tz0, tz1 = "Europe/Berlin", "US/Eastern" # +2 vs. -4 hours rel. to UTC
utcoff0, utcoff1 = dt.astimezone(ZoneInfo(tz0)).utcoffset(), dt.astimezone(ZoneInfo(tz1)).utcoffset()
print(f"hours offset between {tz0} -> {tz1} timezones: {(utcoff1-utcoff0).total_seconds()/3600}")
>>> hours offset between Europe/Berlin -> US/Eastern timezones: -6.0
```
* a way to do this with **Python 3.9**'s standard library. | 222 |
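Because the answer above uses `datetime.now()`, its output depends on when you run it. Pinning the instant makes the result reproducible and also shows the DST dependence; here is a sketch (the helper name `offset_hours` and the chosen dates are illustrative):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def offset_hours(tz0, tz1, when):
    # hours to add to tz0 wall time to get tz1 wall time at instant `when`
    off0 = when.astimezone(ZoneInfo(tz0)).utcoffset()
    off1 = when.astimezone(ZoneInfo(tz1)).utcoffset()
    return (off1 - off0).total_seconds() / 3600

winter = datetime(2021, 1, 15, tzinfo=timezone.utc)
summer = datetime(2021, 7, 15, tzinfo=timezone.utc)
print(offset_hours("US/Eastern", "Asia/Ho_Chi_Minh", winter))  # 12.0 (EST)
print(offset_hours("US/Eastern", "Asia/Ho_Chi_Minh", summer))  # 11.0 (EDT)
```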
28,744,759 | I have a question concerning stdin buffer content inspection.
This acclaimed line of code:
```
int c; while((c = getchar()) != '\n' && c != EOF);
```
deals efficiently with discarding stdin-buffer garbage, in case garbage is found. In case the buffer is empty, the program execution won't go past it.
Is there a way of checking if there is garbage in the stdin-buffer at all (no matter if it's there by user error, typeahead or whichever reason), and executing the "fflush-replacement line" from above only in case garbage is found?
I'd prefer to keep it all programmatically in plain-UNIX-flavor standard C, without having to use special parsing tools: no yacc, bison, Python, Ruby, shell scripts, etc., and no Windows API, please.
Thanks in advance!
**UPDATE:**
I hope this example tells a bit more of my question:
```
//...
//this line should make sure stdin buffer is free from accidentally typed content
int c; while (( c = getchar()) != '\n' && c != EOF);
//this line won't show in case buffer is already clean
printf("Please enter an arbitrary number of float or symbolic values:\n");
//this line should read the real input user is being asked for
char* p = fgets(text, TEXT_SIZE, stdin);
if(p != NULL)
parse_and_process(text);
//...
```
The problem happens when there is no accidental input. The "garbage" is here considered anything that may stay in the buffer at the moment *printf( )* prompt would appear. Is there a way of getting around the first line in case the buffer is already clean? | 2015/02/26 | [
"https://Stackoverflow.com/questions/28744759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3078414/"
] | >
> You can not give a background color into `include` tag.
>
>
>
**Why ?**
It's obvious: if you could give a background color to the `include` tag, it would get mixed up with any other color that might already be applied to the included `layout`.
However, you can also override all the layout parameters (any android:layout\_\* attributes) of the included layout's root view by specifying them in the tag.
(quoting from [https://developer.android.com/training/improving-layouts/reusing-layouts.html#Includ](https://developer.android.com/training/improving-layouts/reusing-layouts.html#Include) ) | Try this:
```
<include
android:id="@+id/list_item_section_text"
android:layout_width="fill_parent"
android:layout_height="match_parent"
layout="@android:layout/preference_category"/>
```
in preference category layout:
```
<LinearLayout
android:id="@+id/preference_category"
android:layout_width="fill_parent"
android:layout_height="match_parent"
android:background="@colors/white"/>
```
Otherwise, change it at runtime:
```
preference_category.setBackgroundResource(R.color.white);
``` | 232 |