qid (int64, 46k–74.7M) | question (string, length 54–37.8k) | date (string, length 10) | metadata (sequence, length 3) | response_j (string, length 29–22k) | response_k (string, length 26–13.4k) | __index_level_0__ (int64, 0–17.8k)
---|---|---|---|---|---|---|
62,376,571 | I want to read an array of integers from a single line in Python 3, where the size of the array is given.
Like reading this into a list:
```
5 //size
1 2 3 4 5 //input in one line
```
**I have tried this:**
```
arr = list(map(int, input().split()))
```
but I don't know how to apply the given size.
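For reference, a common pattern is to read the size on its own line first and then keep exactly that many of the values that follow (a minimal sketch):
```
n = int(input())                            # first line: the size
arr = list(map(int, input().split()))[:n]   # second line: the values
```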
**Please help**
I am new to Python 3. | 2020/06/14 | [
"https://Stackoverflow.com/questions/62376571",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12206760/"
] | Since the framework itself has exposed a method for something that can also be done through vanilla JavaScript, it certainly has added advantages. One of the scenarios I can think of is React.forwardRef, which can be used for:
* Forwarding refs to DOM components
* Forwarding refs in higher-order-components
As explained in the React docs itself:
```
const FancyButton = React.forwardRef((props, ref) => (
<button ref={ref} className="FancyButton">
{props.children}
</button>
));
// You can now get a ref directly to the DOM button:
const ref = React.createRef();
<FancyButton ref={ref}>Click me!</FancyButton>;
``` | You don't need React or Angular to do any web development; Angular and React give us a wrapper that tries to provide optimized, reusable components.
All the components we develop using React could be built with Web Components, but older browsers don't support this.
**I am listing some benefits of using refs in React:**
1. They are helpful if you are using server-side rendering. Be careful when you use Web APIs, since they won't work on the server side (e.g. `document`, `window`, `localStorage`).
2. You can easily listen to changes in an object's state; you don't have to maintain or query the DOM again and again, since the framework does this for you. | 725 |
21,617,416 | I just started working with python + splinter
<http://splinter.cobrateam.info/docs/tutorial.html>
Unfortunately I can't get the example to work.
I cannot tell if:
```
browser.find_by_name('btnG')
```
is finding anything.
Second, I try to click the button with
```
button = browser.find_by_name('btnG').first
button.click()
```
This does not throw an error but NOTHING HAPPENS.
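A minimal way to check whether the lookup actually matched anything (a sketch, assuming splinter's ElementList API):
```
matches = browser.find_by_name('btnG')
print(len(matches))           # 0 means the element was not found
if not matches.is_empty():
    matches.first.click()
```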
I tried again with the tutorial:
<http://f.souza.cc/2011/05/splinter-python-tool-for-acceptance.html>
and I got stuck again with the CLICK.
I am using python 2.7.3, and the latest stuff from splinter/selenium today.
How can I troubleshoot this problem (is anyone else having this problem)? | 2014/02/07 | [
"https://Stackoverflow.com/questions/21617416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1639926/"
] | When all else fails, update Firefox.
I upgraded my Jan 9, 2014 version and things could click! | The name of the button in my (Chromium) browser as of now is `'btnK'`. | 726 |
46,415,102 | I am running a Node.js server on port 8080, so my server can only process one request at a time.
I can see that if I send multiple requests in one single shot, new requests are queued and executed sequentially, one after another.
What I am trying to find out is how to run multiple instances/threads of this process, like gunicorn does for Python servers. Is there something similar, instead of running the Node.js server on multiple ports, one for each instance?
I have placed nginx in front of the node process. Is that a sufficient and recommended method?
```
worker_processes auto;
worker_rlimit_nofile 1100;
events {
worker_connections 1024;
multi_accept on;
use epoll;
}
pid /var/run/nginx.pid;
http {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
server {
listen 80;
server_name localhost;
access_log /dev/null;
error_log /dev/null;
location / {
proxy_pass http://localhost:8080;
}
}
}
``` | 2017/09/25 | [
"https://Stackoverflow.com/questions/46415102",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7999545/"
] | **First off, make sure your node.js process is ONLY using asynchronous I/O.** If it's not compute intensive and using asynchronous I/O, it should be able to have many different requests "in-flight" at the same time. The design of node.js is particularly good at this if your code is designed properly. If you show us the crux of what it is doing on one of these requests, we can advise more specifically on whether your server code is designed properly for best throughput.
**Second, instrument and measure, measure, measure.** Understand where your bottlenecks are in your existing node.js server and what is causing the delay or sequencing you see. Sometimes there are ways to dramatically fix/improve your bottlenecks before you start adding lots more clusters or servers.
**Third, use the [node.js cluster module](https://nodejs.org/api/cluster.html).** This will create one master node.js process that automatically balances between several child processes. You generally want to create a cluster child for each actual CPU core you have in your server computer, since that will get you the most use out of your CPU.
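For illustration, a minimal sketch of the cluster pattern (port and handler assumed):
```
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // Fork one worker per CPU core; the master balances incoming connections.
  os.cpus().forEach(() => cluster.fork());
} else {
  // Each worker runs its own server instance on the same port.
  http.createServer((req, res) => {
    res.end('handled by worker ' + process.pid);
  }).listen(8080);
}
```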
**Fourth, if you need to scale to the point of multiple actual server computers, then you would use either a load balancer or a reverse proxy such as nginx to share the load among multiple hosts.** If you had a quad-core CPU in each server, you could run a cluster of four node.js processes on each server computer and then use nginx to balance among the several server boxes you had.
Note that adding multiple hosts that are load balanced by nginx is the last option here, not the first option. | Like @poke said, you would use a reverse proxy and/or a load balancer in front.
But if you want software that runs multiple instances of node, with balancing and other features, you should check out pm2.
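A minimal usage sketch (entry file name assumed; the link below documents the flags):
```
pm2 start server.js -i max   # cluster mode: fork one instance per CPU core
pm2 list                     # show the running instances
pm2 reload server            # zero-downtime restart of all instances
```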
<http://pm2.keymetrics.io/> | 727 |
37,883,759 | When running my python selenium script with Chrome driver I get about three of the below error messages every time a page loads even though everything works fine. Is there a way to suppress these messages?
>
> [24412:18772:0617/090708:ERROR:ssl\_client\_socket\_openssl.cc(1158)]
> handshake failed; returned -1, SSL error code 1, net\_error -100
>
>
> | 2016/06/17 | [
"https://Stackoverflow.com/questions/37883759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1998220/"
] | You get this error when the browser asks you to accept the certificate from a website. You can choose to ignore these errors by default in order to avoid them.
For Chrome, you need to add ***--ignore-certificate-errors*** and
***--ignore-ssl-errors*** as ChromeOptions() arguments:
```
options = webdriver.ChromeOptions()
options.add_argument('--ignore-certificate-errors')
options.add_argument('--ignore-ssl-errors')
driver = webdriver.Chrome(chrome_options=options)
```
For the Firefox, you need to set ***accept\_untrusted\_certs*** FirefoxProfile() option to True:
```
profile = webdriver.FirefoxProfile()
profile.accept_untrusted_certs = True
driver = webdriver.Firefox(firefox_profile=profile)
```
For the Internet Explorer, you need to set ***acceptSslCerts*** desired capability:
```
capabilities = webdriver.DesiredCapabilities().INTERNETEXPLORER
capabilities['acceptSslCerts'] = True
driver = webdriver.Ie(capabilities=capabilities)
``` | I was facing the same problem. The problem was that I had set the `webdriver.chrome.driver` system property to chrome.exe. But one should download `chromedriver.exe` and set its file path as the value of the `webdriver.chrome.driver` system property.
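That system property applies to the Java bindings; for the Python bindings used in the question, a roughly equivalent sketch (path assumed):
```
from selenium import webdriver

# Point Selenium at the chromedriver binary itself, not at chrome.exe
driver = webdriver.Chrome(executable_path='/path/to/chromedriver')
```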
Once this is set, everything started working fine. | 730 |
29,159,657 | I am a beginner in Python. I want to know if there is any built-in function or other way I can achieve the below in Python 2.7:
Find every **-letter** in the list and its sublists and replace it with **['not', letter]**.
E.g.: find all items in the below list starting with - and replace them with ['not', letter]:
```
Input : ['and', ['or', '-S', 'Q'], ['or', '-S', 'R'], ['or', ['or', '-Q', '-R'], '-S']]
Output : ['and', ['or', ['not','S'], 'Q'], ['or', ['not','S'], 'R'], ['or', ['or', ['not','Q'], ['not','R']], ['not','S']]]
```
Can anyone suggest how to do it in Python?
Thanks | 2015/03/20 | [
"https://Stackoverflow.com/questions/29159657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/960970/"
] | Try a bit of recursion:
```
def change(lol):
for index,item in enumerate(lol):
if isinstance(item, list):
change(item)
elif item.startswith('-'):
lol[index] = ['not',item.split('-')[1]]
return lol
```
In action:
```
In [24]: change(['and', ['or', '-S', 'Q'], ['or', '-S', 'R'], ['or', ['or', '-Q', '-R'], '-S']])
Out[24]:
['and',
['or', ['not', 'S'], 'Q'],
['or', ['not', 'S'], 'R'],
['or', ['or', ['not', 'Q'], ['not', 'R']], ['not', 'S']]]
``` | You need to use a recursive function.The `isinstance(item, str)` simply checks to see if an item is string.
```
def dumb_replace(lst):
for ind, item in enumerate(lst):
if isinstance(item, str):
if item.startswith('-'):
lst[ind] = ['not', 'letter']
else:
dumb_replace(item)
```
And:
```
dumb_replace(Input)
```
Gives:
```
['and', ['or', ['not', 'letter'], 'Q'], ['or', ['not', 'letter'], 'R'], ['or', ['or', ['not', 'letter'], ['not', 'letter']], ['not', 'letter']]]
``` | 740 |
14,286,200 | I'm building a website using pyramid, and I want to fetch some data from other websites. Because there may be 50+ calls of `urlopen`, I wanted to use gevent to speed things up.
Here's what I've got so far using gevent:
```
import urllib2
from gevent import monkey; monkey.patch_all()
from gevent import pool
gpool = pool.Pool()
def load_page(url):
response = urllib2.urlopen(url)
html = response.read()
response.close()
return html
def load_pages(urls):
return gpool.map(load_page, urls)
```
Running `pserve development.ini --reload` gives:
`NotImplementedError: gevent is only usable from a single thread`.
I've read that I need to monkey patch before anything else, but I'm not sure where the right place is for that. Also, is this a pserve-specific issue? Will I need to re-solve this problem when I move to [mod\_wsgi](http://docs.pylonsproject.org/projects/pyramid/en/latest/tutorials/modwsgi/index.html)? Or is there a way to handle this use-case (just urlopen) without gevent? I've seen suggestions for [requests](http://docs.python-requests.org/en/latest/) but I couldn't find an example of fetching multiple pages in the docs.
### Update 1:
I also tried eventlet from [this SO question](https://stackoverflow.com/a/2361129/312364) (almost directly copied from this eventlet [example](http://eventlet.net/doc/design_patterns.html#client-pattern)):
```
import eventlet
from eventlet.green import urllib2
def fetch(url):
return urllib2.urlopen(url).read()
def fetch_multiple(urls):
pool = eventlet.GreenPool()
return pool.imap(fetch, urls)
```
However when I call `fetch_multiple`, I'm getting `TypeError: request() got an unexpected keyword argument 'return_response'`
### Update 2:
The `TypeError` from the previous update was likely from earlier attempts to monkeypatch with gevent and not properly restarting pserve. Once I restarted everything, it works properly. Lesson learned. | 2013/01/11 | [
"https://Stackoverflow.com/questions/14286200",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | There are multiple ways to do what you want:
* Create a dedicated `gevent` thread, and explicitly dispatch all of your URL-opening jobs to that thread, which will then do the gevented `urlopen` requests.
* Use threads instead of greenlets. Running 50 threads isn't going to tax any modern OS.
* Use a thread pool and a queue. There's usually not much advantage to doing 50 downloads at the same time instead of, say, 8 at a time (as your browser probably does).
* Use a different async framework instead of `gevent`, one that doesn't work by magically greenletifying your code.
* Use a library that has its own non-magic async support, like `pycurl`.
* Instead of mixing and matching incompatible frameworks, build the server around `gevent` too, or find some other framework that works for both your web-serving and your web-client needs.
You could simulate the last one without changing frameworks by loading `gevent` first, and have it monkeypatch your threads, forcing your existing threaded server framework to become a `gevent` server. But this may not work, or mostly work but occasionally fail, or work but be much slower… Really, using a framework designed to be `gevent`-friendly (or at least greenlet-friendly) is a much better idea, if that's the way you want to go.
You mentioned that others had recommended `requests`. The reason you can't find the documentation is that the built-in async code in `requests` was removed. See, [an older version](http://docs.python-requests.org/en/v0.10.6/user/advanced/#asynchronous-requests) for how it was used. It's now available as a separate library, [`grequests`](https://github.com/kennethreitz/grequests). However, it works by implicitly wrapping `requests` with `gevent`, so it will have exactly the same issues as doing so yourself.
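For reference, a minimal sketch of fetching several pages with `grequests` (URLs assumed):
```
import grequests

urls = ['http://example.com/a', 'http://example.com/b']
reqs = (grequests.get(u) for u in urls)  # build unsent request objects
responses = grequests.map(reqs)          # send them concurrently on gevent greenlets
html = [r.text for r in responses if r is not None]
```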
(There are other reasons to use `requests` instead of `urllib2`, and if you want to `gevent` it it's easier to use `grequests` than to do it yourself.) | I've had similar problems with gevent when trying to deploy a web application. The thing you could do that would take the least hassle is to use a WSGI deployment that runs on gevent; examples include gUnicorn, uWSGI, or one of gevent's built-in WSGI servers. Pyramid should have a way of using an alternate deployment. If large portions of your code rely on gevent, it's easier to just use a server that runs on gevent as well.
So, basically the last bullet on the above answer. | 743 |
11,267,347 | I have been [compiling diagrams](https://stackoverflow.com/questions/11253303/how-does-the-java-runtime-environment-compare-with-the-net-framework-in-terms-o) (pun intended) in hope of understanding the different implementations of common programming languages. I understand whether code is compiled or interpreted depends on the implementation of the code, and is not an aspect of the programming language itself.
I am interested in comparing Python interpretation with direct compilation (ex of C++)
[![enter image description here](https://i.stack.imgur.com/YEkNm.png)](https://i.stack.imgur.com/YEkNm.png)
and the virtual machine model (ex Java or C#)
[![enter image description here](https://i.stack.imgur.com/sxsw2.png)](https://i.stack.imgur.com/sxsw2.png)
In light of these two diagrams above, could you please help me develop a similar flowchart of how the .py file is converted to .pyc, uses the standard libraries (I gather they are called modules) and then actually runs? Many programmers on SO indicate that Python, as a scripting language, is not executed by the CPU but rather by the interpreter; but that sounds quite impossible, because ultimately the hardware must be doing the computation. | 2012/06/29 | [
"https://Stackoverflow.com/questions/11267347",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1405543/"
] | For the reference implementation of python:
(.py) -> python (checks for .pyc) -> (.pyc) -> python (execution dynamically loads modules)
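You can trigger the middle step yourself with the standard library's `py_compile` module (a sketch; file name assumed):
```
import py_compile

# Compiles script.py and writes the bytecode next to it as script.pyc
py_compile.compile('script.py')
```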
There are [other implementations](http://wiki.python.org/moin/PythonImplementations). Most notable are:
* [jython](http://www.jython.org/) which compiles (.py) to (.class) and follows the java pattern from there
* [pypy](http://pypy.org/) which employs a [JIT](http://en.wikipedia.org/wiki/Just-in-time_compilation) as it compiles (.py). The chain from there could vary (pypy could be run in cpython, jython or .net environments) | Python is technically a scripting language, but it is also compiled: Python source is taken from its source file and fed into the interpreter, which often compiles the source to bytecode, either internally (and then throws it away) or externally (and saves it as a .pyc).
Yes, Python runs on a virtual machine that sits on top of the actual hardware, but all Python bytecode is just a series of instructions for the PVM (Python virtual machine), much like assembler for the actual CPU. | 744 |
57,395,610 | I'm creating a REST API for my Django app. I have a function that returns a list of dictionaries, which I would like to serialize and return with the REST API.
The list (nodes_of_graph) looks like this:
[{'id': 50, 'position': {'x': 99.0, 'y': 234.0}, 'locked': True}, {'id': 62, 'position': {'x': 27.0, 'y': 162.0}, 'locked': True}, {'id': 64, 'position': {'x': 27.0, 'y': 162.0}, 'locked': True}]
Since I'm a rookie with Python, Django, and the REST framework, I have no clue how to attempt this. Is there anybody here who knows how to tackle it?
Somehow all my attempts to serialize this list have failed. I've tried:
```py
class Graph_Node_Serializer(serializers.ListSerializer):
class Nodes:
fields = (
'id',
'position',
'locked',
)
def nodes_for_graph(request, id):
serializer = Graph_Node_Serializer(nodes_of_graph)
return Response(serializer.data)
```
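For reference, a sketch of one common approach, using a plain `Serializer` with `many=True` (field types assumed from the sample data):
```py
from rest_framework import serializers

class GraphNodeSerializer(serializers.Serializer):
    id = serializers.IntegerField()
    position = serializers.DictField(child=serializers.FloatField())
    locked = serializers.BooleanField()

# in the view:
# serializer = GraphNodeSerializer(nodes_of_graph, many=True)
# return Response(serializer.data)
```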
The result I hope for is a response from the django-rest-framework containing the data in the list of dictionaries. | 2019/08/07 | [
"https://Stackoverflow.com/questions/57395610",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11895645/"
] | You can register shortcut events on the page (such as MainPage).
```cs
public MainPage()
{
this.InitializeComponent();
Window.Current.Dispatcher.AcceleratorKeyActivated += AccelertorKeyActivedHandle;
}
private async void AccelertorKeyActivedHandle(CoreDispatcher sender, AcceleratorKeyEventArgs args)
{
if (args.EventType.ToString().Contains("Down"))
{
var ctrl = Window.Current.CoreWindow.GetKeyState(Windows.System.VirtualKey.Control);
if (ctrl.HasFlag(CoreVirtualKeyStates.Down))
{
if (args.VirtualKey == Windows.System.VirtualKey.Number1)
{
// add the content in textbox
}
}
}
}
```
This registration method is global; you can run the related functions whenever the trigger condition is met.
Best regards. | Try writing a function in your code which is triggered when a specific set of keys are pressed together.
For example, if you want to print an emoji when the user presses "Ctrl + 1", write a function or a piece of code that is triggered when Ctrl and 1 are pressed together and appends the emoji to the text in the multiline textbox at the cursor position.
I hope this will help. | 747 |
54,040,018 | I have a requirement of testing OSPF v2 and OSPF v3 routing protocols against their respective RFCs. Scapy module for python seems interesting solution to craft OSPF packets, but are there any open source OSPF libraries over scapy that one could use to create the test cases. Would appreciate any pointers in this direction. | 2019/01/04 | [
"https://Stackoverflow.com/questions/54040018",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6192859/"
] | You should use the usual `tput` program for producing the correct escape sequences for the actual terminal, rather than hard-coding specific strings (that look ugly in an Emacs compilation buffer, for example):
```
printf-bold-1:
@printf "normal text - `tput bold`bold text`tput sgr0`"
.PHONY: printf-bold-1
```
Of course, you may store the result into a Make variable, to reduce the number of subshells:
```
bold := $(shell tput bold)
sgr0 := $(shell tput sgr0)
printf-bold-1:
@printf 'normal text - $(bold)bold text$(sgr0)'
.PHONY: printf-bold-1
] | Ok, I got it. I should have used `\033` instead of `\e` or `\x1b`:
```
printf-bold-1:
@printf "normal text - \033[1mbold text\033[0m"
```
Or, as suggested in the comments, use simple quotes instead of double quotes :
```
printf-bold-1:
@printf 'normal text - \e[1mbold text\e[0m'
```
`make printf-bold-1` now produces :
>
> normal text - **bold text**
>
>
> | 748 |
1,171,926 | I'm trying to program a pyramid like score system for an ARG game and have come up with a problem. When users get into the game they start a new "pyramid" but if one start the game with a referer code from another player they become a child of this user and then kick points up the ladder.
The issue here is not the point calculation, I've gotten that right with some good help from you guys, but if a user gets more point that it parent, they should switch places in the ladder. So that a users parent becomes it's child and so on.
The Python code I have now doesn't work properly, and I don't really know why.
```
def verify_parents(user):
"""
This is a recursive function that checks to see if any of the parents
should bump up because they got more points than its parent.
"""
from rottenetter.accounts.models import get_score
try:
parent = user.parent
except:
return False
else:
# If this players score is greater than its parent
if get_score(user) > get_score(parent):
# change parent
user.parent = parent.parent
user.save()
# change parent's parent to current profile
parent.parent = user
parent.save()
verify_parents(parent)
```
In my mind this should work, if a user has a parent, check to see if the user got more points than its parent, if so, set the users parent to be the parents parent, and set the parents parent to be the user, and then they have switched places. And after that, call the same function with the parent as a target so that it can check it self, and then continue up the ladder.
But this doesn't always work; in some cases people aren't bumped to the right position, for some reason.
Edit:
When a user steps up or down a step in the ladder, its children move with it, so they still relate to the same parent, unless they too get more points and step up. So it should be unnecessary to do anything with the parents, shouldn't it?
"https://Stackoverflow.com/questions/1171926",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/42546/"
] | In C, that would have been more or less legal.
In C++, functions typically shouldn't do that. You should try to use [RAII](http://en.wikipedia.org/wiki/RAII) to guarantee memory doesn't get leaked.
And now you might say "how would it leak memory, I call `delete[]` just there!", but what if an exception is thrown at the `// ...` lines?
Depending on what exactly the functions are meant to do, you have several options to consider. One obvious one is to replace the array with a vector:
```
std::vector<char> f();
std::vector<char> data = f();
int data_length = data.size();
// ...
//delete[] data;
```
and now we no longer need to explicitly delete, because the vector is allocated on the stack, and its destructor is called when it goes out of scope.
I should mention, in response to comments, that the above implies a *copy* of the vector, which could potentially be expensive. Most compilers will, if the `f` function is not too complex, optimize that copy away so this will be fine. (and if the function isn't called too often, the overhead won't matter *anyway*). But if that doesn't happen, you could instead pass an empty array to the `f` function by reference, and have `f` store its data in that instead of returning a new vector.
If the performance of returning a copy is unacceptable, another alternative would be to decouple the choice of container entirely, and use iterators instead:
```
// definition of f
template <typename iter>
void f(iter out);
// use of f
std::vector<char> vec;
f(std::back_inserter(vec));
```
Now the usual iterator operations can be used (`*out` to reference or write to the current element, and `++out` to move the iterator forward to the next element) -- and more importantly, all the standard algorithms will now work. You could use `std::copy` to copy the data to the iterator, for example. This is the approach usually chosen by the standard library (ie. it is a good idea;)) when a function has to return a sequence of data.
Another option would be to make your own object taking responsibility for the allocation/deallocation:
```
struct f { // simplified for the sake of example. In the real world, it should be given a proper copy constructor + assignment operator, or they should be made inaccessible to avoid copying the object
f(){
// do whatever the f function was originally meant to do here
size = ???
data = new char[size];
}
~f() { delete[] data; }
int size;
char* data;
};
f data;
int data_length = data.size;
// ...
//delete[] data;
```
And again we no longer need to explicitly delete because the allocation is managed by an object on the stack. The latter is obviously more work, and there's more room for errors, so if the standard vector class (or other standard library components) do the job, prefer them. This example is only if you need something customized to your situation.
The general rule of thumb in C++ is that "if you're writing a `delete` or `delete[]` outside a RAII object, you're doing it wrong. If you're writing a `new` or `new[]` outside a RAII object, you're doing it wrong, unless the result is immediately passed to a smart pointer" | Use RAII (Resource Acquisition Is Initialization) design pattern.
<http://en.wikipedia.org/wiki/RAII>
[Understanding the meaning of the term and the concept - RAII (Resource Acquisition is Initialization)](https://stackoverflow.com/questions/712639/please-help-us-non-c-developers-understand-what-raii-is) | 749 |
52,019,077 | ```
from bs4 import BeautifulSoup
import requests
url = "https://www.104.com.tw/job/?jobno=5mjva&jobsource=joblist_b_relevance"
r = requests.get(url)
r.encoding = "utf-8"
print(r.text)
```
I want to reach the content in the div (`class="content"`), in the `p` tag,
but when I print out `r.text`, a big part of it is missing.
But I also found that if I open a text file and write the response into it, the content is all there:
```
doc = open("file104.txt", "w", encoding="utf-8")
doc.write(r.text)
doc.close()
```
I guess it might be an encoding problem? But it is still not working after I encoded it in utf-8.
Sorry everybody!
===========================================================================
I finally found the problem, which comes from the IPython IDLE: everything is fine if I run the code in PowerShell. I should have tried this earlier....
But I still want to know what causes this problem! | 2018/08/25 | [
"https://Stackoverflow.com/questions/52019077",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10273637/"
] | In your table you could have a field for a count. When a user logs in and the login is wrong, add 1 to the count. When the user logs in successfully, reset the count. If the count reaches 3, reset the code. | I understand from your question that you need the logic to make the random code expire after the interacting users on your website have inserted it 3 times; assuming that, as long as the code is not expired, they will be able to do their inserts and you may load it on your page.
I would do that through database queries.
**Please follow the instructions listed below.**
**Instructions:**
When your PHP page generates the random code, you may store it in a database table with an auto reference key. For instance, assume you have randomly generated a code as below:
"Some random code here"
Say the above code generated by your PHP page is loaded from a MySQL table called `Random_Generated_Code`. I would edit this table and add a new field called `generated_Code_Reference_Key` (it could be an auto serial number) to avoid any duplication, as well as an additional field called `Expire_Flag`, which we are going to use later.
Once your page has loaded the above example code, retrieve the `generated_Code_Reference_Key` along with it and keep it in a hidden variable on your page.
The code should be loaded on the page with the `expire_Flag` value as a condition:
select generated_code from Random_Generated_Code where expire_flag = ""
Now define another table in your database, let's call it `inserted_Codes_by_users`. Each time a user inserts that generated code, store the username of whoever is doing that on your website, together with the `generated_Code_Reference_Key` we are keeping in the hidden variable mentioned earlier, to indicate which code was used while inserting.
Now, during page load or any event you want, you can find expired codes by making a select statement on the `inserted_Codes_by_users` table:
select count(generated_Code_Reference_Key) as The_Code_Used_Qty from inserted_Codes_by_users where username = username_of_that_user
This gives you how many times this user has inserted this specific generated random code.
Retrieve the result of the query into a variable, let's call it `The_Code_Used_Qty`, and make an if condition on the page-load event or any event you like:
if The_Code_Used_Qty = 3 then
fire an update statement to the first table which loaded that random generated code, and update the expire_flag field for that code to expired, based on the reference key:
update Random_Generated_Code set expire_Flag = "expired" where generated_Code_Reference_Key = "generated_Code_Reference_Key" << the one you stored in the hidden variable
end if
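Putting those statements together, a hedged SQL sketch (table and column names as assumed above; `?` stands for the values your PHP code supplies):
```
-- load a code that has not expired yet
SELECT generated_code
FROM Random_Generated_Code
WHERE expire_flag = '';

-- count how many times this user has submitted that code
SELECT COUNT(generated_Code_Reference_Key) AS The_Code_Used_Qty
FROM inserted_Codes_by_users
WHERE username = ?;

-- once the count reaches 3, mark the code as expired
UPDATE Random_Generated_Code
SET expire_Flag = 'expired'
WHERE generated_Code_Reference_Key = ?;
```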
So now that gets you directly to the point of why we load the `Random_Generated_Code` table the first time with the condition expire_flag = "": it will only retrieve the codes which are not expired.
Hopefully this will help you achieve what you want.
Good luck, and let me know if you need any help or if you face any confusion while reading my answer. | 759 |
25,165,500 | I'm trying to get zipline working with non-US, intraday data, that I've loaded into a pandas DataFrame:
```
BARC HSBA LLOY STAN
Date
2014-07-01 08:30:00 321.250 894.55 112.105 1777.25
2014-07-01 08:32:00 321.150 894.70 112.095 1777.00
2014-07-01 08:34:00 321.075 894.80 112.140 1776.50
2014-07-01 08:36:00 321.725 894.80 112.255 1777.00
2014-07-01 08:38:00 321.675 894.70 112.290 1777.00
```
I've followed moving-averages tutorial [here](http://nbviewer.ipython.org/github/quantopian/zipline/blob/master/docs/tutorial.ipynb), replacing "AAPL" with my own symbol code, and the historical calls with "1m" data instead of "1d".
Then I do the final call using `algo_obj.run(DataFrameSource(mydf))`, where `mydf` is the dataframe above.
However there are all sorts of problems arising related to [TradingEnvironment](https://github.com/quantopian/zipline/blob/master/zipline/finance/trading.py). According to the source code:
```
# This module maintains a global variable, environment, which is
# subsequently referenced directly by zipline financial
# components. To set the environment, you can set the property on
# the module directly:
# from zipline.finance import trading
# trading.environment = TradingEnvironment()
#
# or if you want to switch the environment for a limited context
# you can use a TradingEnvironment in a with clause:
# lse = TradingEnvironment(bm_index="^FTSE", exchange_tz="Europe/London")
# with lse:
# the code here will have lse as the global trading.environment
# algo.run(start, end)
```
However, using the context doesn't seem to fully work. I still get errors, for example stating that my timestamps are before the market open (and indeed, looking at `trading.environment.open_and_close`, the times are for the US market).
**My question is, has anybody managed to use zipline with non-US, intra-day data?** Could you point me to a resource and ideally example code on how to do this?
n.b. I've seen there are some [tests](https://github.com/quantopian/zipline/blob/master/tests/test_tradingcalendar.py) on github that seem related to the trading calendars (tradincalendar\_lse.py, tradingcalendar\_tse.py , etc) - but this appears to only handle data at the daily level. I would need to fix:
* open/close times
* reference data for the benchmark
* and probably more ... | 2014/08/06 | [
"https://Stackoverflow.com/questions/25165500",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2196034/"
] | I've got this working after fiddling around with the tutorial notebook. Code sample below. It's using the DF `mid`, as described in the original question. A few points bear mentioning:
1. **Trading Calendar** I create one manually and assign to `trading.environment`, by using non\_working\_days in *tradingcalendar\_lse.py*. Alternatively you could create one that fits your data exactly (however could be a problem for out-of-sample data). There are two fields that you need to define: `trading_days` and `open_and_closes`.
2. **sim\_params** There is a problem with the default start/end values because they aren't timezone aware. So you *must* create a sim\_params object and pass start/end parameters with a timezone.
3. Also, `run()` must be called with the argument `overwrite_sim_params=False`, as `calculate_first_open`/`close` raise timestamp errors.
I should mention that it's also possible to pass pandas Panel data, with fields open,high,low,close,price and volume in the minor\_axis. But in this case, the former fields are mandatory - otherwise errors are raised.
Note that this code only produces a *daily* summary of the performance. I'm sure there must be a way to get the result at a minute resolution (I thought this was set by `emission_rate`, but apparently it's not). If anybody knows please comment and I'll update the code.
Also, not sure what the api call is to call 'analyze' (i.e. when using `%%zipline` magic in IPython, as in the tutorial, the `analyze()` method gets automatically called. How do I do this manually?)
```py
import pytz
from datetime import datetime
from zipline.algorithm import TradingAlgorithm
from zipline.utils import tradingcalendar
from zipline.utils import tradingcalendar_lse
from zipline.finance.trading import TradingEnvironment
from zipline.api import order_target, record, symbol, history, add_history
from zipline.finance import trading
def initialize(context):
# Register 2 histories that track daily prices,
# one with a 100 window and one with a 300 day window
add_history(10, '1m', 'price')
add_history(30, '1m', 'price')
context.i = 0
def handle_data(context, data):
# Skip first 30 mins to get full windows
context.i += 1
if context.i < 30:
return
# Compute averages
# history() has to be called with the same params
# from above and returns a pandas dataframe.
short_mavg = history(10, '1m', 'price').mean()
long_mavg = history(30, '1m', 'price').mean()
sym = symbol('BARC')
# Trading logic
if short_mavg[sym] > long_mavg[sym]:
# order_target orders as many shares as needed to
# achieve the desired number of shares.
order_target(sym, 100)
elif short_mavg[sym] < long_mavg[sym]:
order_target(sym, 0)
# Save values for later inspection
record(BARC=data[sym].price,
short_mavg=short_mavg[sym],
long_mavg=long_mavg[sym])
def analyze(context,perf) :
perf["pnl"].plot(title="Strategy P&L")
# Create algorithm object passing in initialize and
# handle_data functions
# This is needed to handle the correct calendar. Assume that market data has the right index for tradeable days.
# Passing in env_trading_calendar=tradingcalendar_lse doesn't appear to work, as it doesn't implement open_and_closes
from zipline.utils import tradingcalendar_lse
trading.environment = TradingEnvironment(bm_symbol='^FTSE', exchange_tz='Europe/London')
#trading.environment.trading_days = mid.index.normalize().unique()
trading.environment.trading_days = pd.date_range(start=mid.index.normalize()[0],
end=mid.index.normalize()[-1],
freq=pd.tseries.offsets.CDay(holidays=tradingcalendar_lse.non_trading_days))
trading.environment.open_and_closes = pd.DataFrame(index=trading.environment.trading_days,columns=["market_open","market_close"])
trading.environment.open_and_closes.market_open = (trading.environment.open_and_closes.index + pd.to_timedelta(60*7,unit="T")).to_pydatetime()
trading.environment.open_and_closes.market_close = (trading.environment.open_and_closes.index + pd.to_timedelta(60*15+30,unit="T")).to_pydatetime()
from zipline.utils.factory import create_simulation_parameters
sim_params = create_simulation_parameters(
start = pd.to_datetime("2014-07-01 08:30:00").tz_localize("Europe/London").tz_convert("UTC"), #Bug in code doesn't set tz if these are not specified (finance/trading.py:SimulationParameters.calculate_first_open[close])
end = pd.to_datetime("2014-07-24 16:30:00").tz_localize("Europe/London").tz_convert("UTC"),
data_frequency = "minute",
emission_rate = "minute",
sids = ["BARC"])
algo_obj = TradingAlgorithm(initialize=initialize,
handle_data=handle_data,
sim_params=sim_params)
# Run algorithm
perf_manual = algo_obj.run(mid,overwrite_sim_params=False) # overwrite == True calls calculate_first_open[close] (see above)
``` | @Luciano
You can add `analyze(None, perf_manual)` at the end of your code to automatically run the analyze process. | 760 |
54,119,766 | I am using Python 2.7.
I have a JSON I pull that is always changing when I request it.
I need to pull out `Animal_Target_DisplayName` under Term7, under Relation6, in my dict.
The problem is that sometimes the object Relation6 is in another part of the JSON; it could be nested deeper or appear in another order.
I am trying to create code that can just export the values of the key `Animal_Target_DisplayName`, but nothing is working. It won't even loop down into the nested dict.
Now this can work if I just pull it out using something like `['view']['Term0'][0]['Relation6']`, but remember, the JSON is never returned in the same structure.
Here is the code I am using to get the values of the key `Animal_Target_DisplayName`, but it doesn't seem to loop through my dict and find all the values with that key.
array = []
for d in dict.values():
row = d['Animal_Target_DisplayName']
array.append(row)
```
JSON Below:
```
dict = {
"view":{
"Term0":[
{
"Id":"b0987b91-af12-4fe3-a56f-152ac7a4d84d",
"DisplayName":"Dog",
"FullName":"Dog",
"AssetType1":[
{
"AssetType_Id":"00000000-0000-0000-0000-000000031131",
}
]
},
{
"Id":"ee74a59d-fb74-4052-97ba-9752154f015d",
"DisplayName":"Dog2",
"FullName":"Dog",
"AssetType1":[
{
"AssetType_Id":"00000000-0000-0000-0000-000000031131",
}
]
},
{
"Id":"eb548eae-da6f-41e8-80ea-7e9984f56af6",
"DisplayName":"Dog3",
"FullName":"Dog3",
"AssetType1":[
{
"AssetType_Id":"00000000-0000-0000-0000-000000031131",
}
]
},
{
"Id":"cfac6dd4-0efa-4417-a2bf-0333204f8a42",
"DisplayName":"Animal Set",
"FullName":"Animal Set",
"AssetType1":[
{
"AssetType_Id":"00000000-0000-0000-0001-000400000001",
}
],
"StringAttribute2":[
{
"StringAttribute_00000000-0000-0000-0000-000000003114_Id":"00a701a8-be4c-4b76-a6e5-3b0a4085bcc8",
"StringAttribute_00000000-0000-0000-0000-000000003114_Value":"Desc"
}
],
"StringAttribute3":[
{
"StringAttribute_00000000-0000-0000-0000-000000000262_Id":"a81adfb4-7528-4673-8c95-953888f3b43a",
"StringAttribute_00000000-0000-0000-0000-000000000262_Value":"meow"
}
],
"BooleanAttribute4":[
{
"BooleanAttribute_00000000-0000-0000-0001-000500000001_Id":"932c5f97-c03f-4a1a-a0c5-a518f5edef5e",
"BooleanAttribute_00000000-0000-0000-0001-000500000001_Value":"true"
}
],
"SingleValueListAttribute5":[
{
"SingleValueListAttribute_00000000-0000-0000-0001-000500000031_Id":"ef51dedd-6f25-4408-99a6-5a6cfa13e198",
"SingleValueListAttribute_00000000-0000-0000-0001-000500000031_Value":"Blah"
}
],
"Relation6":[
{
"Animal_Id":"2715ca09-3ced-4b74-a418-cef4a95dddf1",
"Term7":[
{
"Animal_Target_Id":"88fd0090-4ea8-4ae6-b7f0-1b13e5cf3d74",
"Animal_Target_DisplayName":"Animaltheater",
"Animal_Target_FullName":"Animaltheater"
}
]
},
{
"Animal_Id":"6068fe78-fc8e-4542-9aee-7b4b68760dcd",
"Term7":[
{
"Animal_Target_Id":"4e87a614-2a8b-46c0-90f3-8a0cf9bda66c",
"Animal_Target_DisplayName":"Animaltitle",
"Animal_Target_FullName":"Animaltitle"
}
]
},
{
"Animal_Id":"754ec0e6-19b6-4b6b-8ba1-573393268257",
"Term7":[
{
"Animal_Target_Id":"a8986ed5-3ec8-44f3-954c-71cacb280ace",
"Animal_Target_DisplayName":"Animalcustomer",
"Animal_Target_FullName":"Animalcustomer"
}
]
},
{
"Animal_Id":"86b3ffd1-4d54-4a98-b25b-369060651bd6",
"Term7":[
{
"Animal_Target_Id":"89d02067-ebe8-4b87-9a1f-a6a0bdd40ec4",
"Animal_Target_DisplayName":"Animalfact_transaction",
"Animal_Target_FullName":"Animalfact_transaction"
}
]
},
{
"Animal_Id":"ea2e1b76-f8bc-46d9-8ebc-44ffdd60f213",
"Term7":[
{
"Animal_Target_Id":"e398cd32-1e73-46bd-8b8f-d039986d6de0",
"Animal_Target_DisplayName":"Animalfact_transaction",
"Animal_Target_FullName":"Animalfact_transaction"
}
]
}
],
"Relation10":[
{
"TargetRelation_b8b178ff-e957-47db-a4e7-6e5b789d6f03_Id":"aff80bd0-a282-4cf5-bdcc-2bad35ddec1d",
"Term11":[
{
"AnimalId":"3ac22167-eb91-469a-9d94-315aa301f55a",
"AnimalDisplayName":"Animal",
"AnimalFullName":"Animal"
}
]
}
],
"Tag12":[
{
"Tag_Id":"75968ea6-4c9f-43c9-80f7-dfc41b24ec8f",
"Tag_Name":"AnimalAnimaltitle"
},
{
"Tag_Id":"b1adbc00-aeef-415b-82b6-a3159145c60d",
"Tag_Name":"Animal2"
},
{
"Tag_Id":"5f78e4dc-2b37-41e0-a0d3-cec773af2397",
"Tag_Name":"AnimalDisplayName"
}
]
}
]
}
}
```
The output I am trying to get is a list of all the values of the key `Animal_Target_DisplayName`, like this: `['Animaltheater','Animaltitle', 'Animalcustomer', 'Animalfact_transaction', 'Animalfact_transaction']`. But we need to remember that the nested structure of this JSON always changes, while its keys always stay the same. | 2019/01/09 | [
"https://Stackoverflow.com/questions/54119766",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6856433/"
] | I guess your only option is running through the entire dict and get the values of `Animal_Target_DisplayName` key, I propose the following recursive solution:
```py
def run_json(dict_):
animal_target_sons = []
if type(dict_) is list:
for element in dict_:
animal_target_sons.append(run_json(element))
elif type(dict_) is dict:
for key in dict_:
if key=="Animal_Target_DisplayName":
animal_target_sons.append([dict_[key]])
else:
animal_target_sons.append(run_json(dict_[key]))
return [x for sublist in animal_target_sons for x in sublist]
run_json(dict_)
```
Then calling `run_json` returns a list with what you want. By the way, I recommend renaming your JSON variable from `dict` to, for example, `dict_`, since `dict` is the name of Python's built-in dictionary type. | Since you're getting JSON, why not make use of the json module? That will do the parsing for you and allow you to use dictionary functions and features to get the information you need.
```
#!/usr/bin/python2.7
from __future__ import print_function
import json
# _somehow_ get your JSON in as a string. I'm calling it "jstr" for this
# example.
# Use the module to parse it
jdict = json.loads(jstr)
# our dict has keys...
# view -> Term0 -> keys-we're-interested-in
templist = jdict["view"]["Term0"]
results = {}
for _el in range(len(templist)):
if templist[_el]["FullName"] == "Animal Set":
# this is the one we're interested in - and it's another list
moretemp = templist[_el]["Relation6"]
for _k in range(len(moretemp)):
term7 = moretemp[_k]["Term7"][0]
displayName = term7["Animal_Target_DisplayName"]
fullName = term7["Animal_Target_FullName"]
results[fullName] = displayName
print("{0}".format(results))
```
Then you can dump the `results` dict plain, or with pretty-printing:
```
>>> print(json.dumps(results, indent=4))
{
"Animaltitle2": "Animaltitle2",
"Animalcustomer3": "Animalcustomer3",
"Animalfact_transaction4": "Animalfact_transaction4",
"Animaltheater1": "Animaltheater1"
}
``` | 761 |
64,311,719 | I just started learning Selenium and need to verify a login web page using a Jenkins machine in the cloud, which doesn't have a GUI. I managed to run the script successfully on my system, which has a UI. However, when I modified the script to run headless, it fails saying it is unable to locate the element.
My script is as follows:
```
#!/usr/bin/env python3
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from webdriver_manager.chrome import ChromeDriverManager
import time
import argparse
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--window-size=1120, 550')
chrome_options.add_argument('--headless')
chrome_options.add_argument('--disable-gpu')
chrome_options.add_argument('--allow-running-insecure-content')
driver = webdriver.Chrome(ChromeDriverManager().install(), chrome_options=chrome_options)
driver.implicitly_wait(5)
lhip = '13.14.15.16'
user = 'username'
paswd = 'password'
parser = argparse.ArgumentParser()
parser.add_argument('-i', '--lh_ip', type=str, metavar='', default=lhip, help='Public IP of VM' )
parser.add_argument('-u', '--usr', type=str, metavar='', default=user, help='Username for VM')
parser.add_argument('-p', '--pwd', type=str, metavar='', default=paswd, help='Password for VM')
args = parser.parse_args()
lh_url = 'https://' + args.lh_ip + '/login/'
driver.get(lh_url)
try:
if driver.title == 'Privacy error':
driver.find_element_by_id('details-button').click()
driver.find_element_by_id('proceed-link').click()
except:
pass
driver.find_element_by_id('username').send_keys(args.usr)
driver.find_element_by_id('password').send_keys(args.pwd)
driver.find_element_by_id('login-btn').click()
driver.implicitly_wait(10)
try:
if driver.find_element_by_tag_name('span'):
print('Login Failed')
except:
print('Login Successful')
driver.close()
```
The Python script works fine on my system when run without the `chrome_options`. However, upon adding them to run in headless mode, it fails with the following output:
```
[WDM] - Current google-chrome version is 85.0.4183
[WDM] - Get LATEST driver version for 85.0.4183
[WDM] - Driver [/home/ramesh/.wdm/drivers/chromedriver/linux64/85.0.4183.87/chromedriver] found in cache
Traceback (most recent call last):
File "/home/ramesh/practice_python/test_headless.py", line 44, in <module>
driver.find_element_by_id('username').send_keys(args.usr)
File "/home/ramesh/.local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 360, in find_element_by_id
return self.find_element(by=By.ID, value=id_)
File "/home/ramesh/.local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 978, in find_element
'value': value})['value']
File "/home/ramesh/.local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/home/ramesh/.local/lib/python3.6/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="username"]"}
(Session info: headless chrome=85.0.4183.121)
```
Since I have about one day's learning of Selenium, I may be doing something rather silly, so would be very grateful if someone showed me what I've done wrong. I've googled a lot and tried many things but none worked.
Also why is it saying "css selector" when I have only used id for username? | 2020/10/12 | [
"https://Stackoverflow.com/questions/64311719",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9953181/"
] | If the script works perfectly fine without headless mode, there is probably an issue with the window size. Along with specifying the --no-sandbox option, try changing the window size passed to the webdriver:
`chrome_options.add_argument('--window-size=1920,1080')`
This window size worked in my case.
Even if this doesn't work, you might need to add waits, as answered before, since rendering in headless mode works differently than in a browser in UI mode.
Ref for rendering in headless mode - <https://www.toolsqa.com/selenium-webdriver/selenium-headless-browser-testing/> | I would refactor the code to wait until the elements are present on the page:
```
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
WebDriverWait(wd, 10).until(EC.presence_of_element_located((By.ID, 'username'))).send_keys(args.usr)
WebDriverWait(wd, 10).until(EC.presence_of_element_located((By.ID,'password'))).send_keys(args.pwd)
WebDriverWait(wd, 10).until(EC.presence_of_element_located((By.ID, 'login-btn'))).click()
```
Generally usage of `WebDriverWait` in combination with some condition should be preferred to implicit waits or `time.sleep()`. [Here](https://stackoverflow.com/a/28067495/2792888) is explained in details why.
Other things to double-check are whether the elements have the IDs used for the search, and whether these elements are located within an iframe. | 762 |
15,642,581 | I've installed numpy, and when I go to install Matplotlib it fails, regardless of the method I use to install it. Below are the errors I receive.
```
gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -arch i386 -arch x86_64 -g -O2 -
DNDEBUG -g -O3 -DPY_ARRAY_UNIQUE_SYMBOL=MPL_ARRAY_API -DPYCXX_ISO_CPP_LIB=1 -
I/usr/local/include -I/usr/include -I/usr/X11/include -I/opt/local/include -
I/usr/local/include -I/usr/include -I. -
I/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include -I. -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c src/_png.cpp -o build/temp.macosx-10.6-intel-2.7/src/_png.o
src/_png.cpp:23:20: error: png.h: No such file or directory
src/_png.cpp:66: error: variable or field ‘write_png_data’ declared void
src/_png.cpp:66: error: ‘png_structp’ was not declared in this scope
src/_png.cpp:66: error: ‘png_bytep’ was not declared in this scope
src/_png.cpp:66: error: ‘png_size_t’ was not declared in this scope
src/_png.cpp:23:20: error: png.h: No such file or directory
src/_png.cpp:66: error: variable or field ‘write_png_data’ declared void
src/_png.cpp:66: error: ‘png_structp’ was not declared in this scope
src/_png.cpp:66: error: ‘png_bytep’ was not declared in this scope
src/_png.cpp:66: error: ‘png_size_t’ was not declared in this scope
lipo: can't figure out the architecture type of:
/var/folders/c9/xzv35t2n3ld9lgjrtl0vd0xr0000gn/T//ccwRj4ny.out
error: command 'gcc-4.2' failed with exit status 1
----------------------------------------
Command /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python -
c "import setuptools;__file__='/var/folders/c9/xzv35t2n3ld9lgjrtl0vd0xr0000gn/T/pip-
build/matplotlib/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'),
__file__, 'exec'))" install --record /var/folders/c9/xzv35t2n3ld9lgjrtl0vd0xr0000gn/T/pip-
udXluz-record/install-record.txt --single-version-externally-managed failed with error
code 1 in /var/folders/c9/xzv35t2n3ld9lgjrtl0vd0xr0000gn/T/pip-build/matplotlib
Storing complete log
``` | 2013/03/26 | [
"https://Stackoverflow.com/questions/15642581",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1238230/"
] | ```
public static void Display_Grid(DataGrid d, List<string> S1)
{
ds = new DataSet();
DataTable dt = new DataTable();
ds.Tables.Add(dt);
DataColumn cl = new DataColumn("Item Number", typeof(string));
cl.MaxLength = 200;
dt.Columns.Add(cl);
int i = 0;
foreach (string s in S1)
{
DataRow rw = dt.NewRow();
rw["Item Number"] = S1[i];
dt.Rows.Add(rw); // add the row to the table; without this the rows are lost
i++;
}
d.ItemsSource = ds.Tables[0].AsDataView();
}
``` | add new row in datagrid using observablecollection ItemCollection
```
itemmodel model=new itemmodel ();
model.name='Rahul';
ItemCollection.add(model);
``` | 770 |
60,358,982 | I am getting an **Internal Server Error** and am not sure if I need to change something in the WSGI setup.
The app was working fine when tested in the virtual environment on port 8000.
I followed all the steps using the tutorial <https://www.youtube.com/watch?v=Sa_kQheCnds>
the apache error log shows the following :
```
[Sun Feb 23 02:13:47.329729 2020] [wsgi:error] [pid 2544:tid 140477402474240] [remote xx.xx.xx.xx:59870] mod_wsgi (pid=2544): Target WSGI script '/home/recprydjango/rec$
[Sun Feb 23 02:13:47.329817 2020] [wsgi:error] [pid 2544:tid 140477402474240] [remote xx.xx.xx.xx:59870] mod_wsgi (pid=2544): Exception occurred processing WSGI script $
[Sun Feb 23 02:13:47.330088 2020] [wsgi:error] [pid 2544:tid 140477402474240] [remote xx.xx.xx.xx:59870] Traceback (most recent call last):
[Sun Feb 23 02:13:47.330125 2020] [wsgi:error] [pid 2544:tid 140477402474240] [remote xx.xx.xx.xx:59870] File "/home/recprydjango/recipe/app/wsgi.py", line 12, in <mo$
[Sun Feb 23 02:13:47.330130 2020] [wsgi:error] [pid 2544:tid 140477402474240] [remote xx.xx.xx.xx:59870] from django.core.wsgi import get_wsgi_application
[Sun Feb 23 02:13:47.330148 2020] [wsgi:error] [pid 2544:tid 140477402474240] [remote xx.xx.xx.xx:59870] ModuleNotFoundError: No module named 'django'
```
I have the following structure
```
(venv) recprydjango@recpry-django:~/recipe$ tree
.
├── app
│ ├── __init__.py
│ ├── __pycache__
│ │ ├── ...
│ │ └── wsgi.cpython-37.pyc
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
├── db.sqlite3
├── manage.py
├── media
│ └── images
│ ├── chocolatecake.png
│ └── ...
├── recipe
│ ├── admin.py
│ ├── apps.py
│ ├── forms.py
│ ├── __init__.py
│ ├── models.py
│ ├── __pycache__
│ │ ├── ...
│ │ └── views.cpython-37.pyc
│ ├── tests.py
│ └── views.py
├── requirements.txt
├── static
│ ├── admin
│ │ ├── css/...
│ │ ├── fonts/...
│ │ ├── img/...
│ │ └── js/...
│ └── smart-selects/...
├── templates
│ └── home.html
└── venv
├── bin
│ ├── activate
│ ├── activate.csh
│ ├── activate.fish
│ ├── easy_install
│ ├── easy_install-3.6
│ ├── pip
│ ├── pip3
│ ├── pip3.6
│ ├── python -> python3
│ └── python3 -> /usr/bin/python3
├── include
├── lib
│ └── python3.6
│ └── site-packages
```
settings.py
```
import os
import json
with open('/etc/config.json') as config_file:
config = json.load(config_file)
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = config['SECRET_KEY']
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = False
ALLOWED_HOSTS = ['xx.xx.xx.xx']
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
#--- ADDED APP recipe HERE !!!!
'recipe',
#--- ADDED Smart_Selects HERE !!
'smart_selects',
#Bootstap
'crispy_forms',
'widget_tweaks',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'app.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
# --- TEMPLATE DIRECTORY
'DIRS': [os.path.join(BASE_DIR, "templates")],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'app.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.2/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.2/howto/static-files/
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
STATIC_URL = '/static/'
#--- ADDED THIS SECTION TO UPLOAD PHOTOS !!!! ---
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
MEDIA_URL = '/media/'
#------------------------------------------------
#--- ADDED THIS SECTION FOR SMART SELECTS !!! ---
USE_DJANGO_JQUERY = True
#------------------------------------------------
CRISPY_TEMPLATE_PACK = 'bootstrap4'
```
wsgi.py
```
"""
WSGI config for app project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/2.2/howto/deployment/wsgi/
"""
import os
import sys
sys.path.insert(0, '/home/recprydjango/recipe/')
sys.path.insert(0, '/home/recprydjango/recipe/app/')
sys.path.insert(0, '/home/recprydjango/recipe/recipe/')
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'app.settings')
application = get_wsgi_application()
```
apache conf
```
<VirtualHost *:80>
# The ServerName directive sets the request scheme, hostname and port that
# the server uses to identify itself. This is used when creating
# redirection URLs. In the context of virtual hosts, the ServerName
# specifies what hostname must appear in the request's Host: header to
# match this virtual host. For the default virtual host (this file) this
# value is not decisive as it is used as a last resort host regardless.
# However, you must set it for any further virtual host explicitly.
#ServerName www.example.com
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
#LogLevel info ssl:warn
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
# For most configuration files from conf-available/, which are
# enabled or disabled at a global level, it is possible to
# include a line for only one particular virtual host. For example the
# following line enables the CGI configuration for this host only
# after it has been globally disabled with "a2disconf".
#Include conf-available/serve-cgi-bin.conf
Alias /static /home/recprydjango/recipe/static
<Directory /home/recprydjango/recipe/static>
Require all granted
</Directory>
Alias /media /home/recprydjango/recipe/media
<Directory /home/recprydjango/recipe/media>
Require all granted
</Directory>
<Directory /home/recprydjango/recipe/app>
<Files wsgi.py>
Require all granted
</Files>
</Directory>
WSGIScriptAlias / /home/recprydjango/recipe/app/wsgi.py
WSGIDaemonProcess django_app python-path=/home/recprydjango/recipe python-home=/home/recprydjango/recipe/venv
WSGIProcessGroup django_app
</VirtualHost>
# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
``` | 2020/02/23 | [
"https://Stackoverflow.com/questions/60358982",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10881955/"
] | UPD: Now I'm sure the reason for this behavior is the "AdBlock Plus" Chrome extension (ID: cfhdojbkjhnklbpkdaibdccddilifddb).
I think the refresh started to happen after the extension's update. When I open DevTools in Chrome Incognito mode, AdBlock is disabled and I get no refresh; there's also no refresh on another PC I use with no AdBlock. | I have found that some extensions cause page refreshes, such as "Awesome Color Picker". | 771
54,390,224 | My question is why can I not use a relative path to specify a bash script to run?
I have a ansible file structure following [best practice](https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html#directory-layout).
My directory structure for this role is:
```
.
├── files
│ └── install-watchman.bash
└── tasks
└── main.yml
```
and the main.yml includes this:
```
- name: install Watchman
shell: "{{ role_path }}/files/install-watchman.bash"
- name: copy from files dir to target home dir
copy:
src: files/install-watchman.bash
dest: /home/vagrant/install-watchman.bash
owner: vagrant
group: vagrant
mode: 0744
- name: install Watchman
shell: files/install-watchman.bash
```
I would expect all three commands to work, but in practice the third one fails:
```
TASK [nodejs : install Watchman] ***********************************************
changed: [machine1]
TASK [nodejs : copy from files dir to target home dir] ********
changed: [machine1]
TASK [nodejs : install Watchman] ***********************************************
fatal: [machine1]: FAILED! => {"changed": true, "cmd": "files/install-watchman.bash", "delta": "0:00:00.002997", "end": "2019-01-27 16:01:50.093530", "msg": "non-zero return code", "rc": 127, "start": "2019-01-27 16:01:50.090533", "stderr": "/bin/sh: 1: files/install-watchman.bash: not found", "stderr_lines": ["/bin/sh: 1: files/install-watchman.bash: not found"], "stdout": "", "stdout_lines": []}
to retry, use: --limit @/vagrant/ansible/site.retry
```
(If it helps, this is the version info for ansible:)
```
vagrant@ubuntu-xenial:~$ ansible --version
ansible 2.7.6
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/vagrant/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]
``` | 2019/01/27 | [
"https://Stackoverflow.com/questions/54390224",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10055448/"
] | I put together a test of this to see: <https://github.com/farrellit/ansible-demonstrations/tree/master/shell-cwd>
It has convinced me that the short answer is probably: *Ansible roles' `shell` tasks will by default have the working directory of the playbook that includes that role*.
It basically comes down to a role like this (the rest of that dir is tooling to make it run):
```
- shell: pwd
register: shellout
- debug: var=shellout.stdout
- shell: pwd
args:
chdir: "{{role_path}}"
register: shellout2
- debug: var=shellout2.stdout
```
This has shown:
```
PLAY [localhost] ***********************************************************************************************************************************************************************************************************************
TASK [shelldir : command] **************************************************************************************************************************************************************************************************************
changed: [127.0.0.1]
TASK [shelldir : debug] ****************************************************************************************************************************************************************************************************************
ok: [127.0.0.1] => {
"shellout.stdout": "/code"
}
TASK [shelldir : command] **************************************************************************************************************************************************************************************************************
changed: [127.0.0.1]
TASK [shelldir : debug] ****************************************************************************************************************************************************************************************************************
ok: [127.0.0.1] => {
"shellout2.stdout": "/code/roles/shelldir"
}
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
127.0.0.1 : ok=4 changed=2 unreachable=0 failed=0
```
This shows that the current working directory for roles is not `role_path`. In my case it is the directory of the playbook that invoked the task. It might be something else in the case of an included playbook or tasks file from a different directory (I'll leave that as an exercise for you, if you care). I set that execution to run from `/tmp`, so I don't think the current working directory of the shell that ran `ansible-playbook` matters. | Shell will execute the command on the remote host. You have copied the script to `/home/vagrant/install-watchman.bash` on your remote. Therefore you have to use that location when executing on the remote as well.
```
- name: install Watchman
shell: /home/vagrant/install-watchman.bash
```
a relative path will work as well, if your ansible user is the user "vagrant"
```
- name: install Watchman
shell: install-watchman.bash
```
Side note:
I would recommend to use `command` instead of `shell` whenever possible: [shell vs command Module](https://blog.confirm.ch/ansible-modules-shell-vs-command/) | 772 |
68,759,605 | >
> {"name": "Sara", "grade": "1", "school": "Buckeye", "teacher": "Ms. Black", "sci": {"gr": "A", "perc": "93"}, "math": {"gr": "B+", "perc": "88"}, "eng": {"gr": "A-", "perc": "91"}}
>
>
>
I have the json file above (named test) and I am trying to turn it into a dataframe in Python using pandas. The `pd.read_json(test.csv)` command returns two rows, 'gr' and 'perc', instead of one. Is there a way to get one row with the nested columns gr.sci, gr.math, gr.eng, perc.sci, perc.math, perc.eng? | 2021/08/12 | [
"https://Stackoverflow.com/questions/68759605",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10256905/"
] | Try with `pd.json_normalize()`, as follows:
```
df = pd.json_normalize(test)
```
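If you specifically want the flipped column names from the question (`gr.sci`, `perc.math`, and so on), one option is to swap the two name parts after normalizing; a sketch (the **Result** below shows the default, unrenamed output):
```
# "sci.gr" -> "gr.sci" etc.; plain columns like "name" are left alone
df = df.rename(columns=lambda c: ".".join(reversed(c.split("."))) if "." in c else c)
```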
**Result:**
```
print(df)
name grade school teacher sci.gr sci.perc math.gr math.perc eng.gr eng.perc
0 Sara 1 Buckeye Ms. Black A 93 B+ 88 A- 91
``` | Use `pd.json_normalize` after converting the json file to a Python data structure:
```
import pandas as pd
import json
with open('data.json') as f:
    data = json.load(f)
df = pd.json_normalize(data)
```
```
>>> df
name grade school teacher sci.gr sci.perc math.gr math.perc eng.gr eng.perc
0 Sara 1 Buckeye Ms. Black A 93 B+ 88 A- 91
``` | 773 |
64,609,700 | I have a script that imports another script, like this:
```
from mp_utils import *
login_response = login(...)
r = incomingConfig(...)
```
and mp\_utils.py is like this:
```
import requests
import logging
from requests.exceptions import HTTPError
def login( ... ):
...
def incomingConfig( ... ):
...
```
When running it, `login` works fine, but `incomingConfig` fails with:
```
Message: 'module' object has no attribute 'incomingConfig'
Exception: None
```
No idea why, any ideas?
Funny thing is, if I import it in the interactive Python CLI, it works fine.
Many thanks! | 2020/10/30 | [
"https://Stackoverflow.com/questions/64609700",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2115947/"
] | `from x import *` imports everything so that you don't have to name the module before you call a function. Try removing `mp_utils` from your function calls. | It is importing all the functions correctly. When you import using `from`, you don't have to prefix `mp_utils` to call the functions; you can call them by name. To call them with the `mp_utils` prefix, use `import mp_utils` instead. | 774
5,082,697 | I have created with the "extra" clause a concatenated field out of three text fields in a model - and I expect to be able to do this: q.filter(concatenated\_\_icontains="y") but it gives me an error. What alternatives are there?
```
>>> q = Patient.objects.extra(select={'concatenated': "mrn||' '||first_name||' '||last_name"})
>>> q.filter(concatenated__icontains="y")
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/lib/python2.7/site-packages/django/db/models/query.py", line 561, in filter
return self._filter_or_exclude(False, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/django/db/models/query.py", line 579, in _filter_or_exclude
clone.query.add_q(Q(*args, **kwargs))
File "/usr/lib/python2.7/site-packages/django/db/models/sql/query.py", line 1170, in add_q
can_reuse=used_aliases, force_having=force_having)
File "/usr/lib/python2.7/site-packages/django/db/models/sql/query.py", line 1058, in add_filter
negate=negate, process_extras=process_extras)
File "/usr/lib/python2.7/site-packages/django/db/models/sql/query.py", line 1237, in setup_joins
"Choices are: %s" % (name, ", ".join(names)))
FieldError: Cannot resolve keyword 'concatenated' into field. Choices are: first_name, id, last_name, mrn, specimen
``` | 2011/02/22 | [
"https://Stackoverflow.com/questions/5082697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/443404/"
] | If you need something beyond this,
```
from django.db.models import Q
Patient.objects.filter(Q(first_name__icontains='y') | Q(last_name__icontains='y') | Q(mrn__icontains='y'))
```
you might have to resort to raw SQL.
Of course, you can add in your `extra` either before or after the filter above. | My final solution based on Prasad's answer:
```
from django.db.models import Q
searchterm='y'
Patient.objects.filter(Q(mrn__icontains=searchterm) | Q(first_name__icontains=searchterm) | Q(last_name__icontains=searchterm))
``` | 776 |
54,434,766 | I have to define an instance variable that is accessed in different instance methods, so I am setting it up in the constructor. I understand it is best practice to initialize instance variables in the constructor.
Is it good practice to use an if/else condition in the constructor to define an instance variable? Is there a more Pythonic way to achieve this in standard coding practice?
```
class Test:
def __init__(self, EmpName, Team):
self.EmpName = EmpName
self.Team = Team
if Team == "Dev":
self.Manager = "Bob"
elif Team == "QA":
self.Manager = "Kim"
elif Team == "Admin":
            self.Manager = "Jeff"
``` | 2019/01/30 | [
"https://Stackoverflow.com/questions/54434766",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10966964/"
] | The relationship between teams and managers is very straightforward data; I would not like having it as code. Thus, a lookup dictionary would be my choice.
```
class Test:
TEAM_MANAGERS = {
"Dev": "Bob",
"QA": "Kim",
"Admin": "Jeff",
}
def __init__(self, emp_name, team):
self.emp_name = emp_name
self.team = team
try:
            self.manager = self.TEAM_MANAGERS[team]
except KeyError:
            raise ValueError("Unknown team")
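# Usage sketch (hypothetical values, matching the question's example):
#   Test("John", "Dev").manager   -> "Bob"
#   Test("Jane", "HR")            -> ValueError: Unknown team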
``` | There is nothing wrong with using `if-else` inside the `__init__()` method.
Based upon the condition you want the specific variable to be initialized, this is appropriate. | 777 |
17,297,230 | I am new to python and have tried searching for help prior to posting.
I have a binary file that contains a number of values I need to parse. Each value has a hex header of two bytes and a third byte that gives the size of the data in that record. The following is an example:
```
\x76\x12\x0A\x08\x00\x00\x00\x00\x00\x00\x00\x00
```
The `\x76\x12` is the record marker and `\x0A` is the number of bytes to be read next.
This data always has the two-byte marker and a third byte for size. However, the data to be parsed is variable, and the record marker increments as follows: `\x76\x12`, then `\x77\x12`, and so on until `\x79\x12`, where it starts again.
This is just example data for the use of this posting.
Many Thanks for any help or pointers. | 2013/06/25 | [
"https://Stackoverflow.com/questions/17297230",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2519943/"
] | Is something like this what you want?
```
>>> b = b'\x76\x12\x0A\x08\x00\x00\x00\x00\x00\x00\x00\x00'
>>> from StringIO import StringIO
>>> io = StringIO(b)
>>> io.seek(0)
>>> io.read(2) #read 2 bytes, maybe validate?
'v\x12'
>>> import struct
>>> nbytes = struct.unpack('B',io.read(1))
>>> print nbytes
(10,)
>>> data = io.read(nbytes[0])
>>> data
'\x08\x00\x00\x00\x00\x00\x00\x00\x00'
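>>> # A loop sketch for parsing every record (assumes records are packed
>>> # back-to-back until the end of the buffer; not from the original answer):
>>> io.seek(0)
>>> records = []
>>> while io.tell() < len(b):
...     marker = io.read(2)                       # two-byte record marker
...     size = struct.unpack('B', io.read(1))[0]  # one-byte payload size
...     records.append((marker, io.read(size)))   # read() stops at EOF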
``` | This will treat the data as a raw string (to ignore the '\' escape character) and split it into a list:
```
a = r"\x76\x12\x0A\x08\x00\x00\x00\x00\x00\x00\x00\x00".split('\\')
print a
```
output: ['', 'x76', 'x12', 'x0A', 'x08', 'x00', 'x00', 'x00', 'x00', 'x00', 'x00', 'x00', 'x00']
You can then iterate through the values you are interested in and convert them to decimal if required:
```
for i in range(len(a[4:])): # cutting off records before index 4 here
    print int(str(a[i+4][1:]),16)
``` | 782 |
68,840,058 | I would like to show the data of a hdf5 file in the ImageView() class from pyqtgraph. The bare code of displaying the plot for ImageView() is:
```
from pyqtgraph.Qt import QtCore, QtGui
import pyqtgraph as pg
# Interpret image data as row-major instead of col-major
pg.setConfigOptions(leftButtonPan = False, imageAxisOrder='row-major')
app = QtGui.QApplication([])
## Create window with ImageView widget
win = QtGui.QMainWindow()
win.resize(800,800)
imv = pg.ImageView()
win.setCentralWidget(imv)
win.show()
win.setWindowTitle('pyqtgraph example: ImageView')
if __name__ == '__main__':
import sys
if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):
QtGui.QApplication.instance().exec_()
```
There is, however, also an HDF5 example in the pyqtgraph example set. Unfortunately, I'm not able to get it to work. I made some alterations to the example to make it work for my needs, but I'm getting an error. First, here is the code:
```
import numpy as np
import h5py
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore, QtGui
pg.mkQApp()
plt = pg.plot()
plt.setWindowTitle('pyqtgraph example: HDF5 big data')
plt.enableAutoRange(False, False)
plt.setXRange(0, 500)
class HDF5Plot(pg.ImageItem):
def __init__(self, *args, **kwds):
self.hdf5 = None
self.limit = 10000 # maximum number of samples to be plotted
pg.ImageItem.__init__(self, *args, **kwds)
def setHDF5(self, data):
self.hdf5 = data
self.updateHDF5Plot()
def viewRangeChanged(self):
self.updateHDF5Plot()
def updateHDF5Plot(self):
if self.hdf5 is None:
self.setData([])
return
vb = self.getViewBox()
if vb is None:
return # no ViewBox yet
# Determine what data range must be read from HDF5
xrange = vb.viewRange()[0]
start = max(0, int(xrange[0]) - 1)
stop = min(len(self.hdf5), int(xrange[1] + 2))
# Decide by how much we should downsample
ds = int((stop - start) / self.limit) + 1
if ds == 1:
# Small enough to display with no intervention.
visible = self.hdf5[start:stop]
scale = 1
else:
# Here convert data into a down-sampled array suitable for visualizing.
# Must do this piecewise to limit memory usage.
samples = 1 + ((stop - start) // ds)
visible = np.zeros(samples * 2, dtype=self.hdf5.dtype)
sourcePtr = start
targetPtr = 0
# read data in chunks of ~1M samples
chunkSize = (1000000 // ds) * ds
while sourcePtr < stop - 1:
chunk = self.hdf5[sourcePtr:min(stop, sourcePtr + chunkSize)]
sourcePtr += len(chunk)
# reshape chunk to be integral multiple of ds
chunk = chunk[:(len(chunk) // ds) * ds].reshape(len(chunk) // ds, ds)
# compute max and min
chunkMax = chunk.max(axis=1)
chunkMin = chunk.min(axis=1)
# interleave min and max into plot data to preserve envelope shape
visible[targetPtr:targetPtr + chunk.shape[0] * 2:2] = chunkMin
visible[1 + targetPtr:1 + targetPtr + chunk.shape[0] * 2:2] = chunkMax
targetPtr += chunk.shape[0] * 2
visible = visible[:targetPtr]
scale = ds * 0.5
self.setData(visible) # update the plot
self.setPos(start, 0) # shift to match starting index
self.resetTransform()
self.scale(scale, 1) # scale to match downsampling
f = h5py.File('test.hdf5', 'r')
curve = HDF5Plot()
curve.setHDF5(f['data'])
plt.addItem(curve)
## Start Qt event loop unless running in interactive mode or using pyside.
if __name__ == '__main__':
import sys
if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):
QtGui.QApplication.instance().exec_()
```
And here is the error:
```
Traceback (most recent call last):
File "pyqtg.py", line 206, in <module>
curve.setHDF5(f['data'])
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "/home/anaconda3/envs/img/lib/python3.8/site-packages/h5py-3.3.0-py3.8-linux-x86_64.egg/h5py/_hl/group.py", line 305, in __getitem__
oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5o.pyx", line 190, in h5py.h5o.open
KeyError: "Unable to open object (object 'data' doesn't exist)"
```
The problem is that I don't know what the hdf5 file looks like, so I am unsure how to replace 'data' with the correct term, or whether it is something completely different. Any help is greatly appreciated.
**Edit 1:**
I got the examples from running `python -m pyqtgraph.examples`. Once the GUI pops up, down the list you'll see "HDF5 Big Data"; my code stems from that example. And from the examples, the third one from the top, ImageView, is the code I would like to use to show the HDF5 file.
**Edit 2:**
Here is the result of running the second part of the code from kcw78:
<http://pastie.org/p/3scRyUm1ZFVJNMwTHQHCBv>
**Edit 3:**
So I ran the code above but made a small change with the help from kcw78. I changed:
```
f = h5py.File('test.hdf5', 'r')
curve = HDF5Plot()
curve.setHDF5(f['data'])
plt.addItem(curve)
```
to:
```
with h5py.File('test.hdf5', 'r') as h5f:
curve = HDF5Plot()
curve.setHDF5(h5f['aggea'])
plt.addItem(curve)
```
And got the errors:
```
Traceback (most recent call last):
File "/home/anaconda3/envs/img/lib/python3.8/site-packages/pyqtgraph/graphicsItems/GraphicsObject.py", line 23, in itemChange
self.parentChanged()
File "/home/anaconda3/envs/img/lib/python3.8/site-packages/pyqtgraph/graphicsItems/GraphicsItem.py", line 458, in parentChanged
self._updateView()
File "/home/anaconda3/envs/img/lib/python3.8/site-packages/pyqtgraph/graphicsItems/GraphicsItem.py", line 514, in _updateView
self.viewRangeChanged()
File "pyqtg.py", line 25, in viewRangeChanged
self.updateHDF5Plot()
File "pyqtg.py", line 77, in updateHDF5Plot
self.setData(visible) # update the plot
TypeError: setData(self, int, Any): argument 1 has unexpected type 'numpy.ndarray'
Traceback (most recent call last):
File "/home/anaconda3/envs/img/lib/python3.8/site-packages/pyqtgraph/graphicsItems/GraphicsObject.py", line 23, in itemChange
self.parentChanged()
File "/home/anaconda3/envs/img/lib/python3.8/site-packages/pyqtgraph/graphicsItems/GraphicsItem.py", line 458, in parentChanged
self._updateView()
File "/home/anaconda3/envs/img/lib/python3.8/site-packages/pyqtgraph/graphicsItems/GraphicsItem.py", line 514, in _updateView
self.viewRangeChanged()
File "pyqtg.py", line 25, in viewRangeChanged
self.updateHDF5Plot()
File "pyqtg.py", line 77, in updateHDF5Plot
self.setData(visible) # update the plot
TypeError: setData(self, int, Any): argument 1 has unexpected type 'numpy.ndarray'
Traceback (most recent call last):
File "pyqtg.py", line 25, in viewRangeChanged
self.updateHDF5Plot()
File "pyqtg.py", line 77, in updateHDF5Plot
self.setData(visible) # update the plot
TypeError: setData(self, int, Any): argument 1 has unexpected type 'numpy.ndarray'
```
**Edit 4:**
Here is a photo of the results: <https://imgur.com/a/tVHNdx9>. I get the same empty results from both creating a 2d hdf5 file and using my 2d data file.
```
with h5py.File('mytest.hdf5', 'r') as h5fr, \
h5py.File('test_1d.hdf5', 'w') as h5fw:
arr = h5fr['aggea'][:].reshape(-1,)
h5fw.create_dataset('data', data=arr)
print(h5fw['data'].shape, h5fw['data'].dtype)
```
**Edit 5: The code that runs and plots**
```
import sys, os
import numpy as np
import h5py
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore, QtGui
pg.mkQApp()
plt = pg.plot()
plt.setWindowTitle('pyqtgraph example: HDF5 big data')
plt.enableAutoRange(False, False)
plt.setXRange(0, 500)
class HDF5Plot(pg.PlotCurveItem):
def __init__(self, *args, **kwds):
self.hdf5 = None
self.limit = 10000 # maximum number of samples to be plotted
pg.PlotCurveItem.__init__(self, *args, **kwds)
def setHDF5(self, data):
self.hdf5 = data
self.updateHDF5Plot()
def viewRangeChanged(self):
self.updateHDF5Plot()
def updateHDF5Plot(self):
if self.hdf5 is None:
self.setData([])
return
vb = self.getViewBox()
if vb is None:
return # no ViewBox yet
# Determine what data range must be read from HDF5
xrange = vb.viewRange()[0]
start = max(0, int(xrange[0]) - 1)
stop = min(len(self.hdf5), int(xrange[1] + 2))
# Decide by how much we should downsample
ds = int((stop - start) / self.limit) + 1
if ds == 1:
# Small enough to display with no intervention.
visible = self.hdf5[start:stop]
scale = 1
else:
# Here convert data into a down-sampled array suitable for visualizing.
# Must do this piecewise to limit memory usage.
samples = 1 + ((stop - start) // ds)
visible = np.zeros(samples * 2, dtype=self.hdf5.dtype)
sourcePtr = start
targetPtr = 0
# read data in chunks of ~1M samples
chunkSize = (1000000 // ds) * ds
while sourcePtr < stop - 1:
chunk = self.hdf5[sourcePtr:min(stop, sourcePtr + chunkSize)]
sourcePtr += len(chunk)
# reshape chunk to be integral multiple of ds
chunk = chunk[:(len(chunk) // ds) * ds].reshape(len(chunk) // ds, ds)
# compute max and min
chunkMax = chunk.max(axis=1)
chunkMin = chunk.min(axis=1)
# interleave min and max into plot data to preserve envelope shape
visible[targetPtr:targetPtr + chunk.shape[0] * 2:2] = chunkMin
visible[1 + targetPtr:1 + targetPtr + chunk.shape[0] * 2:2] = chunkMax
targetPtr += chunk.shape[0] * 2
visible = visible[:targetPtr]
scale = ds * 0.5
self.setData(visible) # update the plot
self.setPos(start, 0) # shift to match starting index
self.resetTransform()
self.scale(scale, 1) # scale to match downsampling
with h5py.File('mytest.hdf5', 'r') as h5fr, \
h5py.File('test_1d.hdf5', 'w') as h5fw:
arr = h5fr['aggea'][:].reshape(-1,)
h5fw.create_dataset('data', data=arr)
curve = HDF5Plot()
curve.setHDF5(h5fw['data'])
plt.addItem(curve)
## Start Qt event loop unless running in interactive mode or using pyside.
if __name__ == '__main__':
import sys
if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):
QtGui.QApplication.instance().exec_()
```
**Edit 6:**
What worked in the end:
```
from pyqtgraph.Qt import QtGui, QtCore
import numpy as np
import h5py
import pyqtgraph as pg
import matplotlib.pyplot as plt
app = QtGui.QApplication([])
win = QtGui.QMainWindow()
win.resize(800,800)
imv = pg.ImageView()
win.setCentralWidget(imv)
win.show()
win.setWindowTitle('pyqtgraph example: ImageView')
with h5py.File('test.hdf5', 'r') as h5fr:
data = h5fr.get('aggea')[()] #this gets the values. You can also use hf.get('dataset_name').value as this gives insight what `[()]` is doing, though it's deprecated
imv.setImage(data)
# hf = h5py.File('test.hdf5', 'r')
# n1 = np.array(hf['/pathtodata'][:])
# print(n1.shape)
## Set a custom color map
colors = [
(0, 0, 0),
(45, 5, 61),
(84, 42, 55),
(150, 87, 60),
(208, 171, 141),
(255, 255, 255)
]
cmap = pg.ColorMap(pos=np.linspace(0.0, 1.0, 6), color=colors)
imv.setColorMap(cmap)
## Start Qt event loop unless running in interactive mode.
if __name__ == '__main__':
import sys
if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):
QtGui.QApplication.instance().exec_()
``` | 2021/08/18 | [
"https://Stackoverflow.com/questions/68840058",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8313547/"
] | This should help with the second part.
```r
df %>%
group_by(species, native_region, intro_region) %>%
filter(all(c(1,0) %in% invasive)) %>%
filter(., invasive == 0) %>%
anti_join(df, .)
# A tibble: 11 × 4
species native_region invasive intro_region
<chr> <dbl> <dbl> <dbl>
1 Abies 4 1 8
2 Abies 4 1 3
3 Abies 4 1 4
4 Abies 3 1 5
5 Abrus 2 0 3
6 Abrus 2 0 6
7 Abrus 2 1 7
8 Acacia 4 1 5
9 Acacia 5 0 7
10 Acacia 6 1 7
11 Acacia 8 1 7
``` | Figured out the first part:
```r
df %>% select(species, native_region) -> x
df %>% group_by(species) %>% anti_join(., x, by = c("species", "intro_region" = "native_region"))
# A tibble: 9 x 4
# Groups: species [3]
species native_region invasive intro_region
<chr> <dbl> <dbl> <dbl>
1 Abies 4 1 8
2 Abies 3 1 5
3 Abrus 2 0 3
4 Abrus 2 0 6
5 Abrus 2 1 7
6 Acacia 5 0 7
7 Acacia 6 1 7
8 Acacia 8 0 7
9 Acacia 8 1 7
```
Boom.
I'll still take suggestions on the second part. Thanks to everyone. | 783 |
17,786,203 | Consider the array `a= [1, 2, 3, 1, 2, 3]`. Now suppose I want to remove all the 2s in this array in python. So I apply `a.remove(2)`. However the result which comes out is `[1, 3, 1, 2, 3]`, i.e the only first 2 is removed. How can I remove all the 2s which appear in an array? In general, given an array and an element p, how can I remove all the elements of the array which are equal to p?
Edit:- I think I should mention this, this question has been inspired from a Brilliant computer science problem. | 2013/07/22 | [
"https://Stackoverflow.com/questions/17786203",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2517072/"
] | Use a list comprehension to build a replacement list, where all elements are *not* equal to `p`:
```
a = [i for i in a if i != p]
```
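For example, with the data from the question:
```
>>> a = [1, 2, 3, 1, 2, 3]
>>> [i for i in a if i != 2]
[1, 3, 1, 3]
```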
Note that in Python, the datatype is called a `list`, not an array. | You can use [`filter()`](http://docs.python.org/2/library/functions.html#filter).
```
>>> a= [1, 2, 3, 1, 2, 3]
>>> filter(lambda x: x != 2, a)
[1, 3, 1, 3]
```
In a function:
```
>>> def removeAll(inList, num):
...     return filter(lambda elem: elem != num, inList)
>>> removeAll(a, 2)
[1, 3, 1, 3]
``` | 784 |
9,845,354 | I'm having some problems with a piece of python work. I have to write a piece of code that is run through CMD. I need it to then open a file the user states and count the number of each alphabetical character it contains.
So far I have this, which I can run through CMD and state a file to open. I've messed around with regular expressions but still can't figure out how to count individual characters. Any ideas? Sorry if I explained this badly.
```
import sys
import re
filename = raw_input()
count = 0
datafile=open(filename, 'r')
``` | 2012/03/23 | [
"https://Stackoverflow.com/questions/9845354",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1289022/"
] | The Counter type is useful for counting items. It was added in python 2.7:
```
import collections
counts = collections.Counter()
for line in datafile:
# remove the EOL and iterate over each character
#if you desire the counts to be case insensitive, replace line.rstrip() with line.rstrip().lower()
for c in line.rstrip():
# Missing items default to 0, so there is no special code for new characters
counts[c] += 1
```
To see the results:
```
results = [(key, value) for key, value in counts.items() if key.isalpha()]
print results
``` | If you want to use regular expressions, you can do as follows:
```
pattern = re.compile('[^a-zA-Z]+') # pattern for everything but letters
only_letters = pattern.sub('', text) # delete everything else (text is the file contents)
count = len(only_letters) # total number of letters
```
For counting the number of distinct characters, use Counter as already advised. | 785
57,045,356 | This is a problem given in ***HackWithInfy2019*** in hackerrank.
I have been stuck on this problem since yesterday.
Question:
---------
You are given an array of N integers. You have to find a pair **(i, j)**
which **maximizes** the value of **GCD(`a[i],a[j]`) + (`j - i`)**,
with 1 <= i < j <= n.
Constraints are:
----------------
2<= **N** <= 10^5
1<= **a[i]** <= 10^5
I've tried this problem using python | 2019/07/15 | [
"https://Stackoverflow.com/questions/57045356",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11669081/"
] | Here is an approach that could work:
```
result = 0
min_i = array[1 ... 100000] initialized to 0
for j in [1, 2, ..., n]
for d in divisors of a[j]
let i = min_i[d]
if i > 0
result = max(result, d + j - i)
else
min_i[d] = j
```
Here, `min_i[d]` for each `d` is the smallest `i` such that `a[i] % d == 0`. We use this in the inner loop to, for each `d`, find the first element in the array whose GCD with `a[j]` is at least `d`. When `j` is one of the possible values for which `gcd(a[i], a[j]) + j - i` is maximal, when the inner loop runs with `d` equal to the required GCD, `result` will be set to the correct answer.
The maximum possible number of divisors for a natural number less than or equal to 100,000 is 128 (see [here](https://codeforces.com/blog/entry/14463)). Therefore the inner loop runs at most 128 \* 100,000 = 12.8 million times. I imagine this could pass with some optimizations (although maybe not in Python).
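A direct Python translation of the pseudocode might look like this (a sketch; it uses simple trial division to enumerate divisors rather than the sieve suggested below):
```
def solve(a):
    min_i = {}  # min_i[d]: smallest 1-based index j such that d divides a[j]
    result = 0
    for j in range(1, len(a) + 1):
        x = a[j - 1]
        d = 1
        while d * d <= x:
            if x % d == 0:
                for div in {d, x // d}:  # both members of the divisor pair
                    if div in min_i:
                        result = max(result, div + j - min_i[div])
                    else:
                        min_i[div] = j
            d += 1
    return result

print(solve([2, 4, 3, 6]))  # -> 5, i.e. gcd(2, 6) + (4 - 1)
```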
(To iterate over divisors, use a sieve to precompute the smallest nontrivial divisor for each integer from 1 to 100000.) | Here is one way of doing it.
Create a mutable class `MinMax` for storing the min. and max. index.
Create a `Map<Integer, MinMax>` for storing the min. and max. index for a particular divisor.
For each value in `a`, find all divisors for `a[i]`, and update the map accordingly, such that the `MinMax` object stores the min. and max. `i` of the number with that particular divisor.
When done, iterate the map and find the entry with largest result of calculating `key + value.max - value.min`.
The min. and max. values of that entry are your answer. | 791
44,794,782 | I am in the process of downloading data from firebase and exporting it to JSON. After this I am trying to upload it into BigQuery, but I need to remove the newlines for BigQuery to accept it.
```
{ "ConnectionTime": 730669.644775033,
"objectId": "eHFvTUNqTR",
"CustomName": "Relay Controller",
"FirmwareRevision": "FW V1.96",
"DeviceID": "F1E4746E-DCEC-495B-AC75-1DFD66527561",
"PeripheralType": 9,
"updatedAt": "2016-12-13T15:50:41.626Z",
"Model": "DF Bluno",
"HardwareRevision": "HW V1.7",
"Serial": "0123456789",
"createdAt": "2016-12-13T15:50:41.626Z",
"Manufacturer": "DFRobot"}
{
"ConnectionTime": 702937.7616419792,
"objectId": "uYuT3zgyez",
"CustomName": "Relay Controller",
"FirmwareRevision": "FW V1.96",
"DeviceID": "F1E4746E-DCEC-495B-AC75-1DFD66527561",
"PeripheralType": 9,
"updatedAt": "2016-12-13T08:08:29.829Z",
"Model": "DF Bluno",
"HardwareRevision": "HW V1.7",
"Serial": "0123456789",
"createdAt": "2016-12-13T08:08:29.829Z",
"Manufacturer": "DFRobot"}
```
This is how I need it, but I cannot figure out how to do this besides doing it manually.
```
{"ConnectionTime": 730669.644775033,"objectId": "eHFvTUNqTR","CustomName": "Relay Controller","FirmwareRevision": "FW V1.96","DeviceID": "F1E4746E-DCEC-495B-AC75-1DFD66527561","PeripheralType": 9,"updatedAt": "2016-12-13T15:50:41.626Z","Model": "DF Bluno","HardwareRevision": "HW V1.7","Serial": "0123456789","createdAt": "2016-12-13T15:50:41.626Z","Manufacturer": "DFRobot"}
{"ConnectionTime": 702937.7616419792, "objectId": "uYuT3zgyez", "CustomName": "Relay Controller", "FirmwareRevision": "FW V1.96", "DeviceID": "F1E4746E-DCEC-495B-AC75-1DFD66527561", "PeripheralType": 9, "updatedAt": "2016-12-13T08:08:29.829Z", "Model": "DF Bluno", "HardwareRevision": "HW V1.7", "Serial": "0123456789", "createdAt": "2016-12-13T08:08:29.829Z", "Manufacturer": "DFRobot"}
```
I am using Python to load the json, read it, and then write a new one, but I cannot figure out the right code. Thank you!
Here is the outline of my Python code:
```
import json
with open('nospacetest.json', 'r') as f:
data_json=json.load(f)
#b= the file after code for no line breaks is added
with open('testnoline.json', 'w') as outfile:
    json.dump(b, outfile)
``` | 2017/06/28 | [
"https://Stackoverflow.com/questions/44794782",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8192249/"
] | Reading between the lines, I think the input format might be a single JSON array, and the desired output is newline-separated JSON representations of the elements of that array. If so, this is probably all that's needed:
```
with open('testnoline.json', 'w') as outfile:
for obj in data_json:
outfile.write(json.dumps(obj) + "\n")
``` | You only need to make sure that `indent=None` when you [`dump`](https://docs.python.org/2/library/json.html#basic-usage) your data to JSON:
```
with open('testnoline.json', 'w') as outfile:
json.dump(data_json, outfile, indent=None)
```
Quoting from the doc:
>
> If `indent` is a non-negative integer, then JSON array elements and object members will be pretty-printed with that indent level. An indent level of 0, or negative, will only insert newlines. `None` (the default) selects the most compact representation.
>
>
> | 792 |
42,281,484 | I am attempting to measure the period of time from when a user submits a PHP form to when they submit it again. The form's action is the same page, so effectively it's just a refresh. Moreover, the user may input the same data again. I need it to begin counting before the page refreshes, as the result must be as accurate as possible. I have already tried a number of methods, none of which resulted in any success. I have simplified the following code:
**HTML Form**
```
<form method="GET" action="index.php">
<input id="motion_image" type="image" value="1" name="x" src="img/btn1.png">
<input id="motion_image" type="image" value="2" name="x" src="img/btn2.png">
</form>
```
Ultimately, I need to have a PHP or JavaScript variable of how long it took a user to press either one of these two buttons to when they again press either one of them. It is important that the counter begins before the refresh as the variable needs to be as accurate as possible. Furthermore, it needs to be responsive so that after say 5 seconds it triggers an event (e.g. a JavaScript alert). I did not feel it was necessary to include my previous attempts as they were all unsuccessful and I believe there is probably a better way. I have full access to the server the site is being hosted on so running a python sub script and exchanging variables using JSON or any other similar solutions are entirely possible.
Apologies for any mistakes and my general lack of stack overflow skills :)
Thanks | 2017/02/16 | [
"https://Stackoverflow.com/questions/42281484",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You could try a simple extension. Here's an example:
```
extension UIImageView {
func render(with radius: CGFloat) {
// add the shadow to the base view
self.backgroundColor = UIColor.clear
self.layer.shadowColor = UIColor.black.cgColor
self.layer.shadowOffset = CGSize(width: 0, height: 10)
self.layer.shadowOpacity = 0.3
self.layer.shadowRadius = 10
// add a border view for corners
let borderView = UIView()
borderView.frame = self.bounds
borderView.layer.cornerRadius = radius
borderView.layer.masksToBounds = true
self.addSubview(borderView)
// add the image for clipping
let subImageView = UIImageView()
subImageView.image = self.image
subImageView.frame = borderView.bounds
borderView.addSubview(subImageView)
//for performance
self.layer.shadowPath = UIBezierPath(roundedRect: self.bounds, cornerRadius: 10).cgPath
self.layer.shouldRasterize = true
self.layer.rasterizationScale = UIScreen.main.scale
}
}
```
Usage:
```
myImageView.image = image
myImageView.render(with: 10)
```
Obviously you can add as many parameters as you want to the extension, set defaults, break it into separate methods, etc.
Result:
[![enter image description here](https://i.stack.imgur.com/LNoqh.png)](https://i.stack.imgur.com/LNoqh.png) | You can just add the image in and give it a few attributes to make it round.
When you have the UIImageView selected, click on the attributes tab, click on the '+', and type in
```
layer.cornerRadius
```
And change it to a number instead of a string. All numbers 1-50 work. If you want a perfect circle, then type in 50. | 793
37,083,591 | I've been creating a studying program for learning Japanese and tried condensing and randomizing it, but now it doesn't do the input. I have analyzed it multiple times and can't find any reason. Here is what I have for it so far; any suggestions would be appreciated.
```
import sys
import random
start = input("Are you ready to practice Japanese Lesson 1? ")
if start.lower() =="yes":
print("Ok Let's Begin")
questiontimer = 0
while questiontimer<10:
questiontimer = (int(questiontimer) + 1)
WordList = ["konnichiwa"]
rand_word = random.choice(WordList)
if rand_word == "konnichiwa":
        answer = input("Question " + str(questiontimer) + ": Say hello in Japanese. ")
if rand_word == answer.lower():
print("Correct!")
        elif rand_word != answer.lower():
print("Incorrect, the answer is Konnichiwa")
```
This is as condensed as I could get it to reproduce the problem. After
```
print("Ok Let's Begin")
```
the program is supposed to pick a random string from the list and then ask for input based on which word it is. Right now it has only one string in the list, but it still does not print the input prompt or allow input for the answer. | 2016/05/07 | [
"https://Stackoverflow.com/questions/37083591",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6302586/"
] | You can encrypt your parameters string and then send it as a message
>
> Encrypted URL form:
>
>
>
```
myAppName://encrypted_query
```
Now when you get a call in your app, you should fetch the `encrypted_data` out of the URL and decrypt it before actually doing anything.
>
> Decrypted URL form:
>
>
>
```
myAppName://someQuery?blablabla=123
```
In my belief, this is the best and easiest way to get this done. For encryption/decryption best practice check this: [AES Encryption for an NSString on the iPhone](https://stackoverflow.com/questions/1400246/aes-encryption-for-an-nsstring-on-the-iphone) and [this](https://github.com/dev5tec/FBEncryptor).
As long as you're not concerned about security, you can always use a reduced-size encryption string to make the URL smaller. That option is given in the GitHub library. | >
> So the best way to go about this is to insert the URL scheme `myAppName://someQuery?blablabla=123` and that should in turn fire the `openURL` command and open that specific view.
>
>
>
I'm assuming you're using a web view and that's why you want to handle things this way. But are you aware of the `WKScriptMessageHandler` protocol in the new `WKWebView` class?
If you embed `onclick='window.webkit.messageHandlers.yourHandlerName.postMessage(yourUserData)'` on the web side, and set up one or more script message handlers through the `WKUserContentController` of your `WKWebView`, their `-userContentController:didReceiveScriptMessage:` methods will be called with `yourUserData` as the message body. | 725
33,362,977 | I've got a program which needs to send a byte array over a serial connection, and I have no clue how to do that in Python.
I found a C/C++/Java function which creates the needed byte array:
```
byte[] floatArrayToByteArray(float[] input)
{
int len = 4*input.length;
int index=0;
byte[] b = new byte[4];
byte[] out = new byte[len];
ByteBuffer buf = ByteBuffer.wrap(b);
for(int i=0;i<input.length;i++)
{
buf.position(0);
buf.putFloat(input[i]);
for(int j=0;j<4;j++) out[j+i*4]=b[3-j];
}
return out;
}
```
But how can I translate that to Python code?
Edit: the serial data is sent to a device whose firmware I cannot change.
thanks | 2015/10/27 | [
"https://Stackoverflow.com/questions/33362977",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2206668/"
] | Put your data in an array (here it is [0, 1, 2]) and send it with serial.write(). I assume you've properly opened the serial port.
```
>> import array
>> tmp = array.array('B', [0x00, 0x01, 0x02]).tostring()
>> ser.write(tmp.encode())
```
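Since the question is specifically about a float array, a closer translation of the Java helper might use `struct` (a sketch; `'<'` gives little-endian floats, matching the byte-reversal loop in the Java code):
```
import struct

def float_array_to_bytes(values):
    # pack each float as 4 little-endian bytes, like the Java function
    return struct.pack('<%df' % len(values), *values)

ser.write(float_array_to_bytes([1.0, 2.5, -3.25]))
```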
Answered using: [Binary data with pyserial(python serial port)](https://stackoverflow.com/questions/472977/binary-data-with-pyserialpython-serial-port)
and this: [pySerial write() won't take my string](https://stackoverflow.com/questions/22275079/pyserial-write-wont-take-my-string) | It depends on whether you are sending signed or unsigned values, among other parameters. There is a bunch of documentation on this. This is an example I have used in the past.
```
x1= 0x04
x2 = 0x03
x3 = 0x02
x4 = x1+ x2+x3
input_array = [x1, x2, x3, x4]
write_bytes = struct.pack('<' + 'B' * len(input_array), *input_array)
ser.write(write_bytes)
```
To understand why I used 'B' and '<' you have to refer to the struct documentation.
<https://docs.python.org/2/library/struct.html> | 797 |
35,877,007 | I need a cron job to work on a file named like this:
```
20160307_20160308_xxx_yyy.csv
(yesterday_today_xxx_yyy.csv)
```
And my cron job looks like this:
```
53 11 * * * /path/to/python /path/to/python/script /path/to/file/$(date -d "yesterday" +"\%Y\%m\%d")_$(date +"\%Y\%m\%d")_xxx_yyy.csv >> /path/to/logfile/cron.log 2>&1
```
Today's date is getting calculated properly but I am unable to get yesterday's date working. The error is:
```
IOError: [Errno 2] No such file or directory: 'tmp/_20160308_xxx_yyy.csv'
```
Please help! | 2016/03/08 | [
"https://Stackoverflow.com/questions/35877007",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2351197/"
] | I found the answer to my own question.
I needed to use this to get yesterday's date:
```
53 11 * * * /path/to/python /path/to/python/script /path/to/file/$(date -v-1d +"\%Y\%m\%d")_$(date +"\%Y\%m\%d")_xxx_yyy.csv >> /path/to/logfile/cron.log 2>&1
```
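(Note that `date -v-1d` is the BSD/macOS form; on GNU systems the equivalent is `date -d yesterday`, as in the question. Alternatively, since the script being run is Python anyway, the date pair could be computed inside the script itself; a sketch:)
```
from datetime import date, timedelta

today = date.today()
yesterday = today - timedelta(days=1)
filename = '/path/to/file/{}_{}_xxx_yyy.csv'.format(
    yesterday.strftime('%Y%m%d'), today.strftime('%Y%m%d'))
```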
Hope it helps somebody! | This version worked for me. Maybe it can be helpful for someone:
```
53 11 * * * /path/to/python /path/to/python/script /path/to/file/$(date --date '-1 day' +"\%Y\%m\%d")_$(date +"\%Y\%m\%d")_xxx_yyy.csv >> /path/to/logfile/cron.log 2>&1
``` | 798 |
50,305,112 | I am trying to install pandas on my company computer.
I tried to do
```
pip install pandas
```
but the operation retries and then times out.
Then I downloaded the package:
pandas-0.22.0-cp27-cp27m-win\_amd64.whl
and install:
```
pip install pandas-0.22.0-cp27-cp27m-win_amd64
```
But I get the following error:
>
>
> ```
> Retrying (Retry(total=4, connect=None, read=None, redirect=None,
> status=None)) after connection broken by
> 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection
> object at 0x0000000003F16320>, 'Connection to pypi.python.org timed
> out. (connect timeout=15)')': /simple/pytz/
> Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by
> 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection
> object at 0x0000000003F16C50>, 'Connection to pypi.python.org timed
> out. (connect timeout=15)')': /simple/pytz/
> Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by
> 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection
> object at 0x0000000003F16C18>, 'Connection to pypi.python.org timed
> out. (connect timeout=15)')': /simple/pytz/
> Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by
> 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection
> object at 0x0000000003F16780>, 'Connection to pypi.python.org timed
> out. (connect timeout=15)')': /simple/pytz/
> Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by
> 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection
> object at 0x0000000003F16898>, 'Connection to pypi.python.org timed
> out. (connect timeout=15)')': /simple/pytz/
> Could not find a version that satisfies the requirement pytz>=2011k (from pandas==0.22.0) (from versions: )
> No matching distribution found for pytz>=2011k (from pandas==0.22.0)
>
> ```
>
>
I did the same with package: `pandas-0.22.0-cp27-cp27m-win_amd64.whl`
I also tried to use proxies:
```
pip --proxy=IND\namit.kewat:xl123456@192.168.180.150:8880 install numpy
```
But I am unable to get pandas.
When I tried to access the site <https://pypi.org/project/pandas/#files>, I could access it without any problem in the browser. | 2018/05/12 | [
"https://Stackoverflow.com/questions/50305112",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4570833/"
] | This works for me:
```
pip --default-timeout=1000 install pandas
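# If the machine cannot reach PyPI at all, another option (an assumption,
# not part of the original answer) is to download the wheels elsewhere,
# copy them over, and point pip at the local folder:
pip install --no-index --find-links=C:\wheels pandas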
``` | In my case, my network was configured to use IPV6 by default, so I changed it to work with IPV4 only.
You can do that in the Network connections section in the control panel:
`'Control Panel\All Control Panel Items\Network Connections'`
[![enter image description here](https://i.stack.imgur.com/agR8k.png)](https://i.stack.imgur.com/agR8k.png)
Then disable the IPV6 option:
[![enter image description here](https://i.stack.imgur.com/CN9fw.png)](https://i.stack.imgur.com/CN9fw.png) | 799 |
37,015,123 | I have a user defined dictionary (sub-classing python's built-in dict object), which does not allow modifying the dict directly:
```
class customDict(dict):
"""
This dict does not allow the direct modification of
its entries(e.g., d['a'] = 5 or del d['a'])
"""
def __init__(self, *args, **kwargs):
self.update(*args, **kwargs)
def __setitem__(self,key,value):
raise Exception('You cannot directly modify this dictionary. Use set_[property_name] method instead')
def __delitem__(self,key):
raise Exception('You cannot directly modify this dictionary. Use set_[property_name] method instead')
```
My problem is that I am not able to deep copy this dictionary using copy.deepcopy. Here's an example:
```
d1 = customDict({'a':1,'b':2,'c':3})
print d1
d2 = deepcopy(d1)
print d2
```
where it throws the exception I've defined myself for setitem:
```
Exception: You cannot directly modify this dictionary. Use set_[property_name] method instead
```
I tried overriding the `__deepcopy__` method as follows, as suggested [here](https://stackoverflow.com/questions/1500718/what-is-the-right-way-to-override-the-copy-deepcopy-operations-on-an-object-in-p):
```
def __deepcopy__(self, memo):
cls = self.__class__
result = cls.__new__(cls)
memo[id(self)] = result
for k, v in self.__dict__.items():
setattr(result, k, deepcopy(v, memo))
return result
```
This doesn't throw any errors but it returns an empty dictionary:
```
d1 = customDict({'a':1,'b':2,'c':3})
print d1
d2 = deepcopy(d1)
print d2
{'a': 1, 'c': 3, 'b': 2}
{}
```
Any ideas how to fix this? | 2016/05/03 | [
"https://Stackoverflow.com/questions/37015123",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3076813/"
] | Your `deepcopy` implementation does not work because the values of a `dict` are not stored in `__dict__`; `dict` is a special class. You can make it work by calling `__init__` with a deepcopy of the dict.
```
def __deepcopy__(self, memo):
def _deepcopy_dict(x, memo):
y = {}
memo[id(x)] = y
for key, value in x.iteritems():
y[deepcopy(key, memo)] = deepcopy(value, memo)
return y
cls = self.__class__
result = cls.__new__(cls)
result.__init__(_deepcopy_dict(self, memo))
memo[id(self)] = result
for k, v in self.__dict__.items():
setattr(result, k, deepcopy(v, memo))
return result
```
This program
```
d1 = customDict({'a': 2,'b': [3, 4]})
d2 = deepcopy(d1)
d2['b'].append(5)
print d1
print d2
```
Outputs
```
{'a': 2, 'b': [3, 4]}
{'a': 2, 'b': [3, 4, 5]}
``` | Something like this should work without having to change deepcopy.
```
x2 = customList(copy.deepcopy(list(x1)))
```
This will cast `x1` to a `list`, deepcopy it, then make it a `customList` before assigning it to `x2`. | 804
66,469,499 | I made a memory game in python where players take turns picking two tiles in a grid to see if the revealed letters match.
I used two lists for this, one to store the letters e.g. `letters = ['A', 'A', 'B', 'B']` and the other to record the revealed letters that match so far in the game e.g. `correctly_revealed = ['A', 'A', ' ', ' ']`, and then use an `if letters == correctly_revealed` condition to end the game. The letters only get revealed if both letters in the chosen tiles match.
The letters do not always come in pairs however, meaning that the remaining unrevealed letters may all be different, e.g. `letters = ['B', 'B', 'C', 'D']` and `correctly_revealed = ['B', 'B', ' ', ' ']`. So I'm not sure how to set an `if` condition to end the game if it comes to that point. | 2021/03/04 | [
"https://Stackoverflow.com/questions/66469499",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14026994/"
] | This is indeed Red, Green, Blue, and Alpha, mapped to the 0.0 to 1.0 range, but with an additional transformation as well: These values have been converted from the sRGB colorspace to linear using the [sRGB transfer function](https://en.wikipedia.org/wiki/SRGB). (The back story here is, the [baseColorTexture](https://github.com/KhronosGroup/glTF/blob/master/specification/2.0/README.md#pbrmetallicroughnessbasecolortexture) is supposed to be stored in the sRGB colorspace and subject to hardware sRGB decoding, but the [baseColorFactor](https://github.com/KhronosGroup/glTF/blob/master/specification/2.0/README.md#pbrmetallicroughnessbasecolorfactor) has no hardware decoder and therefore is specified as linear values directly.)
The simple version of this, if you have a value between 0 and 255, is to divide by 255.0 and raise that to the power 2.2. This is an approximation, but works well. So for example if your Red value was 200, you could run the following formula, shown here as JavaScript but could be adapted to any language:
```
Math.pow(200 / 255, 2.2)
```
This would give a linear red value of about `0.58597`.
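For reference, the exact (non-approximate) sRGB-to-linear transfer function is piecewise; a small Python sketch:
```
def srgb_to_linear(c8):
    """Convert one 8-bit sRGB channel (0-255) to a linear 0.0-1.0 value."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

print(round(srgb_to_linear(200), 5))  # -> 0.57759 (vs ~0.58597 with the 2.2 approximation)
```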
Note that the alpha values are not subject to the sRGB transfer function, so for them you simply divide by 255 and stop there.
Some packages will do this conversion automatically. For example, in Blender if you click on the Base Color color picker, you'll see it has a "Hex" tab that shows a typical CSS-style color, and an "RGB" tab that has the numeric linear values.
[![Blender color picker](https://i.stack.imgur.com/ClgrJ.png)](https://i.stack.imgur.com/ClgrJ.png)
This can be used to quickly convert typical CSS colors to linear space.
The VSCode [glTF Tools](https://github.com/AnalyticalGraphicsInc/gltf-vscode) (for which I'm a contributor) can also [show glTF color factors](https://twitter.com/emackey/status/1353792898370830340) as CSS values (going the other way). | It is RGBA format, but with numbers between 0 and 1. If you want to insert a color in this format:
* RGB (255, 255, 255) [=white] divide all values by `255` and use `1` (=fully opaque) for the last value
* RGBA (255, 0, 0, 255) [=fully opaque red] divide all components by `255`
Documentation can be found [here](http://here).
Actually the only difference is that you can express more color nuances, because you have more than `255` possible values per channel. | 805
64,399,807 | I am learning Python web automation using Selenium, but when I try to add an input for find_element_by_name it is not working.
```
from selenium import webdriver
PATH = 'C:\Program Files (x86)\chromedriver.exe'
driver = webdriver.Chrome(PATH)
driver.get('https://kahoot.it')
codeInput = driver.find_element_by_name('gadmeId')
codeInput = 202206
```
I have downloaded the chromedriver but still it is not working. | 2020/10/17 | [
"https://Stackoverflow.com/questions/64399807",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14466617/"
] | First make sure that you spelled it "gameId" and not "gadmeId"
Also import send keys:
```
from selenium.webdriver.common.keys import Keys
```
Then you can send the gameId
```
codeInput = driver.find_element_by_name('gameId')
codeInput.send_keys('202206')
``` | To send a value to the input tag:
```
codeInput.send_keys('202206')
```
Also
```
driver.find_element_by_name('gameId')
```
is supposed to be gameId. I would also use a wait after the driver.get() for page loading. | 806
61,122,276 | So I've been following Google's official tensorflow guide and trying to build a simple neural network using Keras. But when it comes to training the model, it does not use the entire dataset (with 60000 entries) and instead uses only 1875 entries for training. Any possible fix?
```py
import tensorflow as tf
from tensorflow import keras
import numpy as np
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
train_images = train_images / 255.0
test_images = test_images / 255.0
class_names = ['T-shirt', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle Boot']
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10)
])
model.compile(optimizer='adam',
loss= tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=10)
```
Output:
```
Epoch 1/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3183 - accuracy: 0.8866
Epoch 2/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3169 - accuracy: 0.8873
Epoch 3/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3144 - accuracy: 0.8885
Epoch 4/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3130 - accuracy: 0.8885
Epoch 5/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3110 - accuracy: 0.8883
Epoch 6/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3090 - accuracy: 0.8888
Epoch 7/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3073 - accuracy: 0.8895
Epoch 8/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3057 - accuracy: 0.8900
Epoch 9/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3040 - accuracy: 0.8905
Epoch 10/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3025 - accuracy: 0.8915
<tensorflow.python.keras.callbacks.History at 0x7fbe0e5aebe0>
```
Here's the original google colab notebook where I've been working on this: <https://colab.research.google.com/drive/1NdtzXHEpiNnelcMaJeEm6zmp34JMcN38> | 2020/04/09 | [
"https://Stackoverflow.com/questions/61122276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5935310/"
] | The number `1875` shown during fitting the model is not the training samples; it is the number of *batches*.
`model.fit` includes an optional argument `batch_size`, which, according to the [documentation](https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit):
>
> If unspecified, `batch_size` will default to 32.
>
>
>
So, what happens here is - you fit with the default batch size of 32 (since you have not specified anything different), so the total number of batches for your data is
```
60000/32 = 1875
``` | It does not train on 1875 samples.
```
Epoch 1/10
1875/1875 [===
```
1875 here is the number of steps, not samples. In the `fit` method there is an argument, `batch_size`. Its default value is `32`. So `1875*32=60000`. The implementation is correct.
If you train it with `batch_size=16`, you will see the number of steps will be `3750` instead of `1875`, since `60000/16=3750`.
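For illustration, a minimal sketch of that change (only `batch_size` differs from the original call):
```
# 60000 samples / batch_size 16 = 3750 steps per epoch
model.fit(train_images, train_labels, epochs=10, batch_size=16)
```
| 808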
24,070,856 | I have a problem with QCheckBox.
I am trying to connect a boolean variable to a QCheckBox so that **when I change the boolean variable, the QCheckBox will be automatically checked or unchecked.**
My question is similar to the question below, but in the opposite direction.
[question: Python3 PyQt4 Creating a simple QCheckBox and changing a Boolean variable](https://stackoverflow.com/questions/12736825/python3-pyqt4-creating-a-simple-qcheckbox-and-changing-a-boolean-variable)
I just copied one solution from that question here.
```
import sys
from PyQt4.QtGui import *
from PyQt4.QtCore import *
class SelectionWindow(QMainWindow):
def __init__(self, parent=None):
super().__init__(parent)
self.ILCheck = False
ILCheckbox = QCheckBox(self)
ILCheckbox.setCheckState(Qt.Unchecked)
ILCheckbox.stateChanged.connect(self.ILCheckbox_changed)
MainLayout = QGridLayout()
MainLayout.addWidget(ILCheckbox, 0, 0, 1, 1)
self.setLayout(MainLayout)
def ILCheckbox_changed(self, state):
self.ILCheck = (state == Qt.Checked)
print(self.ILCheck)
if __name__ == '__main__':
app = QApplication(sys.argv)
window = SelectionWindow()
window.show()
window.ILCheck = True
sys.exit(app.exec_())
```
In this case, once I set ILCheck to True, QCheckBox will be checked.
Any help would be appreciated!!!
Thanks!!!!
---
Update:
I am using MVC in my project; the code above is just an example showing what I need. The bool value `ILCheck` will be used in other places, and I don't want to call `ILCheckBox` in my model.
I expect that if I modify the value of `ILCheck`, `ILCheckBox` will react correctly.
---
Update:
Thanks for all your replies and help. All your solutions are great!!! What I need is more like a **Model-View** solution, so that I can separate the modeling part from the GUI part. When I want to update something, I just need to update the model and don't need to pay attention to what the GUI looks like. I can't set this bool property in the View class, so I can't use this solution.
I am not sure MVC is suitable in PyQt. I have a close solution below, but with a problem.
```
from PyQt4 import QtGui, QtCore, uic
import sys
class CellList(QtGui.QStandardItemModel):
def __init__(self, cells = [], parent = None):
QtGui.QStandardItemModel.__init__(self, parent)
self.__cells = cells
self.add(cells)
def headerData(self, section, orientation, role):
if role == QtCore.Qt.DisplayRole:
return QtCore.QString("Cell id List")
def flags(self, index):
return QtCore.Qt.ItemIsUserCheckable | QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable
def add(self, cells):
for i in xrange(0, len(cells)):
item = QtGui.QStandardItem('Cell %s' % cells[i][0])
if (cells[i][1]):
item.setCheckState(QtCore.Qt.Checked)
else:
item.setCheckState(QtCore.Qt.Unchecked)
item.setCheckable(True)
self.appendRow(item)
def update(self, cells = None):
        # TODO: Make this work without clearing all old cells first
self.clear()
if cells is None:
cells = self.__cells
else:
print "hi"
self.__cells = cells
print cells
self.add(cells)
if __name__ == '__main__':
app = QtGui.QApplication(sys.argv)
listView = QtGui.QListView()
listView.show()
data = [[85, True], (105, True), (123, False)]
model = CellList(data)
listView.setModel(model)
data[0][1] = False
model.update(data)
sys.exit(app.exec_())
```
There is a problem with this solution that I can't solve: I think only a view can set a model, and I am not sure if I can set a model on a single `QCheckBox`.
"https://Stackoverflow.com/questions/24070856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2727296/"
] | [`property`](https://docs.python.org/2/library/functions.html#property) is the way to define a variable that does additional work upon assigning/accessing. Below is the code modified for that purpose. It changes `ILCheck` to a property such that it'll also update the checkbox upon assigning. Proper error checking for `.setter` is left out but most probably needed.
```
import sys
from PyQt4.QtGui import *
from PyQt4.QtCore import *
class SelectionWindow(QWidget):
def __init__(self, parent=None):
super(SelectionWindow, self).__init__(parent)
self._ILCheck = False
self.ILCheckbox = QCheckBox(self)
self.ILCheckbox.setCheckState(Qt.Unchecked)
self.ILCheckbox.stateChanged.connect(self.ILCheckbox_changed)
MainLayout = QGridLayout()
MainLayout.addWidget(self.ILCheckbox, 0, 0, 1, 1)
self.setLayout(MainLayout)
def ILCheckbox_changed(self, state):
self._ILCheck = (state == Qt.Checked)
print(self.ILCheck)
@property
def ILCheck(self):
return self._ILCheck
@ILCheck.setter
def ILCheck(self, value):
self._ILCheck = value
self.ILCheckbox.setChecked(value)
if __name__ == '__main__':
app = QApplication(sys.argv)
window = SelectionWindow()
window.show()
window.ILCheck = True
sys.exit(app.exec_())
``` | just use `ILCheckbox.setCheckState(Qt.Checked)` after calling ILCheck.
You don't need signals here since you can call a slot directly.
If you want to use this feature more than once, you should consider writing a setter which changes the state of `self.ILCheck` and emits a signal.
Edit after your clarification:
* You can use the setter approach, but instead of setting the value of ILCheckbox directly, you should call `your_properly_named_and_defined_signal.emit()`. For more information about signal definitions see e.g. <http://www.pythoncentral.io/pysidepyqt-tutorial-creating-your-own-signals-and-slots/>.
* You'll have to connect your signal to a slot which will set the checkbox correctly. This connection could be made in the `__init__()` of your controller class.
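A minimal sketch of that setter-plus-signal idea (PyQt4; the class and signal names here are illustrative):
```
from PyQt4 import QtCore

class Model(QtCore.QObject):
    # emitted whenever the boolean flag changes
    ilCheckChanged = QtCore.pyqtSignal(bool)

    def __init__(self, parent=None):
        super(Model, self).__init__(parent)
        self._il_check = False

    def setILCheck(self, value):
        self._il_check = value
        self.ilCheckChanged.emit(value)

# wiring, e.g. in the controller's __init__:
# model.ilCheckChanged.connect(ILCheckbox.setChecked)
```
| 811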
3,014,223 | We build software using Hudson and Maven. We have C#, java and last, but not least PL/SQL sources (sprocs, packages, DDL, crud)
For C# and Java we do unit tests and code analysis, but we don't really know the health of our PL/SQL sources before we actually publish them to the target database.
### Requirements
There are a couple of things we wan't to test in the following priority:
1. Are the sources valid, hence "compilable"?
2. For packages, with respect to a certain database, would they compile?
3. Code Quality: Do we have code flaws like duplicates, too complex methods or other violations to a defined set of rules?
Also,
* the tool must run head-less (commandline, ant, ...)
* we want to do analysis on a partial code base (changed sources only)
### Tools
We did a little research and found the following tools that could potentially help:
* **[Cast Application Intelligence Platform (AIP)](http://www.castsoftware.com/Product/Application-Intelligence-Platform.aspx):** Seems to be a server that grasps information about "anything". Couldn't find a console version that would export in readable format.
* **[Toad for Oracle](http://www.toadsoft.com/toad_oracle.htm):** The Professional version is said to include something called Xpert validates a set of rules against a code base.
* **[Sonar](http://www.sonarsource.com) + [PL/SQL-Plugin](http://www.sonarsource.com/plugins/plugin-plsql/):** Uses Toad for Oracle to display code-health the sonar-way. This is for browsing the current state of the code base.
* **[Semantic Designs DMSToolkit](http://www.semanticdesigns.com/Products/DMS/DMSToolkit.html?Home=PLSQLTools):** Quite general analysis of source code base. Commandline available?
* **[Semantic Designs Clones Detector](http://www.semanticdesigns.com/Products/Clone/index.html?Home=PLSQLTools):** Detects clones. But also via command line?
* **[Fortify Source Code Analyzer](http://www.fortify.com/products/detect/in_development.jsp):** Seems to be focussed on security issues. But maybe it is extensible? *[more...](http://products.enterpriseitplanet.com/security/security/1101145596.html)*
So far, Toad for Oracle together with Sonar seems to be an elegant solution. But maybe we are missing something here?
Any ideas? Other products? Experiences?
### Related Questions on SO:
* [Any Static Code Analysis Tools for Stored Procedures?](https://stackoverflow.com/questions/531430/any-static-code-analysis-tools-for-stored-procedures)
* <https://stackoverflow.com/questions/839707/any-code-quality-tool-for-pl-sql>
* [Is there a static analysis tool for Python, Ruby, Sql, Cobol, Perl, and PL/SQL?](https://stackoverflow.com/questions/956104/is-there-a-static-analysis-tool-for-python-ruby-sql-cobol-perl-and-pl-sql) | 2010/06/10 | [
"https://Stackoverflow.com/questions/3014223",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11562/"
] | I think that this blog describes the needed process:
<http://www.theserverlabs.com/blog/?p=435>
Please check and let me know what you think about it. | Our approach is to keep each database object (tables, views, functions, packages, sprocs etc) in its own file under source control and have an integration server ([TeamCity](http://www.jetbrains.com/teamcity/), [Hudson](http://hudson-ci.org/) etc) do a nightly build of the database - from source - where it drops and recreates the schema before checking for compilation errors in the `user_errors` system table. This lets you know when someone has introduced compilation errors into the build.
The next step is to use something like [PLUTO](http://code.google.com/p/pluto-test-framework/) to add unit tests to your PL/SQL code and add those into the nightly build task. For us, this has involved having sample test datasets (also under source control) that allow us to get the database to a "known state" for the purposes of testing.
I've not found anything that helps us much with any of the above so it's mainly a collection of Ant tasks, custom shell scripts and wizardry, which basically apply the required DDL to an empty database and use `DBMS_UTILITY.COMPILE_SCHEMA()` to, uh, compile the schema. You can add more fancy stuff later, like back-tracing objects which fail to compile or fail tests to a specific *submit* in source control, and issue "blame mail".
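As a rough illustration of that compile-and-check step (not our actual scripts; this assumes the cx_Oracle driver, and the credentials and schema name are placeholders):
```
import cx_Oracle

conn = cx_Oracle.connect("build_user", "secret", "dbhost/service")
cur = conn.cursor()
# Recompile the schema, then fail the build if anything is invalid.
cur.callproc("DBMS_UTILITY.COMPILE_SCHEMA", ["MYSCHEMA"])
cur.execute("SELECT name, type, line, text FROM user_errors")
errors = cur.fetchall()
for name, obj_type, line, text in errors:
    print("%s %s at line %s: %s" % (obj_type, name, line, text))
if errors:
    raise SystemExit(1)  # break the nightly build
```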
I'd be really interested to see if anyone else has a better approach or if there's an off-the-shelf product that does this for me! | 814 |
17,410,970 | In my program, many processes may try to create a file if it doesn't currently exist.
Now I want to ensure that only one of the processes is able to create the file and the rest get an exception if it has already been created (a kind of process-safe and thread-safe open() implementation).
How can I achieve this in python.
Just for clarity: I want the file to be created if it doesn't exist, but if it already exists an exception should be thrown. And this should all happen atomically.
"https://Stackoverflow.com/questions/17410970",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1421499/"
] | In Python 2.x:
```
import os
fd = os.open('filename', os.O_CREAT | os.O_EXCL | os.O_WRONLY)  # O_WRONLY so fdopen(fd, 'w') works
with os.fdopen(fd, 'w') as f:
....
```
In Python 3.3+:
```
with open('filename', 'x') as f:
....
``` | If you're running on a Unix-like system, open the file like this:
```
f = os.fdopen(os.open(filename, os.O_CREAT | os.O_WRONLY | os.O_EXCL), 'w')
```
The `O_EXCL` flag to `os.open` ensures that the file will only be created (and opened) if it doesn't already exist, otherwise an `OSError` exception will be raised. The existence check and creation will be performed atomically, so you can have multiple threads or processes contend to create the file, and only one will come out successful.
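A minimal usage sketch handling the losing processes (Python 2-style errno check; the path name is illustrative):
```
import errno
import os

def create_new(path):
    # O_CREAT | O_EXCL makes check-and-create a single atomic step.
    fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    return os.fdopen(fd, 'w')

try:
    f = create_new('filename')
except OSError as e:
    if e.errno != errno.EEXIST:
        raise
    print('another process created the file first')
else:
    with f:
        f.write('data')
```
| 816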
69,499,962 | So I have this big .csv in my work that looks something like this:
```
Name| Adress| Email| Paid Value
John| x street | John@dmail.com| 0|
Chris| c street | Chris@dmail.com| 100|
Rebecca| y street| RebeccaFML|@dmail.com|177|
Bozo | z street| BozoSMH|@yahow.com|976|
```
As you can see, the .csv is separated by pipes, and the emails of the last two people have pipes in them, causing formatting problems.
There are only 2 customers with this problem, but they will have more and more entries every month, and we have to manually find them in the csv and change the email by hand. It is a very boring and time-consuming process because the file is that big.
We use Python to deal with data. I researched a bit and couldn't find anything to help me with it; any ideas?
Edit: So what I want is a way to change these email addresses automatically through code (like RebeccaFML|@dmail.com -> RebeccaFML@dmail.com). It doesn't need to be pandas or anything; I am accepting ideas of any sort. The main thing is I only know how to replace once I read the file in Python, but since these records have pipes in them, they don't read properly.
Thanks in advance.
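To make it concrete, a minimal sketch of the kind of clean-up I mean (it assumes the stray pipe always sits directly before the '@'; the file names are illustrative):
```
import re

def fix_line(line):
    # 'RebeccaFML|@dmail.com' -> 'RebeccaFML@dmail.com'
    return re.sub(r'\|(?=@)', '', line)

with open('big.csv') as src, open('fixed.csv', 'w') as dst:
    for line in src:
        dst.write(fix_line(line))
```
| 2021/10/08 | [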
"https://Stackoverflow.com/questions/69499962",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14166159/"
] | with subset using `dplyr`
you can use the code below
```
library(dplyr)
df %>% subset(!is.na(value) & bs_Scores != "bs_24" )
``` | A `dplyr` solution:
```r
library(tidyverse)
bs_scores <- tibble::tribble(
~bs_Scores, ~value,
"bs_0", 16.7,
"bs_1", 41.7,
"bs_12", 33.3,
"bs_24", NA,
"bs_0", 25,
"bs_1", 41.7,
"bs_12", NA,
"bs_24", 0,
"bs_0", 16.7,
"bs_1", 41.7,
"bs_12", 16.7,
"bs_24", 16.7,
"bs_0", NA
)
bs_scores %>%
filter(!(bs_Scores == "bs_24" & (is.na(value))))
#> # A tibble: 12 × 2
#> bs_Scores value
#> <chr> <dbl>
#> 1 bs_0 16.7
#> 2 bs_1 41.7
#> 3 bs_12 33.3
#> 4 bs_0 25
#> 5 bs_1 41.7
#> 6 bs_12 NA
#> 7 bs_24 0
#> 8 bs_0 16.7
#> 9 bs_1 41.7
#> 10 bs_12 16.7
#> 11 bs_24 16.7
#> 12 bs_0 NA
```
Created on 2021-10-11 by the [reprex package](https://reprex.tidyverse.org) (v2.0.1) | 817 |
50,675,758 | Please help me understand some asyncio things.
I want to figure out whether it's possible to do the following:
I have a synchronous function that, for example, creates some data in a remote API (the API can return success or fail):
```
def sync_func(url):
... do something
return result
```
I have coroutine to run that sync operation in executor:
```
async def coro_func(url):
loop = asyncio.get_event_loop()
fn = functools.partial(sync_func, url)
return await loop.run_in_executor(None, fn)
```
Next I want to do something like
1. If the remote API does not respond within 1 sec, I want to start processing the next url, but I still want to know the result of that first task (when the API finally sends a response) that was interrupted by the timeout. I wrap coro_func() in shield() to protect it from cancellation, but I have no idea how to check its result afterwards ...
```
list_of_urls = [url1, ... urlN]
map_of_task_results = {}

async def task_processing():
    for url in list_of_urls:
        res = await asyncio.wait_for(asyncio.shield(coro_func(url)), timeout=1)
        if res == 'success':
            return res
        else:
            map_of_task_results[url] = res
    return "all tasks were processed"
```
P.S. When I tried to access the shield(coro) result, it had a CancelledError exception... but I expected that there might be a result, because I 'shielded' the task.
```
try:
    task = asyncio.shield(coro_func(url))
    result = await asyncio.wait_for(task, timeout=API_TIMEOUT)
except TimeoutError as e:
    import ipdb; ipdb.set_trace()
    pending_tasks[api_details['api_url']] = task
```
```
ipdb> task
<Future cancelled created at
/usr/lib/python3.6/asyncio/base_events.py:276>
ipdb> task.exception
<built-in method exception of _asyncio.Future object at 0x7f7d41eeb588>
ipdb> task.exception()
*** concurrent.futures._base.CancelledError
```
| 2018/06/04 | [
"https://Stackoverflow.com/questions/50675758",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2235755/"
] | If you create a future (task) out of your coroutine before you shield it, you can always check it later. For example:
```
coro_task = loop.create_task(coro_func(url))
try:
result = await asyncio.wait_for(asyncio.shield(coro_task), API_TIMEOUT)
except asyncio.TimeoutError:
pending_tasks[api_details['api_url']] = coro_task
```
You can use [`coro_task.done()`](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future.done) to check if the task has completed in the meantime and call [`result()`](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future.result) if so or `await` it if not. If needed you can even use `shield`/`wait_for` on it again, and so on. | Ok, thanks @user4815162342, I figured out how to process tasks that were interrupted by the timeout - in general my solution now looks like:
```
def sync_func(url):
... do something probably long
return result
async def coro_func(url):
loop = asyncio.get_event_loop()
fn = functools.partial(sync_func, url)
return await loop.run_in_executor(None, fn)
async def waiter(pending_tasks):
count = 60
while not all(map(lambda x: x.done(), pending_tasks.values())) and count > 0:
logger.info("Waiting for pending tasks..")
await asyncio.sleep(1)
count -= 1
    # Finally process the results of the tasks that were pending
    print([task.result() for task in pending_tasks.values() if task.done()])  # only finished tasks have a result
async def task_processing(...):
list_of_urls = [url1, ... urlN]
pending_tasks = {}
for url in list_of_urls:
        result = None  # avoid NameError if the first attempt times out
        try:
task = asyncio.Task(coro_func(url))
result = await asyncio.wait_for(asyncio.shield(task), timeout=API_TIMEOUT)
except TimeoutError as e:
pending_tasks[url] = task
if not result or result != 'success':
continue
else:
print('Do something good here on first fast success, response to user ASAP in my case.')
break
# here start of pending task processing
loop = asyncio.get_event_loop()
loop.create_task(waiter(pending_tasks))
```
So I'm collecting the tasks that were interrupted by concurrent.futures.TimeoutError in the dict, then I run the waiter() coro, which waits up to 60 seconds for the pending tasks to reach the done state.
Additionally, my code is placed inside Tornado's RequestHandler, and Tornado uses the asyncio event loop.
So after N attempts to get a fast response from one url in the list, I can answer the user without losing the results of tasks that were initiated and interrupted by TimeoutError. (I can process them after I respond to the user; that was my main idea.)
I hope it saves a lot of time for somebody looking for the same :) | 819 |
64,341,672 | ```
totalquestions = int(5)
while totalquestions > 0 :
num1 = randint(0,9)
num2 = randint(0,9)
print(num1)
print(num2)
answer = input(str("What is num1 ** num2?"))
if answer == (num1 ** num2):
print("correct")
else:
print("false")
```
I'm trying to create a quiz program where the user is given 2 random numbers and has to find the correct exponentiation of the 2 numbers. Whenever I try to run this program I always get a false print statement, even if the value I've entered is correct. Sorry if this has a very simple solution; I'm still new to Python.
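A minimal sketch of a working version (`input()` returns a string, so the comparison needs an `int()` cast; the counter also needs decrementing, otherwise the loop never ends):
```
from random import randint

total_questions = 5
while total_questions > 0:
    num1 = randint(0, 9)
    num2 = randint(0, 9)
    answer = input("What is %d ** %d? " % (num1, num2))
    if int(answer) == num1 ** num2:  # cast: input() returns a string
        print("correct")
    else:
        print("false")
    total_questions -= 1
```
| 2020/10/13 | [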
"https://Stackoverflow.com/questions/64341672",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14444439/"
] | You need to collect the arguments first, *then* pass them to `Person`.
```
def getPeople(num):
people = []
for i in range(num):
name = input("What is the persons name?: ")
age = input("What is the persons age?: ")
computing = input("What is the persons Computing score?: ")
maths = input("What is the persons Maths score?: ")
english = input("What is the persons English score?: ")
people.append(Person(name, age, computing, maths, english))
return people
people = getPeople(5)
```
Note that there is a good case for using a class method here.
```
class Person:
def __init__(self, name, age, computing, maths, english):
self.name = name
self.age = age
self.computing = computing
self.maths = maths
self.english = english
@classmethod
def from_input(cls):
name = input("What is the persons name?: ")
age = input("What is the persons age?: ")
computing = input("What is the persons Computing score?: ")
maths = input("What is the persons Maths score?: ")
english = input("What is the persons English score?: ")
return cls(name, age, computing, maths, english)
def getPeople(num):
return [Person.from_input() for _ in range(num)]
``` | You have added an init method for the class, so you need to pass all those variables as arguments when you call the `Person()` class. As an example:
```
name = input()
age = input()
....
new_person = Person(name, age, ...)
people.append(new_person)
``` | 820 |
39,225,263 | The bottleneck of my code is currently a conversion from a Python list to a C array using ctypes, as described [in this question](https://stackoverflow.com/questions/4145775/how-do-i-convert-a-python-list-into-a-c-array-by-using-ctypes).
A small experiment shows that it is indeed very slow, in comparison with other Python instructions:
```
import timeit
setup="from array import array; import ctypes; t = [i for i in range(1000000)];"
print(timeit.timeit(stmt='(ctypes.c_uint32 * len(t))(*t)',setup=setup,number=10))
print(timeit.timeit(stmt='array("I",t)',setup=setup,number=10))
print(timeit.timeit(stmt='set(t)',setup=setup,number=10))
```
Gives:
```
1.790962941000089
0.0911122129996329
0.3200237319997541
```
I obtained these results with CPython 3.4.2. I get similar times with CPython 2.7.9 and Pypy 2.4.0.
I tried running the above code with `perf`, commenting the `timeit` instructions to run only one at a time. I get these results:
**ctypes**
```
Performance counter stats for 'python3 perf.py':
1807,891637 task-clock (msec) # 1,000 CPUs utilized
8 context-switches # 0,004 K/sec
0 cpu-migrations # 0,000 K/sec
59 523 page-faults # 0,033 M/sec
5 755 704 178 cycles # 3,184 GHz
13 552 506 138 instructions # 2,35 insn per cycle
3 217 289 822 branches # 1779,581 M/sec
748 614 branch-misses # 0,02% of all branches
1,808349671 seconds time elapsed
```
**array**
```
Performance counter stats for 'python3 perf.py':
144,678718 task-clock (msec) # 0,998 CPUs utilized
0 context-switches # 0,000 K/sec
0 cpu-migrations # 0,000 K/sec
12 913 page-faults # 0,089 M/sec
458 284 661 cycles # 3,168 GHz
1 253 747 066 instructions # 2,74 insn per cycle
325 528 639 branches # 2250,011 M/sec
708 280 branch-misses # 0,22% of all branches
0,144966969 seconds time elapsed
```
**set**
```
Performance counter stats for 'python3 perf.py':
369,786395 task-clock (msec) # 0,999 CPUs utilized
0 context-switches # 0,000 K/sec
0 cpu-migrations # 0,000 K/sec
108 584 page-faults # 0,294 M/sec
1 175 946 161 cycles # 3,180 GHz
2 086 554 968 instructions # 1,77 insn per cycle
422 531 402 branches # 1142,636 M/sec
768 338 branch-misses # 0,18% of all branches
0,370103043 seconds time elapsed
```
The code with `ctypes` has fewer page-faults than the code with `set` and the same number of branch-misses as the other two. The only thing I see is that there are more instructions and branches (but I still don't know why) and more context switches (but that is certainly a consequence of the longer run time rather than a cause).
I therefore have two questions:
1. Why is ctypes so slow?
2. Is there a way to improve performance, either with ctypes or with another library? (One commonly suggested approach is sketched below.)
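For illustration, a hedged sketch of that approach (not verified against this exact benchmark): build the `array` first (fast, as the timings above show) and take a zero-copy ctypes view of its buffer:
```
import ctypes
from array import array

t = list(range(1000000))
arr = array('I', t)                                    # fast C-level copy
c_arr = (ctypes.c_uint32 * len(arr)).from_buffer(arr)  # zero-copy view
```
| 2016/08/30 | [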
"https://Stackoverflow.com/questions/39225263",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4110059/"
] | Here's a little trick, it works for all sorts of situations including yours. But also for trailing comma's for example.
Concept
-------
Instead of printing your text directly, store it in an array like so:
```
$information_to_print = ['col1', 'col2', 'col3'];
$cols = [];
foreach ($information_to_print as $col) {
$cols[] = 'This is: ' . $col;
}
```
Now all you have to do is implode the array, using closing and opening tags as glue, and wrap in corresponding elements.
```
echo '<tr><td>' . implode('</td><td>', $cols) . '</td></tr>';
```
Implementation
--------------
In your particular case it would look something like this
```
<?php
$entriesConverted = [
['column_size' => 1, 'content' => 'Item 1', 'text_align' => 'Center'],
['column_size' => 0.5, 'content' => 'Item 2', 'text_align' => 'Center'],
['column_size' => 0.75, 'content' => 'Item 3', 'text_align' => 'Center'],
];
// Set the sum to 0 to keep things clean and simple
$sum = 0;
$blocks = [];
$block_i = 0;
// Echo the starting div
echo '<div class="content-block homepage-block row">', PHP_EOL;
// Loop through the new columns
foreach($entriesConverted as $newEntry){
for ($i=$newEntry['column_size']; $i <= 1; $i++) {
$sum += $i;
$newEntry['column_size'] = str_replace([0.25, 0.33, 0.5, 0.67, 0.75, 1], ['col-md-3', 'col-md-4', 'col-md-6', 'col-md-8', 'col-md-9', 'col-md-12'], $newEntry['column_size']);
$newEntry['text_align'] = str_replace(['Left', 'Center', 'Right', 'Justified'], ['text-left', 'text-center', 'text-right', 'text-justify'], $newEntry['text_align']);
if (!isset($blocks[$block_i])) { $blocks[] = ''; }
$blocks[$block_i] .= '<div class="' . $newEntry['column_size'] . ' ' .
$newEntry['text_align'] . '">' . $newEntry['content'] .
'</div>';
}
if($sum == 1){
$sum = 0;
++$block_i;
}
}
echo implode("\n</div>\n<div class=\"content-block homepage-block row\">\n", $blocks);
// Echo closing div
echo PHP_EOL, '</div>';
```
See a working version here: <http://ideone.com/28uXCT>
*Note: I added some newlines to keep the output readable*
**warning:** Be aware of a bug in your code. As you can see in the output of ideone, the total column span of the second row exceeds 12. | I think this might be easier if the row elements are inside the loop rather than outside. For example here's a quick pseudocode:
```
array items
sum = 0
loop through items
open row
print output for this item
increment sum
if sum is 1
set sum 0
close row
if this is not the last item in the array
open next row
``` | 823 |
18,785,063 | I've created virtualenv for Python 2.7.4 on Ubuntu 13.04. I've installed python-dev.
I have [the error](http://pastebin.com/YQfdYDVK) when installing numpy in the virtualenv.
Maybe you have some ideas on how to fix it? | 2013/09/13 | [
"https://Stackoverflow.com/questions/18785063",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1212100/"
] | The problem is `SystemError: Cannot compile 'Python.h'. Perhaps you need to install python-dev|python-devel.`
so do the following in order to obtain 'Python.h'
make sure apt-get and gcc are up to date
```
sudo apt-get update
sudo apt-get upgrade gcc
```
then install the python2.7-dev
```
sudo apt-get install python2.7-dev
```
and I see that you have most probably already done the above things.
pip will eventually spit out another error for not being able to write into `/user/bin/blahBlah/dist-packages/` or something like that because it couldn't figure out that it was supposed to install your desiredPackage (e.g. numpy) within the active env (the env created by virtualenv which you might have even changed directory to while doing all this)
so do this:
```
pip -E /some/path/env install desiredPackage
```
that should get the job done... hopefully :)
**---Edit---**
From PIP Version 1.1 onward, the command `pip -E` doesn't work. The following is an excerpt from the release notes of version 1.1 (<https://pip.pypa.io/en/latest/news.html>)
Removed `-E/--environment` option and `PIP_RESPECT_VIRTUALENV`; both use a restart-in-venv mechanism that's broken, and neither one is useful since every virtualenv now has pip inside it. Replace `pip -E path/to/venv install Foo` with `virtualenv path/to/venv && path/to/venv/pip install Foo` | This is probably because you do not have the `python-dev` package installed. You can install it like this:
```
sudo apt-get install python-dev
```
You can also install it via the Software Center:
![enter image description here](https://i.stack.imgur.com/mNiu0.png) | 824 |
22,099,882 | I need some help with the encoding of a list. I'm new to Python, sorry.
First, I'm using Python 2.7.3
I have two lists (entidad & valores), and I need to get them encoded or something like that.
My code:
```
import urllib
from bs4 import BeautifulSoup
import csv
sock = urllib.urlopen("http://www.fatm.com.es/Datos_Equipo.asp?Cod=01HU0010")
htmlSource = sock.read()
sock.close()
soup = BeautifulSoup(htmlSource)
form = soup.find("form", {'id': "FORM1"})
table = form.find("table")
entidad = [item.text.strip() for item in table.find_all('td')]
valores = [item.get('value') for item in form.find_all('input')]
valores.remove('Imprimir')
valores.remove('Cerrar')
header = entidad
values = valores
print values
out = open('tomate.csv', 'w')
w = csv.writer(out)
w.writerow(header)
w.writerow(values)
out.close()
```
the log: *UnicodeEncodeError: 'ascii' codec can't encode character*
any ideas? Thanks in advance!! | 2014/02/28 | [
"https://Stackoverflow.com/questions/22099882",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3361555/"
] | You should encode your data to utf-8 manually, csv.writer didnt do it for you:
```
w.writerow([s.encode("utf-8") for s in header])
w.writerow([s.encode("utf-8") for s in values])
#w.writerow(header)
#w.writerow(values)
``` | This appears to be the same type of problem as had been found here [UnicodeEncodeError in csv writer in Python](http://love-python.blogspot.com/2012/04/unicodeencodeerror-in-csv-writer-in.html)
>
> UnicodeEncodeError in csv writer in Python
>
> Today I was writing a
> program that generates a csv file after some processing. But I got the
> following error while trying on some test data:
>
>
> writer.writerow(csv\_li) UnicodeEncodeError: 'ascii' codec can't encode
> character u'\xbf' in position 5: ordinal not in range(128)
>
>
> I looked into the documentation of csv module in Python and found a
> class named UnicodeWriter. So I changed my code to
>
>
> writer = UnicodeWriter(open("filename.csv", "wb"))
>
>
> Then I tried to run it again. It got rid of the previous
> UnicodeEncodeError but got into another error.
>
>
> self.writer.writerow([s.encode("utf-8") for s in row]) AttributeError:
> 'int' object has no attribute 'encode'
>
>
> So, before writing the list, I had to change every value to string.
>
>
> row = [str(item) for item in row]
>
>
> I think this line can be added in the writerow function of
> UnicodeWriter class.
>
>
> | 834 |
41,286,526 | I am trying to set up a queue listener for Laravel and cannot seem to get supervisor working correctly. I get the following error when I run `supervisorctl reload`:
`error: <class 'socket.error'>, [Errno 2] No such file or directory: file: /usr/lib/python2.7/socket.py line: 228`
The file DOES exist. If I try to run `sudo supervisorctl` I get this:
`unix:///var/run/supervisor.sock no such file`.
I've tried reinstalling supervisor and that did not work either. Not sure what to do here.
I'm running Laravel Homestead (Ubuntu 16.04).
Result of `service supervisor status`:
```
vagrant@homestead:~/Code$ sudo service supervisor status
● supervisor.service - Supervisor process control system for UNIX
   Loaded: loaded (/lib/systemd/system/supervisor.service; enabled; vendor preset: enabled)
   Active: activating (auto-restart) (Result: exit-code) since Thu 2016-12-22 11:06:21 EST; 41s ago
     Docs: http://supervisord.org
  Process: 23154 ExecStop=/usr/bin/supervisorctl $OPTIONS shutdown (code=exited, status=0/SUCCESS)
  Process: 23149 ExecStart=/usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf (code=exited, status=2)
 Main PID: 23149 (code=exited, status=2)
```
| 2016/12/22 | [
"https://Stackoverflow.com/questions/41286526",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1965066/"
] | You should run `sudo service supervisor start` when you are in the supervisor dir.
Worked for me. | I had a very similar problem (Ubuntu 18.04) and searched similar threads to no avail so answering here with some more comprehensive answers.
Lack of a sock file or socket error is only an indicator that supervisor is not running. If a simple restart doesn't work its either 1. not installed, or 2. failing to start. In my case nothing was being logged to the supervisor.log file for me to know why it was failing until I ran the following command (-n to run in foreground) only to find out that there was a leftover configuration file for a project that had been deleted that I missed.
```
/usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf
```
Once I deleted the bad/leftover file in the conf.d folder and started it back up with `sudo service supervisor start` everything worked.
Here are some comprehensive steps you can take.
1. Is supervisor installed? `dpkg -l | grep supervisor` If not reinstall `sudo apt install supervisor`
2. Are all instances of supervisor stopped? `systemctl stop supervisor`. Lingering supervisor processes can be found with `ps aux | grep supervisor` and then killed with `kill -9 PID`.
3. Is supervisor.conf in the right location `/etc/supervisor/supervisor.conf` and there are no syntax errors? Reinstall from package would correct this.
4. Move your specific files in conf.d/ temporarily out of the folder to try and start with no additional config files. If it starts right up `sudo service supervisor start` the likelihood of an error in your project .conf file exists.
5. Check status with `sudo service supervisor status`.
6. Move your .conf files one by one back into conf.d/ and restart `sudo service supervisor restart`. Be sure to check with `sudo service supervisor status` between. If it fails you know which .conf file has an issue and can ask for specific help.
7. check everything is running with `supervisorctl status` and if not start with `supervisorctl start all`. | 835 |
18,995,555 | I'm trying to check whether the short int's digits are contained in the long int. Instead this came out:
```
long int: 198381998
short int: 19
Found a match at 0
Found a match at 1
Found a match at 2
Found a match at 3
Found a match at 4
Found a match at 5
Found a match at 6
Found a match at 7
```
It's supposed to look like this (the correct output):
```
long int: 198381998
short int: 19
Found a match at 0
Found a match at 5
```
Code:
```
longInt = ( input ("long int: "))
floatLong = float (longInt)
shortInt = ( input ("short int: "))
floatShort = float (shortInt)
max_digit = int (math.log10(floatLong)) #Count the no. of long int
i = int(math.log10(floatShort)) # Count the no. shortInt that is being input
for string in range (max_digit):
if ( shortInt in longInt): # Check whether there is any digit in shortInt
# that contains anything inside longInt
print ( "Found a match at ", string)
```
Without using any built-in Python functions, lists, or string methods.
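To make the intent concrete, a sketch using only arithmetic (`math.log10` as in my code above; windows with leading zeros would need extra care):
```
import math

def find_matches(long_int, short_int):
    n_long = int(math.log10(long_int)) + 1    # digit count of the long int
    n_short = int(math.log10(short_int)) + 1  # digit count of the short int
    for i in range(n_long - n_short + 1):
        # drop i leading digits, keep the next n_short digits
        window = (long_int // 10 ** (n_long - n_short - i)) % 10 ** n_short
        if window == short_int:
            print("Found a match at", i)

find_matches(198381998, 19)  # -> matches at 0 and 5
```
| 2013/09/25 | [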
"https://Stackoverflow.com/questions/18995555",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2811732/"
] | You're passing `allData` as an argument to the mapping, but it isn't defined anywhere. You want `data.users` instead (*not* `data` because then `ko.mapping.fromJSON` will return a single object with one key, `users` whose value will be an `observableArray`; you'll confuse Knockout if you try to use that object as the value of another `observableArray`, namely `self.users`). | Switching to this .ajax call seemed to resolve the issue.
```
// Load initial state from server, convert it to User instances, then populate self.users
$.ajax({
url: '/sws/users/index',
dataType: 'json',
type: 'POST',
success: function (data) {
self.users(data['users']);
console.log(data['users']);
}
});
``` | 845 |
63,087,586 | In my views.py file of my Django application I'm trying to load the 'transformers' library with the following command:
```
from transformers import pipeline
```
This works in my local environment, but on my Linux server at Linode, when I try to load my website, the page tries to load for 5 minutes and then I get a Timeout error. I don't understand what is going on; I know I have installed the library correctly. I have also run the same code in the Python shell on my server and it loads fine. It's just that if I load it in my Django views.py file, no page of my website loads.
My server: Ubuntu 20.04 LTS, Nanode 1GB: 1 CPU, 25GB Storage, 1GB RAM
Library: transformers==3.0.2
I also have the same problem when I try to load tensorflow. All the other libraries are loading fine, like pytorch and pandas etc. I've been trying to solve this problem for more than a week; I've also changed hosts from GCP to Linode, but it's still the same.
**Edit:** I created a new server and installed everything from scratch and used a virtualenv this time, but still its the same problem. Following are the installed libraries outputted from `pip freeze`:
```
asgiref==3.2.10
certifi==2020.6.20
chardet==3.0.4
click==7.1.2
Django==3.0.7
djangorestframework==3.11.0
filelock==3.0.12
future==0.18.2
idna==2.10
joblib==0.16.0
numpy==1.19.1
packaging==20.4
Pillow==7.2.0
pyparsing==2.4.7
pytz==2020.1
regex==2020.7.14
requests==2.24.0
sacremoses==0.0.43
sentencepiece==0.1.91
six==1.15.0
sqlparse==0.3.1
tokenizers==0.8.1rc1
torch==1.5.1+cpu
torchvision==0.6.1+cpu
tqdm==4.48.0
transformers==3.0.2
urllib3==1.25.10
```
I also know the transformers library is installed, because if I try to import some library that doesn't exist then I simply get an error, like I should. But in this case it just loads forever and doesn't output any error. This is so bizarre. | 2020/07/25 | [
"https://Stackoverflow.com/questions/63087586",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4823067/"
] | Maybe you just have to create or update your *requirements.txt* file.
Here is the command : `pip freeze > requirements.txt` | Based on this [answer](https://serverfault.com/a/514251)
>
> Some third party packages for Python which use C extension modules, and this includes scipy and numpy, will only work in the Python main interpreter and cannot be used in sub interpreters as mod\_wsgi by default uses.
>
>
>
`transformers` library uses numpy, so you should force the WSGI application to run in the main interpreter of the process by changing apache config:
```
## open apache config
$ nano /etc/apache2/sites-enabled/000-default.conf
## add this line to apache config
WSGIApplicationGroup %{GLOBAL}
## restart apache
$ systemctl restart apache2
```
now it works!!
for more information visit the link above. | 846 |
23,728,065 | I have been banging my head against the wall with this for long enough that I am okay to turn here at this point.
I have a page with iframe:
```
<iframe frameborder="0" allowtransparency="true" tabindex="0" src="" title="Rich text editor, listing_description" aria-describedby="cke_18" style="width:100%;height:100%">
```
When I get by xpath using:
`'//*[@aria-describedby="cke_18"]'`
I get a web element where:
```
>>> elem
<selenium.webdriver.remote.webelement.WebElement object at 0x104327b50>
>>> elem.id
u'{3dfc8264-71bc-c948-882a-acd6a8b93ab5}'
>>> elem.is_displayed
<bound method WebElement.is_displayed of <selenium.webdriver.remote.webelement.WebElement object at 0x104327b50>>
```
Now, when I try to extract to put information in this iframe, I get something along the following error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Applications/Spyder.app/Contents/Resources/lib/python2.7/spyderlib/widgets/externalshell/sitecustomize.py", line 560, in debugfile
    debugger.run("runfile(%r, args=%r, wdir=%r)" % (filename, args, wdir))
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/bdb.py", line 400, in run
    exec cmd in globals, locals
  File "<string>", line 1, in <module>
  File "/Applications/Spyder.app/Contents/Resources/lib/python2.7/spyderlib/widgets/externalshell/sitecustomize.py", line 540, in runfile
    execfile(filename, namespace)
  File "/Users/jasonmellone/Documents/PythonProjects/nakedApts.py", line 88, in <module>
    a = elem.find_element_by_xpath(".//*")
  File "/Library/Python/2.7/site-packages/selenium-2.41.0-py2.7.egg/selenium/webdriver/remote/webelement.py", line 201, in find_element_by_xpath
    return self.find_element(by=By.XPATH, value=xpath)
  File "/Library/Python/2.7/site-packages/selenium-2.41.0-py2.7.egg/selenium/webdriver/remote/webelement.py", line 377, in find_element
    {"using": by, "value": value})['value']
  File "/Library/Python/2.7/site-packages/selenium-2.41.0-py2.7.egg/selenium/webdriver/remote/webelement.py", line 370, in _execute
    return self._parent.execute(command, params)
  File "/Library/Python/2.7/site-packages/selenium-2.41.0-py2.7.egg/selenium/webdriver/remote/webdriver.py", line 166, in execute
    self.error_handler.check_response(response)
  File "/Library/Python/2.7/site-packages/selenium-2.41.0-py2.7.egg/selenium/webdriver/remote/errorhandler.py", line 164, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: u'Unable to locate element: {"method":"xpath","selector":".//*"}' ; Stacktrace:
    at FirefoxDriver.prototype.findElementInternal_ (file:///var/folders/8x/0msd5dd13l9453ff9739rj7w0000gn/T/tmpmH4ARe/extensions/fxdriver@googlecode.com/components/driver_component.js:8905)
    at FirefoxDriver.prototype.findChildElement (file:///var/folders/8x/0msd5dd13l9453ff9739rj7w0000gn/T/tmpmH4ARe/extensions/fxdriver@googlecode.com/components/driver_component.js:8917)
    at DelayedCommand.prototype.executeInternal_/h (file:///var/folders/8x/0msd5dd13l9453ff9739rj7w0000gn/T/tmpmH4ARe/extensions/fxdriver@googlecode.com/components/command_processor.js:10884)
    at DelayedCommand.prototype.executeInternal_ (file:///var/folders/8x/0msd5dd13l9453ff9739rj7w0000gn/T/tmpmH4ARe/extensions/fxdriver@googlecode.com/components/command_processor.js:10889)
    at DelayedCommand.prototype.execute/< (file:///var/folders/8x/0msd5dd13l9453ff9739rj7w0000gn/T/tmpmH4ARe/extensions/fxdriver@googlecode.com/components/command_processor.js:10831)
```
Now, I, not being a selenium developer, have no idea what this means.
When I run the following code:
```
elem = Helper.getElementByxPath(mydriver,'//*[@aria-describedby="cke_18"]',"ABC");
mydriver.switch_to_frame(elem);
```
The above runs where `Helper.getElementByxPath` is:
```
def getElementByxPath(mydriver,xPath,valueString):
try:
a = mydriver.find_element_by_xpath(xPath);
a.send_keys(valueString);
return a;
except:
print "Unexpected error:", sys.exc_info()[0];
return 0;
a = elem.find_element_by_xpath(".//*")
```
Giving me the following:
```
>>> elem.id
u'{8be4819b-f828-534a-9eb2-5b791f42b99a}'
```
And the following statement:
```
a = elem.find_element_by_xpath(".//*")
```
Gives me another huge error.
The frustrating part to me is the following:
1. I don't need to get information out of the embedded input in the iframe, I just want to sendkeys.
2. I am **HAPPY** to just "Keys.TAB" until I reach the proper box, and Cursor.location.element.send\_keys (pseudo code).
3. I just want to type text on the page as the CURSOR IS ALREADY IN THE RIGHT PLACE (can't I just do this easily?)
My goal is to just send keys here, not to do anything deeper, and I cannot seem to solve this problem without getting something like the above issue.
Is there a way to solve this? I am quite defeated and hope someone has an answer.
Thank you!
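For reference, a common sketch for typing into a CKEditor frame (the editable `<body>` lookup is an assumption about the editor's markup; selenium 2.x API as above):
```
driver.switch_to_frame(elem)                    # enter the editor iframe
body = driver.find_element_by_tag_name('body')  # CKEditor's editable body
body.send_keys('some text')
driver.switch_to_default_content()              # back to the main page
```
| 2014/05/19 | [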
"https://Stackoverflow.com/questions/23728065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/360826/"
] | you need Wget for Windows, you can download it from here <http://gnuwin32.sourceforge.net/packages/wget.htm>
open notepad and paste your code, save as "myscript.bat"
make sure it doesn't have .txt
put your "myscript.bat" in the same folder with wget.exe
now try it, it should work | For a newer firmware version, U need to add referer and user-agent. Try this, work for me:
```
wget -qO- --user=admin --password=admin --referer http://192.168.0.1 --user-agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:21.0) Gecko/20100101 Firefox/21.0" http://192.168.0.1/userRpm/SysRebootRpm.htm?Reboot=Reboot
``` | 847 |
55,603,451 | I am trying to make a program that analyzes stocks, and right now I wrote a simple Python script to plot moving averages. Extracting the CSV file from the native path works fine, but when I get it from the web, it doesn't work. It keeps displaying an error: 'list' object has no attribute 'Date'
It worked fine with .CSV, but the web thing is messed up.
If I run print(df), it displays the table really weirdly.
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
df = pd.read_html("https://finance.yahoo.com/quote/AAPL/history?period1=1428469200&period2=1554699600&interval=1d&filter=history&frequency=1d")
x = df.Date
y = df.Close
a = df['Close'].rolling(50, min_periods=50).mean()
b = df['Close'].rolling(200, min_periods=200).mean()
plt.plot(x, y)
plt.plot(a)
plt.plot(b)
plt.savefig("AAPL Stuff")
```
I ran in Jupyter Notebook.
I expected the output out[1] an image of the chart, but I got the error:
```
AttributeError Traceback (most recent call last)
<ipython-input-18-d97fbde31cef> in <module>
4
5 df = pd.read_html("https://finance.yahoo.com/quote/AAPL/history?period1=1428469200&period2=1554699600&interval=1d&filter=history&frequency=1d")
----> 6 x = df.Date
7 y = df.Close
8
AttributeError: 'list' object has no attribute 'Date'
``` | 2019/04/10 | [
"https://Stackoverflow.com/questions/55603451",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11337553/"
] | The data got placed in a (one-element) list.
If you do this, after the `read_html` call, it should work:
```
df = df[0]
``` | Did you mean to access the Date feature from the DataFrame object?
If that is the case, then change:
`x = df.Date` to `x = df['Date']`
`y = df.Close` to `y = df['Close']`
EDIT:
Also: `df.plot(x='Date', y='Close', style='o')` works instead of `plt.plot`. | 848
3,949,727 | For code:
```
#!/usr/bin/python
src = """
print '!!!'
import os
"""
obj = compile(src, '', 'exec')
eval(obj, {'__builtins__': False})
```
I get output:
```
!!!
Traceback (most recent call last):
File "./test.py", line 9, in <module>
eval(obj, {'__builtins__': False})
File "", line 3, in <module>
ImportError: __import__ not found
```
Both 'print' and 'import' are language constructs. Why does 'eval' restrict the use of 'import' but not 'print'?
P.S. I'm using python 2.6
UPDATE: Question is not "Why does import not work?" but "Why does print work?" Are there some architecture restrictions or something else? | 2010/10/16 | [
"https://Stackoverflow.com/questions/3949727",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23712/"
] | The `__import__` method is invoked by the `import` keyword: [python.org](http://docs.python.org/library/functions.html?highlight=import#__import__)
If you want to be able to import a module you need to leave the `__import__` method in the builtins:
```
src = """
print '!!!'
import os
"""
obj = compile(src, '', 'exec')
eval(obj, {'__builtins__': {'__import__':__builtins__.__import__}})
``` | In your `eval` the call to `import` is made successfully; however, `import` makes use of the `__import__` method in builtins, which you have made unavailable in the globals you passed. This is the reason why you are seeing
```
ImportError: __import__ not found
```
`print` doesn't depend on any builtins, so it works OK.
You could pass just `__import__` from builtins with something like:
```
eval(obj, {'__builtins__' : {'__import__' :__builtins__.__import__}})
``` | 849 |
26,797,378 | I'm developing an application built on the Python Bottle framework. It's amazing and I want to host it somewhere, but of course I need a service where I can run the server.
What are the best options available (cost-wise too)?
Thanks! | 2014/11/07 | [
"https://Stackoverflow.com/questions/26797378",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2344954/"
] | I work for [PythonAnywhere](http://www.pythonanywhere.com/), a Python hosting company, so obviously I think we have a great Bottle hosting solution -- check it out :-)
Some of our competitors -- ones where we feel proud if a customer leaves them to move to us, or where we don't feel too depressed if they leave us to go to them, are Webfaction, Heroku, and (if you don't mind doing OS maintenance yourself and setting up your own front-end webserver) Digital Ocean and Amazon Web Services.
There are definitely other good hosts, but those are the ones that spring to mind. | The easiest way to host is use Ngrok ,its an open source tool where use get a temporary link where you can access your local host server on any machine through that link .you can download it from here <https://ngrok.com/> | 854 |
12,667,057 | I want my Python program to be deterministic, so I have been using `OrderedDicts` extensively throughout the code. Unfortunately, while debugging memory leaks today, I discovered that OrderedDicts have a custom `__del__` method, making them uncollectable whenever there's a cycle. It's rather unfortunate that there's no warning in the documentation about this.
So what can I do? Is there any deterministic dictionary in the Python standard library that plays nicely with gc? I'd really hate to have to roll my own, especially over a stupid one line function like this.
Also, is this something I should file a bug report for? I'm not familiar with the Python library's procedures, and what they consider a bug.
Edit: [It appears that this is a known bug that was fixed back in 2010. I must have somehow gotten a really old version of 2.7 installed.](http://bugs.python.org/issue9825) I guess the best approach is to just include a monkey patch in case the user happens to be running a broken version like me.
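A guarded sketch of such a monkey patch (it only strips `__del__` on builds where `OrderedDict` actually defines it):
```
import collections

# Fixed upstream in issue 9825; only patch the broken builds.
if '__del__' in vars(collections.OrderedDict):
    del collections.OrderedDict.__del__
```
| 2012/10/01 | [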
"https://Stackoverflow.com/questions/12667057",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1420279/"
] | If the presence of the \_\_del\_\_ method is problematic for you, just remove it:
```
>>> import collections
>>> del collections.OrderedDict.__del__
```
You will gain the ability to use OrderedDicts in a reference cycle. You will lose having the OrderedDict free all its resources immediately upon deletion. | It sounds like you've tracked down a bug in `OrderedDict` that was fixed at some point after your version of 2.7. If it wasn't in any actual released versions, maybe you can just ignore it. But otherwise, yeah, you need a workaround.
I would suggest that, instead of monkeypatching `collections.OrderedDict`, you should instead use the [Equivalent OrderedDict recipe that runs on Python 2.4 or later](http://code.activestate.com/recipes/576693/) linked in [the documentation](http://docs.python.org/library/collections.html#collections.OrderedDict) for `collections.OrderedDict` (which does not have the excess `__del__`). If nothing else, when someone comes along and says "I need to run this on 2.6, how much work is it to port" the answer will be "a little less"…
But two more points:
>
> rewriting everything to avoid cycles is a huge amount of effort.
>
>
>
The fact that you've got cycles in your dictionaries is a red flag that you're doing something wrong (typically using strong refs for a cache or for back-pointers), which is likely to lead to other memory problems, and possibly other bugs. So that effort may turn out to be necessary anyway.
You still haven't explained what you're trying to accomplish; I suspect the "deterministic" thing is just a red herring (especially since `dict`s actually are deterministic), so the best solution is `s/OrderedDict/dict/g`.
But if determinism is necessary, you can't depend on the cycle collector, because it's not deterministic, and that means your finalizer ordering and so on all become non-deterministic. It also means your memory usage is non-deterministic—you may end up with a program that stays within your desired memory bounds 99.999% of the time, but not 100%; if those bounds are critically important, that can be worse than failing every time.
Meanwhile, the iteration order of dictionaries isn't specified, but in practice, CPython and PyPy iterate in the order of the hash buckets, not the id (memory location) of either the value or the key, and whatever Jython and IronPython do (they may be using some underlying Java or .NET collection that has different behavior; I haven't tested), it's unlikely that the memory order of the keys would be relevant. (How could you efficiently iterate a hash table based on something like that?) You may have confused yourself by testing with objects that use `id` for `hash`, but most objects hash based on value.
For example, take this simple program:
```
d={}
d[0] = 0
d[1] = 1
d[2] = 2
for k in d:
print(k, d[k], id(k), id(d[k]), hash(k))
```
If you run it repeatedly with CPython 2.7, CPython 3.2, and PyPy 1.9, the keys will always be iterated in order 0, 1, 2. The `id` columns may *also* be the same each time (that depends on your platform), but you can fix that in a number of ways—insert in a different order, reverse the order of the values, use string values instead of ints, assign the values to variables and then insert those variables instead of the literals, etc. Play with it enough and you can get every possible order for the `id` columns, and yet the keys are still iterated in the same order every time.
The order of iteration is not *predictable*, because to predict it you need the function for converting `hash(k)` into a bucket index, which depends on information you don't have access to from Python. Even if it's just `hash(k) % self._table_size`, unless that `_table_size` is exposed to the Python interface, it's not helpful. (It's a complex function of the sequence of inserts and deletes that could in principle be calculated, but in practice it's silly to try.)
But it is *deterministic*; if you insert and delete the same keys in the same order every time, the iteration order will be the same every time. | 855 |
63,336,512 | I have a python flask application which uses tabula internally to extract tables from pdf files.After I do 'cf push' and run the application on PCF,i load the pdf file to the application to read the table. When the app tries to extract the tabular data,I get the below error.
```
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] [2020-08-10 08:08:40,134] ERROR in app: Exception on / [POST]
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] Traceback (most recent call last):
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/site-packages/tabula/io.py", line 80, in _run
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] result = subprocess.run(
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/subprocess.py", line 489, in run
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] with Popen(*popenargs, **kwargs) as process:
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/subprocess.py", line 854, in __init__
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] self._execute_child(args, executable, preexec_fn, close_fds,
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/subprocess.py", line 1702, in _execute_child
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] raise child_exception_type(errno_num, err_msg, err_filename)
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] FileNotFoundError: [Errno 2] No such file or directory: 'java'
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] During handling of the above exception, another exception occurred:
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] Traceback (most recent call last):
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/site-packages/flask/app.py", line 2446, in wsgi_app
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] response = self.full_dispatch_request()
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/site-packages/flask/app.py", line 1951, in full_dispatch_request
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] rv = self.handle_user_exception(e)
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/site-packages/flask/app.py", line 1820, in handle_user_exception
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] reraise(exc_type, exc_value, tb)
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] raise value
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/site-packages/flask/app.py", line 1949, in full_dispatch_request
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] rv = self.dispatch_request()
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/site-packages/flask/app.py", line 1935, in dispatch_request
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] return self.view_functions[rule.endpoint](**req.view_args)
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "app.py", line 55, in index
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] wireListDF = pdfExtractorOBJ.getWireListDataFrame()
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/app/WireHarnessPDFExtractor.py", line 158, in getWireListDataFrame
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] self.readBTPPDF()
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/app/WireHarnessPDFExtractor.py", line 31, in readBTPPDF
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] df = tabula.read_pdf(self.pdf_path, pages='all', stream=True ,guess=True, encoding="utf-8",
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/site-packages/tabula/io.py", line 322, in read_pdf
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] output = _run(java_options, kwargs, path, encoding)
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] File "/home/vcap/deps/0/python/lib/python3.8/site-packages/tabula/io.py", line 91, in _run
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] raise JavaNotFoundError(JAVA_NOT_FOUND_ERROR)
2020-08-10T13:38:40.135+05:30 [APP/PROC/WEB/0] [ERR] tabula.errors.JavaNotFoundError: `java` command is not found from this Python process.Please ensure Java is installed and PATH is set for `java`
2020-08-10T13:38:40.136+05:30 [APP/PROC/WEB/0] [ERR] 10.255.10.112 - - [10/Aug/2020 08:08:40] "[35m[1mPOST / HTTP/1.1[0m" 500 -
```
I know tabula has Java dependencies; any suggestions on how to set up the python flask application with tabula so that it can be used on the PCF platform? | 2020/08/10 | [
"https://Stackoverflow.com/questions/63336512",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12403005/"
] | This is a Java path error. Your Python runtime is not able to find Java at all. You need to ensure that you export Java's `bin` directory in your PATH variable. If you are running this process on linux, you can export `export PATH=<your java bin dir>:$PATH` | The highlights:
* You need multiple buildpacks, one for Java and one for Python
* You want to use apt-buildpack, not the Java buildpack though
* You need to set PATH to point to the location where the apt-buildpack installs Java (or have your app look for Java in this specific place)
* You can set PATH in a `.profile` file (a sketch follows below).
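A rough sketch of that last point; the JRE path below is an assumption, since it depends on where the apt-buildpack unpacks Java inside your droplet:

```
# .profile -- runs before the app starts; adjust the path to where the
# apt-buildpack actually placed the JRE in your container
export JAVA_HOME="$HOME/deps/0/apt/usr/lib/jvm/default-java"
export PATH="$JAVA_HOME/bin:$PATH"
```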
All of this is explained in [my answer to this similar question](https://stackoverflow.com/a/62927340/1585136). | 858 |
27,718,277 | Well, I have an assignment to implement DES and I chose python. The only problem is I can't figure out how to XOR the bits of a string or byte string; I could XOR them manually if only I could manage to read the 1s and 0s in them.
Example:
```
s1 = b'abc'
s2 = b'efg'
s3 = XOR(s1,s2)  # my own method
```
How can I XOR them or how can I get the binary values of 1s and 0s that represent them?
If you use any python methods explain them, I'm relatively new to this language. | 2014/12/31 | [
"https://Stackoverflow.com/questions/27718277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3417451/"
] | First you need to `zip` your strings, then use `ord` (in `python 2`) and `^` for each pair of characters:
```
>>> s1 = b'abc'
>>> s2 = b'efg'
>>> ''.join(chr(ord(i)^ord(j)) for i,j in zip(s1,s2))
'\x04\x04\x04'
```
the [`ord()`](https://docs.python.org/2/library/functions.html#ord) function returns the value of the byte when the argument is an 8-bit string. But if you are using `python 3` you don't need `ord`:
```
>>> ''.join(chr(i^j) for i,j in zip(s1,s2))
'\x04\x04\x04'
```
>
> Since bytes objects are sequences of integers (akin to a tuple), for a bytes object b, `b[0]` will be an integer, while `b[0:1]` will be a bytes object of length 1. (This contrasts with text strings, where both indexing and slicing will produce a string of length 1)
>
>
>
```
example :
>>> s1[0]
97
>>> s1[0:1]
b'a'
```
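If you want the result back as `bytes` rather than text (usually what DES-style bit work needs), here is a small Python 3 helper along the same lines:

```
def xor_bytes(a, b):
    # XOR corresponding bytes; the result is truncated to the shorter input
    return bytes(x ^ y for x, y in zip(a, b))

assert xor_bytes(b'abc', b'efg') == b'\x04\x04\x04'
assert xor_bytes(xor_bytes(b'abc', b'efg'), b'efg') == b'abc'  # XOR is its own inverse
```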
---
and if you want to convert your strings back, you first need to convert the `XOR`ed string to binary, which you can do with the `binascii.a2b_qp` function:
```
>>> import binascii
>>> s=''.join(chr(i^j) for i,j in zip(s1,s2))
>>> s4=binascii.a2b_qp(s)
>>> ''.join(chr(i^j) for i,j in zip(s1,s4))
'efg'
``` | ```
>>> b''.join(chr(ord(a) ^ ord(b)) for a, b in zip(b'abc', b'efg'))
'\x04\x04\x04'
``` | 859 |
13,768,118 | I'm building a python app using the UPS Shipping API. On sending the request (see below) I keep getting the following error:
```
UPS Error 9370701: Invalid processing option.
```
I'm not sure what this means and there isn't much more info in the API documentation. Could someone help me figure out what's going wrong here or give some more information about the cause of this error.
```
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:ns0="http://www.ups.com/XMLSchema/XOLTWS/Common/v1.0" xmlns:ns1="http://schemas.xmlsoap.org/soap/envelope/" xmlns:security="http://www.ups.com/XMLSchema/XOLTWS/UPSS/v1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns2="http://www.ups.com/XMLSchema/XOLTWS/FreightShip/v1.0" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
<SOAP-ENV:Header>
<security:UPSSecurity>
<security:UsernameToken>
<security:Username>winkerVSbecks</security:Username>
<security:Password>myPassword</security:Password>
</security:UsernameToken>
<security:ServiceAccessToken>
<security:AccessLicenseNumber>myLicenseNumber</security:AccessLicenseNumber>
</security:ServiceAccessToken>
</security:UPSSecurity>
</SOAP-ENV:Header>
<ns1:Body>
<ns2:FreightShipRequest>
<ns2:Request>
<ns0:RequestOption>1</ns0:RequestOption>
<ns0:RequestOption>Shipping</ns0:RequestOption>
</ns2:Request>
<ns2:Shipment>
<ns2:ShipFrom>
<ns2:Name>Adobe</ns2:Name>
<ns2:Address>
<ns2:AddressLine>560 Front St. W</ns2:AddressLine>
<ns2:AddressLine></ns2:AddressLine>
<ns2:City>Toronto</ns2:City>
<ns2:StateProvinceCode>ON</ns2:StateProvinceCode>
<ns2:PostalCode>M5V1C1</ns2:PostalCode>
<ns2:CountryCode>CA</ns2:CountryCode>
</ns2:Address>
<ns2:Phone>
<ns2:Number>6478340000</ns2:Number>
</ns2:Phone>
</ns2:ShipFrom>
<ns2:ShipperNumber>535T8T</ns2:ShipperNumber>
<ns2:ShipTo>
<ns2:Name>Apple</ns2:Name>
<ns2:Address>
<ns2:AddressLine>313 Richmond St. E</ns2:AddressLine>
<ns2:AddressLine></ns2:AddressLine>
<ns2:City>Toronto</ns2:City>
<ns2:StateProvinceCode>ON</ns2:StateProvinceCode>
<ns2:PostalCode>M5V4S7</ns2:PostalCode>
<ns2:CountryCode>CA</ns2:CountryCode>
</ns2:Address>
<ns2:Phone>
<ns2:Number>4164530000</ns2:Number>
</ns2:Phone>
</ns2:ShipTo>
<ns2:PaymentInformation>
<ns2:Payer>
<ns2:Name>Spiderman</ns2:Name>
<ns2:Address>
<ns2:AddressLine>560 Front St. W</ns2:AddressLine>
<ns2:City>Toronto</ns2:City>
<ns2:StateProvinceCode>ON</ns2:StateProvinceCode>
<ns2:PostalCode>M5V1C1</ns2:PostalCode>
<ns2:CountryCode>CA</ns2:CountryCode>
</ns2:Address>
<ns2:ShipperNumber>535T8T</ns2:ShipperNumber>
<ns2:AttentionName>He-Man</ns2:AttentionName>
<ns2:Phone>
<ns2:Number>6478343039</ns2:Number>
</ns2:Phone>
</ns2:Payer>
<ns2:ShipmentBillingOption>
<ns2:Code>10</ns2:Code>
</ns2:ShipmentBillingOption>
</ns2:PaymentInformation>
<ns2:Service>
<ns2:Code>308</ns2:Code>
</ns2:Service>
<ns2:HandlingUnitOne>
<ns2:Quantity>16</ns2:Quantity>
<ns2:Type>
<ns2:Code>PLT</ns2:Code>
</ns2:Type>
</ns2:HandlingUnitOne>
<ns2:Commodity>
<ns2:CommodityID>22</ns2:CommodityID>
<ns2:Description>These are some fancy widgets!</ns2:Description>
<ns2:Weight>
<ns2:UnitOfMeasurement>
<ns2:Code>LBS</ns2:Code>
</ns2:UnitOfMeasurement>
<ns2:Value>511.25</ns2:Value>
</ns2:Weight>
<ns2:Dimensions>
<ns2:UnitOfMeasurement>
<ns2:Code>IN</ns2:Code>
</ns2:UnitOfMeasurement>
<ns2:Length>1.25</ns2:Length>
<ns2:Width>1.2</ns2:Width>
<ns2:Height>5</ns2:Height>
</ns2:Dimensions>
<ns2:NumberOfPieces>1</ns2:NumberOfPieces>
<ns2:PackagingType>
<ns2:Code>PLT</ns2:Code>
</ns2:PackagingType>
<ns2:CommodityValue>
<ns2:CurrencyCode>USD</ns2:CurrencyCode>
<ns2:MonetaryValue>265.2</ns2:MonetaryValue>
</ns2:CommodityValue>
<ns2:FreightClass>60</ns2:FreightClass>
<ns2:NMFCCommodityCode>566</ns2:NMFCCommodityCode>
</ns2:Commodity>
<ns2:Reference>
<ns2:Number>
<ns2:Code>PM</ns2:Code>
<ns2:Value>1651651616</ns2:Value>
</ns2:Number>
<ns2:NumberOfCartons>5</ns2:NumberOfCartons>
<ns2:Weight>
<ns2:UnitOfMeasurement>
<ns2:Code>LBS</ns2:Code>
</ns2:UnitOfMeasurement>
<ns2:Value>2</ns2:Value>
</ns2:Weight>
</ns2:Reference>
</ns2:Shipment>
</ns2:FreightShipRequest>
</ns1:Body>
</SOAP-ENV:Envelope>
``` | 2012/12/07 | [
"https://Stackoverflow.com/questions/13768118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1365008/"
] | Try this
```
DirectoryInfo dir = new DirectoryInfo(Path.GetFullPath(fp));
lb_Files.Items.Clear();
foreach (FileInfo file in dir.GetFiles())
{
lb_Files.Items.Add(new RadListBoxItem(file.ToString(), file.ToString()));
}
``` | No you cannot cast a `String` object into a `RadListBoxItem`, you must create a `RadListBoxItem` using that string as your Value and Text properties:
So replace this:
```
RadListBoxItem rlb = new RadListBoxItem();
rlb = (RadListBoxItem)file.ToString();
//radListBox
lb_Files.Items.Add(rlb.ToString());
```
With this:
```
lb_Files.Items.Add(new RadListBoxItem
{
Value = file.ToString(),
Text = file.ToString()
});
``` | 862 |
2,100,233 | I have a javascript which takes two variables, i.e. two lists, from django/python: one is a list of numbers and the other a list of strings
```
numbersvar = [0,1,2,3]
stringsvar = ['a','b','c']
```
The numbersvar is rendered perfectly but when I do {{stringsvar}} it does not render it. | 2010/01/20 | [
"https://Stackoverflow.com/questions/2100233",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/228741/"
] | Maybe it would be better to use the **[json](http://www.google.com/url?sa=t&source=web&ct=res&cd=1&ved=0CDkQFjAA&url=http%3A%2F%2Fdocs.python.org%2Flibrary%2Fjson.html&ei=b9pWS7mmO8eK_Abcppz-Aw&usg=AFQjCNG7_WS8ga_dF7-eGjquJarNhC5Eew&sig2=_SZrRNdbAGmbGuW9tVYgHw)** module to create the javascript lists?
```
>>> a = ['stste', 'setset', 'serthjsetj']
>>> b = json.dumps(a)
>>> b
'["stste", "setset", "serthjsetj"]'
>>> json.loads(b)
[u'stste', u'setset', u'serthjsetj']
``` | What does stringsvar contain? The list, or the string representation of the list?
I suggest you pass the correct javascript string representation of the list from the view method to the template to render. Python and javascript array literals have the same syntax, so you could do:
```
def my_view(request):
return render_template("...", stringsvar=str(the_list))
```
And in the template:
```
<script language="javascript">
stringsvar = {{ stringsvar }};
...
</script>
```
Or you can use the json serializer, this has the added benefit that you will be able to pass other kinds of values as well.
```
from django.core import serializers
def my_view(request):
return render_template("...", stringsvar=serializers.serialize("json", the_list))
``` | 863 |
10,393,385 | >
> **Possible Duplicate:**
>
> [Instance variables vs. class variables in Python](https://stackoverflow.com/questions/2714573/instance-variables-vs-class-variables-in-python)
>
>
>
What is the difference between these two situations and how is it treated in Python?
Ex1
```
class MyClass:
anArray = {}
```
Ex2
```
class MyClass:
def __init__(self):
self.anArray = {}
```
It seems like in the first example the array is being treated like a static variable. How does Python treat this and what is the reason for this? | 2012/05/01 | [
"https://Stackoverflow.com/questions/10393385",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/751467/"
] | In the first example, `anArray` (which in Python is called a dictionary, not an array) is a class attribute. It can be accessed using `MyClass.anArray`. It exists as soon as the class is defined.
In the second example, `anArray` is an instance attribute. It can be accessed using `MyClass().anArray`. (But note that doing that just throws away the `MyClass` instance created; a more sensible example is `mc = MyClass(); mc.anArray['a'] = 5`.) It doesn't exist until an instance of the class is created. | They are declared in different areas.
Ex1 is like a global or static variable.
```
obj = MyClass()
obj2 = MyClass()
print "IS one instance ", id(obj.anArray) == id(obj2.anArray)
```
Ex2 is an instance attribute.
```
obj = MyClass()
obj2 = MyClass()
print "IS one instance ", id(obj.anArray) == id(obj2.anArray)
``` | 864 |
46,374,747 | it's kind of very daunting now. I've tried all I could possibly figure out, to no avail.
I am using ElementaryOS Loki, based on Ubuntu 16.04 LTS.
I have `boost 1.65.1` installed under `/usr/local`
I am using `cmake 3.9.3` which is supporting building boost 1.65.0 and forward.
I have tried every possible way to mess with my `CMakeLists.txt`, which as of now, looks like this
```
cmake_minimum_required( VERSION 2.8 FATAL_ERROR )
project( boostpythondemo )
set( Boost_DEBUG ON )
MESSAGE("Boost Debugging is on.")
set( Boost_NO_SYSTEM_PATHS TRUE )
if( Boost_NO_SYSTEM_PATHS)
set( BOOST_ROOT "/usr/local/boost_1_65_1" )
set( BOOST_INCLUDEDIR "/usr/local/boost_1_65_1/boost" )
set( BOOST_LIBRARYDIR "/usr/local/boost_1_65_1/stage/lib" )
endif( Boost_NO_SYSTEM_PATHS )
find_package( PythonLibs 3.6 REQUIRED )
include_directories( ${PYTHON_INCLUDE_DIRS} )
find_package( Boost COMPONENTS python REQUIRED )
if( Boost_FOUND )
MESSAGE("******************************BOOST FOUND*******************")
endif( Boost_FOUND )
include_directories( ${Boost_INCLUDE_DIRS} )
link_directories( ${Boost_LIBRARIES} )
add_library( heyall SHARED heyall.cpp )
add_library( heyall_ext SHARED heyall_ext.cpp )
target_link_libraries( heyall_ext ${BOOST_LIBRARIES} heyall )
set_target_properties( heyall_ext PROPERTIES PREFIX "" )
```
from the [command line output](https://codeshare.io/5M9AzJ) I can see I am setting the boost variables to the correct locations.
However, cmake just can't find boost\_python. I really can't figure out what's going on now; the line that says "BOOST FOUND" never got printed.
here is also the full [cmake output log](https://gist.github.com/stucash/5297f5c03fb447ab89cb119b25e39979).
I built boost with python 3.6.2 which will be used to build boost\_python as well, so this way I can use python 3 against boost\_python.
Has anyone bumped into this before? | 2017/09/23 | [
"https://Stackoverflow.com/questions/46374747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4436572/"
] | Thanks @JohnZwinck for pointing out the obvious overlooked error I had, and @James for sharing his answer. But it seems his answer is for Boost 1.63.0, so I wanted to post a solution here so anyone who's having problems with the latest CMake and Boost Python (up to today) can save some head-scratching time.
Some prep work first: go ahead and download CMake 3.9.3. Please beware that if you are using Boost 1.65.0 or above, you will need to use at least CMake 3.9.3; CMake explicitly bundles support for specific Boost versions, and 3.9.3 is the one shipped with support for 1.65.0 and above.
Otherwise, you may get an error from CMAKE saying
>
> imported targets not available for boost version
>
>
>
**Install Boost 1.65.1 (with python3.6.2)**
download Boost 1.65.1 and extract it under `/usr/local`
you can just follow Boost official guide (getting started guide) to install boost with python2, it should be hassle-free.
but to install boost with python3, you will first need to add a user-config.jam file and specify the python version you want to use to build Boost (Boost Python). You will need to specify the parameter on the command line like James did (`./bootstrap --with-python=Python3`), and add a user-config.jam in your home directory.
firstly, you should create a user-config.jam under your `/home/$USERNAME/` (a subdirectory of `/home`). You can specify your compiler (gcc, clang, etc), and some other stuff, and for us it is the python version.
to create a user-config.jam, you can do
`$ sudo cp /usr/local/boost_1_65_1/tools/build/example/user-config.jam $HOME/user-config.jam`
inside your user-config.jam file, add this line:
`using python : 3.6 : /usr/bin/python3.6 : /usr/include/python3.6 : /usr/lib ;`
replace with your python 3 version.
now we are building and installing Boost 1.65.1
`$ ./bootstrap.sh --prefix=/usr/local --with-python=python3`
`$ ./b2 --install -j 8 # build boost in parallel using all cores available`
once it's finished, make sure you add the following to your `.profile`:
`export INCLUDE="/usr/local/include/boost:$INCLUDE"`
`export LIBRARY_PATH="/usr/local/lib:$LIBRARY_PATH"`
`export LD_LIBRARY_PATH="/usr/local/lib:$LD_LIBRARY_PATH"`
**Setting up CMakeLists.txt**
The one in the question details works just fine; but once you have followed the above steps, a simple CMakeLists.txt like the one below should suffice.
```
cmake_minimum_required( VERSION 2.8 FATAL_ERROR )
project( boostpythondemo )
find_package( PythonLibs 3.6 REQUIRED )
include_directories( ${PYTHON_INCLUDE_DIRS} )
find_package( Boost COMPONENTS python3 REQUIRED )
if( Boost_FOUND )
MESSAGE("********************************FOUND BOOST***********************")
endif( Boost_FOUND )
include_directories( ${Boost_INCLUDE_DIRS} )
link_directories( ${Boost_LIBRARIES} )
add_library( heyall SHARED heyall.cpp )
add_library( heyall_ext SHARED heyall_ext.cpp )
target_link_libraries( heyall_ext ${BOOST_LIBRARIES} heyall )
set_target_properties( heyall_ext PROPERTIES PREFIX "" )
```
Apparently the BOOST\_FOUND message was for debugging; you can safely remove it.
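For reference, a typical out-of-source build with the file above looks like this (the directory name is arbitrary):

```
mkdir build && cd build
cmake ..   # configure; the FOUND BOOST marker prints here if detection works
make       # builds libheyall and the heyall_ext module
```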
now you should just go ahead and build using `cmake` & `make`, as sketched above. | There are some dependencies for both CMake and Boost, so I am removing my old answer and providing a link to the bash script on GitHubGist.
The script can be found [here](https://gist.github.com/JamesKBowler/24228a401230c0279d9d966a18abc9e6)
To run the script first make it executable
```
chmod +x boost_python3_install.sh
```
then run with sudo.
```
sudo ./boost_python3_install.sh
```
enjoy! | 865 |
32,462,512 | I'm trying to create a simple markdown to latex converter, just to learn python and basic regex, but I'm stuck trying to figure out why the below code doesn't work:
```
re.sub (r'\[\*\](.*?)\[\*\]: ?(.*?)$', r'\\footnote{\2}\1', s, flags=re.MULTILINE|re.DOTALL)
```
I want to convert something like:
```
s = """This is a note[*] and this is another[*]
[*]: some text
[*]: other text"""
```
to:
```
This is a note\footnote{some text} and this is another\footnote{other text}
```
this is what I got (from using my regex above):
```
This is a note\footnote{some text} and this is another[*]
[*]: note 2
```
Why is the pattern only matched once?
EDIT:
-----
I tried the following lookahead assertion:
```
re.sub(r'\[\*\](?!:)(?=.+?\[\*\]: ?(.+?)$)', r'\\footnote{\1}', s, flags=re.DOTALL|re.MULTILINE)
# (?!:) is to prevent [*]: from being matched
```
now it matches all the footnotes, however they're not matched correctly.
```
s = """This is a note[*] and this is another[*]
[*]: some text
[*]: other text"""
```
is giving me
```
This is a note\footnote{some text} and this is another\footnote{some text}
[*]: note 1
[*]: note 2
```
Any thoughts about it? | 2015/09/08 | [
"https://Stackoverflow.com/questions/32462512",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4699624/"
] | Just lost a week trying to find a suitable tool for Neo4j. It has somehow gotten more difficult. My experience, updated from the last post here (2015):
Gephi:
2015: Supported Neo4j
2017: Doesn't support Neo4j
Linkurious:
2015: Free
2017: Discontinued and doesn't list the price
Neoclipse:
2017: No updates since 2014. Doesn't work with the current version of Neo4J.
Structr:
Looks promising, but requires a lot of Java knowledge just to get it running. I have lost days on this and still have not successfully installed it.
It does not look good for Neo4J tools. It was actually much better 2 years ago. | There are at least 3 GUI tools for neo4j that allow editing:
* [neoclipse](https://github.com/neo4j-contrib/neoclipse/wiki)
* [Gephi](http://gephi.github.io/)
* [linkurious](http://linkurio.us/)
`neoclipse` and `Gephi` are open source and free. `linkurious` has a free open-source community edition. | 866 |
63,756,753 | I need to be able to run python code on each "node" of the network so that I can test out the code properly. I can't use different port numbers and run the code since I need to handle various other things which kind of force using unique IP addresses. | 2020/09/05 | [
"https://Stackoverflow.com/questions/63756753",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6740018/"
] | In my DHT p2p project, I have a specific object that abstracts the network communication. During testing I mock that object with an object that operates in memory:
```
class MockProtocol:
def __init__(self, network, peer):
self.network = network
self.peer = peer
async def rpc(self, address, name, *args):
peer = self.network.peers[address[0]]
proc = getattr(peer, name)
start = time()
out = await proc((self.peer._uid, None), *args)
delta = time() - start
assert delta < 5, "RPCProtocol allows 5s delay only"
return out
class MockNetwork:
def __init__(self):
self.peers = dict()
def add(self, peer):
peer._protocol = MockProtocol(self, peer)
self.peers[peer._uid] = peer
def choice(self):
return random.choice(list(self.peers.values()))
async def simple_network():
network = MockNetwork()
for i in range(5):
peer = make_peer()
network.add(peer)
bootstrap = peer
for peer in network.peers.values():
await peer.bootstrap((bootstrap._uid, None))
for peer in network.peers.values():
await peer.bootstrap((bootstrap._uid, None))
# run connect, this simulate the peers connecting to an existing
# network.
for peer in network.peers.values():
await peer.connect()
return network
@pytest.mark.asyncio
async def test_dict(make_network):
network = await make_network()
# setup
value = b'test value'
key = peer.hash(value)
# make network and peers
one = network.choice()
two = network.choice()
three = network.choice()
four = network.choice()
# exec
out = await three.set(value)
# check
assert out == key
fallback = list()
for xxx in (one, two, three, four):
try:
out = await xxx.get(key)
except KeyError:
fallback.append(xxx)
else:
assert out == value
for xxx in fallback:
log.warning('fallback for peer %r', xxx)
out = await xxx.get_at(key, three._uid)
assert out == value
``` | I think VMware or VirtualBox can help you. | 872 |
62,813,690 | I am writing a script which will poll Jenkins plugin API to fetch a list of plugin dependencies. For this I have used `requests` module of python. It keeps returning empty response, whereas I am getting a JSON response in Postman.
```
import requests
def get_deps():
url = "https://plugins.jenkins.io/api/plugin/CFLint"
headers = {
"Connection": "keep-alive",
"Accept": "*/*",
"Accept-Encoding": "gzip, deflate, br"
}
reqs = requests.get(url, headers)
return reqs.status_code
return reqs.json()
get_deps()
```
[![Postman_Result](https://i.stack.imgur.com/xQKMG.png)](https://i.stack.imgur.com/xQKMG.png)
The output is as follows.
```
C:\Users\krisT\eclipse-workspace\jenkins>python jenkins.py
C:\Users\krisT\eclipse-workspace\jenkins>
```
Where am I making a mistake? Everything looks correct to me.
---
**Instead of returning, I had to save the response to a variable and print it. My question felt like a noob question.**
```
s = requests.Session()
def get_deps():
url = "https://plugins.jenkins.io/api/plugin/CFLint"
reqs = s.get(url)
res = reqs.json()
print(res)
get_deps()
``` | 2020/07/09 | [
"https://Stackoverflow.com/questions/62813690",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5649739/"
] | What do you think about this code? First I calculate the hash and send it to server A for signature
```
PdfReader reader = new PdfReader(SRC);
FileOutputStream os = new FileOutputStream(TEMP);
PdfStamper stamper = PdfStamper.createSignature(reader, os, '\0');
PdfSignatureAppearance appearance = stamper.getSignatureAppearance();
appearance.setVisibleSignature(new Rectangle(36, 748, 144, 780), 1, "sig");
//appearance.setCertificate(chain[0]);
ExternalSignatureContainer external = new
ExternalBlankSignatureContainer(PdfName.ADOBE_PPKLITE, PdfName.ADBE_PKCS7_DETACHED);
MakeSignature.signExternalContainer(appearance, external, 8192);
InputStream inp = appearance.getRangeStream();
BouncyCastleDigest digest = new BouncyCastleDigest();
byte[] hash = DigestAlgorithms.digest(inp, digest.getMessageDigest("SHA256"));
System.out.println("hash to sign : "+ hash);
bytesToFile(hash, HASH);
byte[] hashdocumentByte = TEST.signed_hash(hash);
PdfReader reader2 = new PdfReader(TEMP);
FileOutputStream os2 = new FileOutputStream(DEST);
ExternalSignatureContainer external2 = new
MyExternalSignatureContainer(hashdocumentByte,null);
MakeSignature.signDeferred(reader2, "sig", os2, external2);
```
And on server B, where I sign the hash:
```
BouncyCastleProvider providerBC = new BouncyCastleProvider();
Security.addProvider(providerBC);
// we load our private key from the key store
KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
ks.load(new FileInputStream(CERTIFICATE), PIN);
String alias = (String)ks.aliases().nextElement();
Certificate[] chain = ks.getCertificateChain(alias);
PrivateKey pk = (PrivateKey) ks.getKey(alias, PIN);
PrivateKeySignature signature = new PrivateKeySignature(pk, "SHA256", null);
BouncyCastleDigest digest = new BouncyCastleDigest();
Calendar cal = Calendar.getInstance();
String hashAlgorithm = signature.getHashAlgorithm();
System.out.println(hashAlgorithm);
PdfPKCS7 sgn = new PdfPKCS7(null, chain, "SHA256", null, digest, false);
byte[] sh = sgn.getAuthenticatedAttributeBytes(hash, null, null, CryptoStandard.CMS);
byte[] extSignature = signature.sign(sh);
System.out.println(signature.getEncryptionAlgorithm());
sgn.setExternalDigest(extSignature, null, signature.getEncryptionAlgorithm());
return sgn.getEncodedPKCS7(hash, null, null, null, CryptoStandard.CMS);
``` | Your `signDocument` method apparently does not accept a pre-calculated hash value but seems to calculate the hash of the data you give it, in your case the (lower case) hex representation of the hash value you already calculated.
In your first example document you have these values (all hashes are SHA256 hashes):
* Hash of the byte ranges to sign:
```
91A9F5EBC4F2ECEC819898824E00ECD9194C3E85E4410A3EFCF5193ED7739119
```
* Hash of `"91a9f5ebc4f2ecec819898824e00ecd9194c3e85e4410a3efcf5193ed7739119".getBytes()`:
```
2F37FE82F4F71770C2B33FB8787DE29627D7319EE77C6B5C48152F6E420A3242
```
* Hash value signed by the embedded signature container:
```
2F37FE82F4F71770C2B33FB8787DE29627D7319EE77C6B5C48152F6E420A3242
```
And in your second example document you have these values (all hashes also are SHA256 hashes):
* Hash of the byte ranges to sign:
```
79793C58489EB94A17C365445622B7F7945972A5A0BC4C93B6444BEDFFA5A5BB
```
* Hash of `"79793c58489eb94a17c365445622b7f7945972a5a0bc4c93b6444bedffa5a5bb".getBytes()`:
```
A8BCBC6F9619ECB950864BFDF41D1B5B7CD33D035AF95570C426CF4B0405949B
```
* Hash value signed by the embedded signature container:
```
A8BCBC6F9619ECB950864BFDF41D1B5B7CD33D035AF95570C426CF4B0405949B
```
Thus, you have to correct your `signDocument` method to interpret the data correctly, or you have to give it a byte array containing the whole range stream to digest. | 873 |
60,468,634 | I'm fairly new to python and am doing some basic code. I need to know if I can repeat my iteration if the answer is not yes or no. Here is the code (sorry to those of you that think I'm doing bad habits). I need the iteration to repeat during else. (The function just outputs text at the moment)
```
if remove_char1_attr1 == 'yes':
char1_attr1.remove(min(char1_attr1))
char1_attr1_5 = random.randint(1,6)
char1_attr1.append(char1_attr1_5)
print("The numbers are now as follows: " +char1_attr1 )
elif remove_char1_attr1 == 'no':
break
else:
incorrect_response()
``` | 2020/02/29 | [
"https://Stackoverflow.com/questions/60468634",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12985589/"
] | Just put the code into a loop:
```
while True:
if remove_char1_attr1 == 'yes':
char1_attr1.remove(min(char1_attr1))
char1_attr1_5 = random.randint(1,6)
char1_attr1.append(char1_attr1_5)
print("The numbers are now as follows: " +char1_attr1 )
elif remove_char1_attr1 == 'no':
break
else:
#incorrect_response()
print("Incorrect")
```
then it will run until `remove_char1_attr1` is "no" | You can try looping while it's not yes or no:
```py
while remove_char1_attr1 not in ('yes', 'no'):
if remove_char1_attr1 == 'yes':
char1_attr1.remove(min(char1_attr1))
char1_attr1_5 = random.randint(1,6)
char1_attr1.append(char1_attr1_5)
print("The numbers are now as follows: " +char1_attr1 )
elif remove_char1_attr1 == 'no':
break
else:
incorrect_response()
``` | 874 |
52,415,096 | I am creating a new object to manage an Azure Resource, using the Azure python packages. While calling it, I get a maximum recursion depth exceeded error; however, if I step through the code in a python shell I don't get this issue. Below is the `__init__` method
```
class WindowsDeployer(object):
def __init__(self, params):
try:
print("executes class init")
self.subscription_id = '{SUBSCRIPTION-ID}'
self.vmName = params["vmName"]
self.location = params["location"]
self.resource_group = "{}-rg".format(self.vmName)
print("sets variables")
# Error is in the below snippet, while calling ServicePrincipalCredentials
self.credentials = ServicePrincipalCredentials(
client_id='{CLIENT-ID}',
secret='{SECRET}',
tenant='{TENANT-ID}'
)
# Does not reach here...
print("creates a credential")
self.client = ResourceManagementClient(self.credentials, self.subscription_id)
```
Instead, it exits with the following message:
`maximum recursion depth exceeded`
I have tried to increase the recursion limit to 10000 and that has not solved the issue.
Pip freeze: `azure==4.0.0 azure-common==1.1.4 azure-mgmt==4.0.0`
Traceback:
```
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/flask/app.py", line 1985, in wsgi_app
response = self.handle_exception(e)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/flask/app.py", line 1540, in handle_exception
reraise(exc_type, exc_value, tb)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/flask/_compat.py", line 33, in reraise
raise value
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/flask/app.py", line 1982, in wsgi_app
response = self.full_dispatch_request()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/flask/app.py", line 1614, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/flask/app.py", line 1517, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/flask/_compat.py", line 33, in reraise
raise value
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/flask/app.py", line 1612, in full_dispatch_request
rv = self.dispatch_request()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/flask/app.py", line 1598, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/Users/valana/Projects/wolfinterface/Code/wolfinterface/app.py", line 87, in wrap
return f(*args, **kwargs)
File "/Users/valana/Projects/wolfinterface/Code/wolfinterface/app.py", line 134, in provision
return provision_page(request, session)
File "/Users/valana/Projects/wolfinterface/Code/wolfinterface/provision.py", line 104, in provision_page
deployer = WindowsDeployer(json.loads(params))
File "/Users/valana/Projects/wolfinterface/Code/wolfinterface/AzureProvision.py", line 30, in __init__
tenant='{TENNANT-ID}'
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/msrestazure/azure_active_directory.py", line 453, in __init__
self.set_token()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/msrestazure/azure_active_directory.py", line 478, in set_token
proxies=self.proxies)
File "/Users/valana/Library/Python/3.6/lib/python/site-packages/requests_oauthlib/oauth2_session.py", line 221, in fetch_token
verify=verify, proxies=proxies)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/sessions.py", line 555, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/Users/valana/Library/Python/3.6/lib/python/site-packages/requests_oauthlib/oauth2_session.py", line 360, in request
headers=headers, data=data, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/sessions.py", line 508, in request
resp = self.send(prep, **send_kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/sessions.py", line 618, in send
r = adapter.send(request, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/adapters.py", line 440, in send
timeout=timeout
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 601, in urlopen
chunked=chunked)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 346, in _make_request
self._validate_conn(conn)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 850, in _validate_conn
conn.connect()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connection.py", line 314, in connect
cert_reqs=resolve_cert_reqs(self.cert_reqs),
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/util/ssl_.py", line 269, in create_urllib3_context
context.options |= options
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 465, in options
super(SSLContext, SSLContext).options.__set__(self, value)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 465, in options
super(SSLContext, SSLContext).options.__set__(self, value)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 465, in options
```
The last line keeps going until it hits the recursion limit | 2018/09/19 | [
"https://Stackoverflow.com/questions/52415096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6781059/"
] | Thanks for the help above. The issue was with my gevent packages (not sure exactly what); however, upgrading gevent and adding the following lines fixed it.
```
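# note: the patching generally has to happen before anything else imports ssl/requests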
import gevent.monkey
gevent.monkey.patch_all()
``` | I had a similar problem when using the `azure-storage-blob` module, and adding the following lines fixed it. I do not know why. It makes me confused.
Exception:
>
> maximum recursion depth exceeded while calling a Python object
>
>
>
Solution:
```
import gevent.monkey
gevent.monkey.patch_all()
``` | 875 |
72,921,087 | Taking this command to start a local server for example, the command includes -m. What is the meaning of -m in general?
```
python3 -m http.server
``` | 2022/07/09 | [
"https://Stackoverflow.com/questions/72921087",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4877535/"
] | From the documentation, which can be invoked using `python3 --help`.
```
-m mod : run library module as a script (terminates option list)
```
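To make that concrete with the question's own example:

```
python3 -m http.server           # executes the stdlib module http.server as a script
python3 -c "import http.server"  # merely imports it; no server is started
```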
Instead of importing the module in another script (like `import <module-name>`), you directly run it as a script. | The -m stands for module-name in Python. | 876 |
27,793,025 | I can't use Java and Python at the same time.
When I set
```
%JAVAHOME%\bin; %PYTHONPATH%;
```
I can use java, but not python. When I set
```
%PYTHONPATH%; %JAVAHOME%\bin;
```
I can use python, but not java.
I'm using windows 7. How can I go about fixing this problem? | 2015/01/06 | [
"https://Stackoverflow.com/questions/27793025",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4422583/"
] | Don't put a space in your `PATH` entries
```
set "PATH=%JAVAHOME%\bin;%PYTHONPATH%;%PATH%"
``` | 1. Select Start, select Control Panel, double-click System, and select the Advanced tab.
2. Click Environment Variables. In the section System Variables, find the PATH environment variable and select it. ...
3. In the Edit System Variable (or New System Variable) window, specify the value of the PATH environment variable.
For more, use this link:
<http://docs.oracle.com/javase/tutorial/essential/environment/paths.html> | 877 |
40,652,793 | I run a bash script which starts a python script to run in the background
```
#!/bin/bash
python test.py &
```
So how can I kill the script with a bash script as well?
I used the following command to kill it, but it output `no process found`:
```
killall $(ps aux | grep test.py | grep -v grep | awk '{ print $1 }')
```
I tried to check the running processes with `ps aux | less` and found that the running script has the command `python test.py`.
Please assist, thank you! | 2016/11/17 | [
"https://Stackoverflow.com/questions/40652793",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6180818/"
] | Use `pkill` command as
```
pkill -f test.py
```
(or) a more fool-proof way using `pgrep` to search for the actual process-id
```
kill $(pgrep -f 'python test.py')
```
Or if more than one instance of the running program is identified and all of them needs to be killed, use [killall(1)](https://linux.die.net/man/1/killall) on Linux and BSD
```
killall test.py
``` | You can use `$!` to get the PID of the last background command.
I would suggest something similar to the following, which also checks whether the process you want to run is already running:
```
#!/bin/bash
if [[ ! -e /tmp/test.py.pid ]]; then # Check if the file already exists
python test.py & #+and if so do not run another process.
echo $! > /tmp/test.py.pid
else
echo -n "ERROR: The process is already running with pid "
cat /tmp/test.py.pid
echo
fi
```
Then, when you want to kill it:
```
#!/bin/bash
if [[ -e /tmp/test.py.pid ]]; then # If the file does not exist, the
kill `cat /tmp/test.py.pid` #+process is not running, so it is
rm /tmp/test.py.pid #+useless to try to kill it.
else
echo "test.py is not running"
fi
```
Of course if the killing must take place some time after the command has been launched, you can put everything in the same script:
```
#!/bin/bash
python test.py & # This does not check if the command
echo $! > /tmp/test.py.pid #+has already been executed. But,
#+would have problems if more than 1
sleep <number_of_seconds_to_wait> #+have been started since the pid file would.
#+be overwritten.
if [[ -e /tmp/test.py.pid ]]; then
kill `cat /tmp/test.py.pid`
else
echo "test.py is not running"
fi
```
If you want to be able to run more commands with the same name simultaneously and be able to kill them selectively, a small edit of the script is needed. Tell me, and I will try to help you!
With something like this you are sure you are killing what you want to kill. Commands like `pkill` or grepping the `ps aux` can be risky. | 880 |
25,065,017 | I'm learning objective c a little bit to write an iPad app. I've mostly done some html5/php projects and learned some python at university. But one thing that really blows my mind is how hard it is to just style some text in an objective C label.
Maybe I'm coming from a lazy markdown generation, but really, if I want to let an UILabel look like:
>
>
> >
> > **Objective:** Construct an *equilateral* triangle from the line segment AB.
> >
> >
> >
>
>
>
In markdown this is as simple as:
`**Objective:** Construct an *equilateral* triangle from the line segment AB.`
Is there really no pain free objective C way to do this ? All the tutorials I read really wanted me to write like 15 lines of code. For something as simple as this.
So my question is, what is the easiest way to do this, if you have a lot of styling to do in your app ? Will styling text become more natural with swift in iOS8 ? | 2014/07/31 | [
"https://Stackoverflow.com/questions/25065017",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2517546/"
] | You can use `NSAttributedString`'s `data:options:documentAttributes:error:` initializer (first available in iOS 7.0 SDK).
```
import UIKit
let htmlString = "<b>Objective</b>: Construct an <i>equilateral</i> triangle from the line segment AB."
let htmlData = htmlString.dataUsingEncoding(NSUTF8StringEncoding)
let options = [NSDocumentTypeDocumentAttribute: NSHTMLTextDocumentType]
var error : NSError? = nil
let attributedString = NSAttributedString(data: htmlData, options: options, documentAttributes: nil, error: &error)
if error == nil {
// we're good
}
```
***Note:*** You might also want to include `NSDefaultAttributesDocumentAttribute` option in the `options` dictionary to provide additional global styling (such as telling not to use Times New Roman).
Take a look into **[NSAttributedString UIKit Additions Reference](https://developer.apple.com/library/ios/documentation/uikit/reference/NSAttributedString_UIKit_Additions/Reference/Reference.html)** for more information. | I faced similar frustrations while trying to use attributed text in Xcode, so I feel your pain. You can definitely use multiple `NSMutableAttributedString`s to get the job done, but this is very rigid.
```
UIFont *normalFont = [UIFont fontWithName:@"..." size:20];
UIFont *boldFont = [UIFont fontWithName:@"..." size:20];
UIFont *italicizedFont = [UIFont fontWithName:@"..." size:20];
NSMutableAttributedString *total = [[NSMutableAttributedString alloc]init];
NSAttributedString *string1 = [[NSAttributedString alloc] initWithString:[NSString stringWithFormat:@"Objective"] attributes:@{NSFontAttributeName:boldFont}];
NSAttributedString *string2 = [[NSAttributedString alloc] initWithString:[NSString stringWithFormat:@": Construct an "] attributes:@{NSFontAttributeName:normalFont}];
NSAttributedString *string3 = [[NSAttributedString alloc] initWithString:[NSString stringWithFormat:@"equilateral "] attributes:@{NSFontAttributeName:italicizedFont}];
NSAttributedString *string4 = [[NSAttributedString alloc] initWithString:[NSString stringWithFormat:@"triangle from the line segment AB."] attributes:@{NSFontAttributeName:normalFont}];
[total appendAttributedString:string1];
[total appendAttributedString:string2];
[total appendAttributedString:string3];
[total appendAttributedString:string4];
[self.someLabel setAttributedText: total];
```
Another option is to use `NSRegularExpression`. While this will require more lines of code, it is a more fluid way of bolding, changing color, etc from an entire string at once. For your purposes however, using the `appendAttributedString` will be the shortest way with a label.
```
UIFont *normalFont = [UIFont fontWithName:@"..." size:20];
UIFont *boldFont = [UIFont fontWithName:@"..." size: 20];
UIFont *italicizedFont = [UIFont fontWithName:@"..." size: 20];
NSMutableAttributedString *attributedString = [[NSMutableAttributedString alloc] initWithString:[NSString stringWithFormat: @"Objective: Construct an equilateral triangle from the line segment AB."] attributes:@{NSFontAttributeName:normalFont}];
NSError *regexError;
NSRegularExpression *regex1 = [NSRegularExpression regularExpressionWithPattern:@"Objective"
options:NSRegularExpressionCaseInsensitive error:&regexError];
NSRegularExpression *regex2 = [NSRegularExpression regularExpressionWithPattern:@"equilateral"
options:NSRegularExpressionCaseInsensitive error:&regexError];
if (!regexError)
{
NSArray *matches1 = [regex1 matchesInString:[attributedString string]
options:0
range:NSMakeRange(0, [[attributedString string] length])];
NSArray *matches2 = [regex2 matchesInString:[attributedString string]
options:0
range:NSMakeRange(0, [[attributedString string] length])];
for (NSTextCheckingResult *aMatch in matches1)
{
NSRange matchRange = [aMatch range];
[attributedString setAttributes:@{NSFontAttributeName:boldFont}
range:matchRange];
}
for (NSTextCheckingResult *aMatch in matches2)
{
NSRange matchRange = [aMatch range];
[attributedString setAttributes:@{NSFontAttributeName:italicizedFont}
range:matchRange];
}
}
[self.someLabel setAttributedText: attributedString];
``` | 887 |
22,071,987 | I haven't been able to find a function to generate an array of random floats of a given length within a certain range.
I've looked at [Random sampling](http://docs.scipy.org/doc/numpy/reference/routines.random.html) but no function seems to do what I need.
[random.uniform](http://docs.python.org/2/library/random.html#random.uniform) comes close but it only returns a single element, not a specific number.
This is what I'm after:
```
ran_floats = some_function(low=0.5, high=13.3, size=50)
```
which would return an array of 50 random non-unique floats (ie: repetitions are allowed) uniformly distributed in the range `[0.5, 13.3]`.
Is there such a function? | 2014/02/27 | [
"https://Stackoverflow.com/questions/22071987",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1391441/"
] | [`np.random.uniform`](https://numpy.org/doc/stable/reference/random/generated/numpy.random.uniform.html) fits your use case:
```
sampl = np.random.uniform(low=0.5, high=13.3, size=(50,))
```
**Update Oct 2019:**
While the syntax is still supported, the API changed with NumPy 1.17 to support greater control over the random number generator. Going forward, you should look at <https://docs.scipy.org/doc/numpy/reference/random/generated/numpy.random.Generator.uniform.html>
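With the newer interface, the same call looks like this (a minimal sketch):

```
import numpy as np

rng = np.random.default_rng()  # Generator-based API, NumPy >= 1.17
sampl = rng.uniform(low=0.5, high=13.3, size=50)
```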
The enhancement proposal is here: <https://numpy.org/neps/nep-0019-rng-policy.html> | This is the simplest way
```
np.random.uniform(start,stop,(rows,columns))
``` | 890 |
8,964,461 | I would like to create a subclass of python's unittest.TestCase called BasicTest. I would like each subclass of BasicTest to run the same routine in main. How can I accomplish this?
Example:
```
in basic_test.py:
class BasicTest(unittest.TestCase):
...
if __name__ == '__main__':
# Do optparse stuff
unittest.main()
in some_basic_test.py:
class SomeBasicTest(BasicTest):
...
if __name__ == '__main__':
#call the main in basic_test.py
``` | 2012/01/22 | [
"https://Stackoverflow.com/questions/8964461",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/766953/"
] | ```
# basic_test.py
class BasicTest(unittest.TestCase):
@staticmethod
def main():
# Do optparse stuff
unittest.main()
if __name__ == '__main__':
BasicTest.main()
# some_basic_test.py
class SomeBasicTest(BasicTest):
...
if __name__ == '__main__':
BasicTest.main()
``` | You cannot (re)import a module as a new `__main__`, thus the `if __name__=="__main__"` code is kind of unreachable.
Dor’s suggestion or something similar seems most reasonable.
However if you have no access to the module in question, you might consider looking at the [runpy.run\_module()](http://docs.python.org/library/runpy.html#runpy.run_module) that executes a module as main. | 900 |
4,879,324 | Suppose I want to include a library:
```
#include <library.h>
```
but I'm not sure it's installed on the system. The usual way is to use a tool like autotools. Is there a simpler way in C++? For example, in python you can handle it with exceptions. | 2011/02/02 | [
"https://Stackoverflow.com/questions/4879324",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/238671/"
] | autotools is the best way to detect at *compile* time. It's very platform-specific, but assuming you're on Linux or similar, [dlopen](http://linux.die.net/man/3/dlopen) is how you check at *runtime*. | As far as I know, there's no way of checking whether a library is installed using code.
However, you could create a bash script that could look for the library in the usual places, like /usr/lib or /usr/local/lib. Also, you could check /etc/ld.so.conf for the folders and then look for the libraries.
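For instance, a rough sketch (`liblibrary.so` is a placeholder name):

```
#!/bin/bash
# look in the usual locations for the shared object
for d in /usr/lib /usr/local/lib; do
    [ -e "$d/liblibrary.so" ] && echo "found in $d"
done
# or query the dynamic loader's cache directly
ldconfig -p | grep -q 'liblibrary\.so' && echo "known to ldconfig"
```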
Or something like that. | 902 |
14,983,015 | I'm trying to write a large python/bash script which converts my html/css mockups to Shopify themes.
One step in this process is changing out all the script sources. For instance:
```
<script type="text/javascript" src="./js/jquery.bxslider.min.js"></script>
```
becomes
```
<script type="text/javascript" src="{{ 'jquery.bxslider.min.js' | asset_url }}"></script>
```
Here is what I have so far:
```
import re
test = """
<script type="text/javascript" src="./js/jquery-1.8.3.min.js"></script>
<!--<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js" type="text/javascript"></script>-->
<script type="text/javascript" src="./js/ie-amendments.js"></script>
<script type="text/javascript" src="./js/jquery.bxslider.min.js"></script>
<script type="text/javascript" src="./js/jquery.colorbox-min.js"></script>
<script type="text/javascript" src="./js/main.js"></script>
"""
out = re.sub( 'src=\"(.+)\"', 'src="{{ \'\\1\' | asset_url }}"', test, flags=re.MULTILINE )
out
```
prints out
```
'\n <script type="text/javascript" src="{{ \'./js/jquery-1.8.3.min.js\' | asset_url }}"></script>\n <!--<script src="{{ \'http://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js" type="text/javascript\' | asset_url }}"></script>-->\n <script type="text/javascript" src="{{ \'./js/ie-amendments.js\' | asset_url }}"></script>\n <script type="text/javascript" src="{{ \'./js/jquery.bxslider.min.js\' | asset_url }}"></script>\n <script type="text/javascript" src="{{ \'./js/jquery.colorbox-min.js\' | asset_url }}"></script>\n <script type="text/javascript" src="{{ \'./js/main.js\' | asset_url }}"></script>\n'
```
I have two problems so far:
1. Some of the backslash characters I'm using to escape the single
quotes within my regexes are showing up in the output.
2. My capture group is capturing the entire original source string, but
I just need what comes after the last "/"
**ANSWER**:
Per Martijn Pieters' helpful suggestion, I checked out the `?` regular expression operator and came up with this solution, which perfectly solved my problem. Also, for the replacement expression, I encapsulated it in double quotes as opposed to singles and escaped the doubles, which ended up removing the unnecessary backslashes. Thanks guys!
```
re.sub( r'src=".+?([^/]+?\.js)"', "src=\"{{ '\\1' | asset_url }}\"", test, flags=re.MULTILINE )
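# e.g. src="./js/main.js"  ->  src="{{ 'main.js' | asset_url }}"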
``` | 2013/02/20 | [
"https://Stackoverflow.com/questions/14983015",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1623223/"
] | You can just go to azure's control panel and add in a virtual directory path.
Please visit this MSDN blog to see how it's done.
<http://blogs.msdn.com/b/kaushal/archive/2014/04/19/microsoft-azure-web-sites-deploying-wordpress-to-a-virtual-directory-within-the-azure-web-site.aspx> | If you use Web deploy publish method you can set in Site name `mydomain/mymvcsite` instead of `mydomain`.
At least it works for me for default windows azure site `http://mydomain.azurewebsites.net/mymvcsite`.
Or you can use FTP publish method. | 903 |
52,241,986 | I have the following code to add a table of contents to the beginning of my ipython notebooks. When I run the cell on jupyter on my computer I get
[![enter image description here](https://i.stack.imgur.com/ESBlz.png)](https://i.stack.imgur.com/ESBlz.png)
But when I upload the notebook to github and choose to view the notebook, I see this for the first cell.
[![enter image description here](https://i.stack.imgur.com/RSk4Z.png)](https://i.stack.imgur.com/RSk4Z.png)
Is there a way to enforce that this javascript line runs like the first picture when on github? | 2018/09/09 | [
"https://Stackoverflow.com/questions/52241986",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10171138/"
] | Attachments (and rich text) are not yet supported by the POST /chatthreads API. The only way to post messages with attachments today is with our bot APIs.
We are working on write APIs to match our recently-released read APIs but they aren't ready yet. There's no need to put anything on UserVoice though.
Unfortunately I don't have a date to share, but we are actively working on them. | >
> APIs under the /beta version in Microsoft Graph are in preview and are subject to change. Use of these APIs in production applications is not supported.
>
>
>
The returned value of **attachment** in the document is the embodiment of the product group design, and we cannot get the value should be the product group is still developing and improvementing the API. So there are no other workground at this moment.
For adding, no official docs have announced we can add attachments by Graph API. And based on my test, the try all failed too. So we need to submit a feature request in [UserVocie](https://officespdev.uservoice.com/) for directly way or research by ourselves for a none-official workaround. | 906 |
22,521,912 | I'm in the directory `/backbone/` which has a `main.js` file within scripts. I run `python -m SimpleHTTPServer` from the `backbone` directory and display it in the browser and the console reads the error `$ is not defined` and references a completely different `main.js` file from something I was working on days ago with a local python server.
I am new to this and don't have an idea what's going on. Would love some suggestions if you have time. | 2014/03/20 | [
"https://Stackoverflow.com/questions/22521912",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2066353/"
] | I had the same problem and solved it by going to Settings >> Location >> Mode >> Battery saving, then restarting the device and setting up the app again. | Google Maps now uses this to enable the My Location layer on the map.
```
mMap.setMyLocationEnabled(true);
```
You can view the documentation for Google Maps Android API v2 [here](https://developers.google.com/maps/documentation/android/location).
They're using the Location Client now for making your app location-aware;
you can also refer to this [tutorial](http://developer.android.com/training/location/index.html) for more information. | 907 |
7,033,192 | ```
#!/usr/bin/python
import os,sys
from os import path
input = open('/home/XXXXXX/ERR001268_1', 'r').read().split('\n')
at = 1
for lines in range(0, len(input)):
line1 = input[lines]
line4 = input[lines+3]
num1 = line1.split(':')[4].split()[0]
num4 = line4.split(':')[4].split()[0]
print num1,num4
at += 1
```
However I got the error: list index out of range
What's the problem here?
btw, besides `"at +=1"`, is there any other way to finish this cycle loop?
thx | 2011/08/11 | [
"https://Stackoverflow.com/questions/7033192",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/815408/"
] | The problem is that `lines` has a maximum value of `len(input)-1` but then you let `line4` be `lines + 3`. So, when you're at your last couple of lines, `lines + 3` will be larger than the length of the list.
```
for lines in range(0, len(input)):
line1 = input[lines]
line4 = input[lines+3]
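# overruns here: once lines > len(input)-4, input[lines+3] is past the end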
num1 = line1.split(':')[4].split()[0]
num4 = line4.split(':')[4].split()[0]
print num1,num4
``` | It seems that you want to read a file and get some info from it every 3 lines. I would recommend something simpler:
```
def get_num(line):
return line.split(':')[4].split()[0]
nums1 = [get_num(l) for l in open(fn, "r").readlines()]
nums2 = nums1[3:]
for i in range(len(nums2)):
print nums1[i],nums2[i]
```
The last 3 numbers of nums1 won't be written. The variable `at` does not do anything in your code. | 908 |
34,406,393 | I try to make a script allowing to loop through a list (*tmpList = openFiles(cop\_node)*). This list contains 5 other sublists of 206 components.
The last 200 components of the sublists are string numbers ( a line of 200 string numbers for each component separated with a space character).
I need to loop through the main list and create a new list of 5 components, each new component containing the 200\*200 values in float.
My actual code is try to add a second loop to an older code working with the equivalent of one sublist. But python return an error *"Index out of range"*
```
def valuesFiles(cop_node):
tmpList = openFiles(cop_node)
valueList = []
valueListStr = []*len(tmpList)
for j in range (len(tmpList)):
tmpList = openFiles(cop_node)[j][6:]
tmpList.reverse()
for i in range (len(tmpList)):
splitList = tmpList[i].split(' ')
valueListStr[j].extend(splitList)
#valueList.append(float(valueListStr[j][i]))
return(valueList)
``` | 2015/12/21 | [
"https://Stackoverflow.com/questions/34406393",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5698216/"
] | `valueListStr = []*len(tmpList)` does not do what you think it does; if you want a list of lists, use a *list comprehension* with range:
```
valueListStr = [[] for _ in range(len(tmpList))]
```
That will create a list of lists:
```
In [9]: valueListStr = [] * i
In [10]: valueListStr
Out[10]: []
In [11]: valueListStr = [[] for _ in range(i)]
In [12]: valueListStr
Out[12]: [[], [], [], []]
```
So the reason you get an error is `valueListStr[j].extend(splitList)`: you cannot index into an empty list.
You don't actually seem to return the list anywhere, so I presume you want to return it. You can also just create lists inside the loop as needed, and loop directly over `openFiles(cop_node)`:
```
def valuesFiles(cop_node):
valueListStr = []
for j in openFiles(cop_node):
tmpList = j[6:]
tmpList.reverse()
tmp = []
for s in tmpList:
tmp.extend(s.split(' '))
valueListStr.append(tmp)
return valueListStr
```
Which using `itertools.chain` can become:
```
from itertools import chain
def values_files(cop_node):
return [list(chain(*(s.split(' ') for s in reversed(sub[6:]))))
for sub in openFiles(cop_node)]
``` | ```
def valuesFiles(cop_node):
valueListStr = []
for j in openFiles(cop_node):
tmpList = j[6:]
tmpList.reverse()
tmp = []
for s in tmpList:
tmp.extend(s.split(' '))
valueListStr.append(tmp)
return valueListStr
```
After a little modification I got it to work as expected:
```
def valuesFiles(cop_node):
valueList = []
for j in range (len(openFiles(cop_node))):
tmpList = openFiles(cop_node)[j][6:]
tmpList.reverse()
tmpStr =[]
for s in tmpList:
tmpStr.extend(s.split(' '))
tmp = []
for t in tmpStr:
tmp.append(float(t))
valueList.append(tmp)
return(valueList)
```
I don't understand why, but the first version didn't work: at the end I had empty lists, like so: [[],[],[],[],[]]. That's why I changed the beginning. Finally, I converted the strings to floats. | 912 |
2,063,124 | I am trying to read a \*.wav file using scipy. I do it in the following way:
```
import scipy.io
x = scipy.io.wavfile.read('/usr/share/sounds/purple/receive.wav')
```
As a result I get the following error message:
```
Traceback (most recent call last):
File "test3.py", line 1, in <module>
import scipy.io
File "/usr/lib/python2.5/site-packages/scipy/io/__init__.py", line 23, in <module>
from numpy.testing import NumpyTest
ImportError: cannot import name NumpyTest
```
Does anybody know why scipy cannot import NumpyTest and how it can be fixed? | 2010/01/14 | [
"https://Stackoverflow.com/questions/2063124",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/245549/"
] | Looks like you have upgraded your numpy version but haven't installed a [corresponding scipy version](http://projects.scipy.org/scipy/ticket/916). | Do you have numpy installed? The package is most likely called `numpy` or `python-numpy` if you are running Linux.
If your OS package manager does not have a numpy package, download it from [here](http://sourceforge.net/projects/numpy/files/). | 913 |
49,016,216 | Is there a way to know which element has failed the `any` built-in function?
I was trying to solve [Euler 5](https://projecteuler.net/problem=5) and I want to find for which numbers my product isn't evenly divisible. Using the for loop it's easy to figure it out, but is it possible with `any` also?
```
from operator import mul
primes_under_20 = [2,3,5,7,11,13,17,19]
product = reduce(mul, primes_under_20, 1)
if any((product % i != 0 for i in range(1,21))):
print "not evenly divisible"
# how can I find out that 4 was the element that failed?
# must I use this for loop?
for i in range(1,21):
if product % i != 0:
print i # prints 4
break
```
I read [here](https://docs.python.org/2/library/functions.html#any) that `any` is equivalent to:
```
def any(iterable):
for element in iterable:
if element:
return True
return False
```
but is this the exact implementation, or is there a hiding `yield` there, or something like this that can help lock on the element? | 2018/02/27 | [
"https://Stackoverflow.com/questions/49016216",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3512538/"
] | Is there any good reason to use `any`?
If you want a one-liner to find out which numbers are not evenly divisible:
```
not_divisible = [i for i in range(1, 21) if product % i != 0]
if len(not_divisible) > 0:
print(not_divisible)
```
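(For the prime product in the question this prints `[4, 8, 9, 12, 16, 18, 20]`; that is my own arithmetic, so treat it as a sanity check rather than part of the original answer.)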
You can't really get all the non-divisible numbers with `any`, since it stops as soon as it finds the first truthy value in the iterable. | I probably wouldn't recommend actually doing this, as it feels a bit hacky (and uglier than just scrapping the `any()` for a `for` loop). That disclaimer aside, this could technically be accomplished by exploiting an iterator and `any()`'s property of stopping once it's found a truthy value:
```
rangemax = 21
rng = iter(range(1, rangemax))
if any(product % i != 0 for i in rng):
    print "not evenly divisible"
    # any() consumed the iterator up to and including the failing value,
    # so the next item minus one is the element that failed...
    try:
        print next(rng) - 1
    except StopIteration:
        # ...unless the failing element was the last one in the range
        print rangemax - 1
```
This creates an iterator based on the `range()`, then runs `any()`, and if it evaluates `True`, you check the next item in the iterator (or the max number if you're at the end of the iterator) and subtract one. | 914 |
6,712,051 | I'm trying to do something that seems very simple, and falls within the range of standard python. The following function takes a collection of sets, and returns all of the items that are contained in two or more sets.
To do this, while the collection of sets is not empty, it simply pops one set out of the collection, intersects it with the remaining sets, and updates a set of items that fall in one of these intersections.
```
def cross_intersections(sets):
in_two = set()
sets_copy = copy(sets)
while sets_copy:
comp = sets_copy.pop()
for each in sets_copy:
new = comp & each
print new, # Print statements to show that these references exist
print in_two
in_two |= new #This is where the error occurs in IronPython
return in_two
```
Above is the function I'm using. To test it, in CPython, the following works:
```
>>> a = set([1,2,3,4])
>>> b = set([3,4,5,6])
>>> c = set([2,4,6,8])
>>> cross = cross_intersections([a,b,c])
set([2, 4]) set([])
set([4, 6]) set([2, 4])
set([3, 4]) set([2, 4, 6])
>>> cross
set([2, 3, 4, 6])
```
However, when I try to use IronPython:
```
>>> b = cross_intersections([a,b,c])
set([2, 4]) set([])
set([4, 6]) set([2, 4])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "c:/path/to/code.py", line 10, in cross_intersections
SystemError: Object reference not set to an instance of an object.
```
In the title I said this was a mysterious null pointer exception. I probably have no idea how .NET handles null pointers (I've never worked with a C-like language, and have only been using IronPython for a month or so), but if my understanding is correct, it occurs when you attempt to access some property of an object that points to `null`.
In this case, the error occurs at line 10 of my function: `in_two |= new`. However, I've put `print` statements right before this line that (at least to me) indicate that neither of these objects point to `null`.
Where am I going wrong? | 2011/07/15 | [
"https://Stackoverflow.com/questions/6712051",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/173292/"
] | [It's a bug](http://ironpython.codeplex.com/workitem/30386). It will be fixed in 2.7.1, but I don't think the fix is in the 2.7.1 Beta 1 release. | This is a [bug](http://ironpython.codeplex.com/workitem/30386) still present in the 2.7.1 Beta 1 release.
It has been fixed in [master](https://github.com/IronLanguages/main), and the fix will be included in the next release.
```
IronPython 3.0 (3.0.0.0) on .NET 4.0.30319.235
Type "help", "copyright", "credits" or "license" for more information.
>>> import copy
>>>
>>> def cross_intersections(sets):
... in_two = set()
... sets_copy = copy.copy(sets)
... while sets_copy:
... comp = sets_copy.pop()
... for each in sets_copy:
... new = comp & each
... print new, # Print statements to show that these references exist
... print in_two
... in_two |= new # This is where the error occurs in IronPython
... return in_two
...
>>>
>>> a = set([1,2,3,4])
>>> b = set([3,4,5,6])
>>> c = set([2,4,6,8])
>>>
>>> cross = cross_intersections([a,b,c])
set([2, 4]) set([])
set([4, 6]) set([2, 4])
set([3, 4]) set([2, 4, 6])
``` | 915 |
3,400,847 | I'm a mechanical engineering student, and I'm building a physical simulation using PyODE.
instead of running everything from one file, I wanted to organize stuff in modules so I had:
* main.py
* callback.py
* helper.py
I ran into problems when I realized that helper.py needed to reference variables from main, but main was the one importing helper!
so my solution was to create a 4th file, which houses variables and imports only external modules (such as time and random).
so I now have:
* main.py
* callback.py
* helper.py
* parameters.py
and all scripts have: `import parameters` and use: `parameters.foo` or `parameters.bar`.
Is this an acceptable practice or is this a sure fire way to make python programmers puke? :)
Please let me know if this makes sense, or if there is a more sensible way of doing it!
Thanks,
-Leav | 2010/08/03 | [
"https://Stackoverflow.com/questions/3400847",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/216547/"
] | Uhm, I think it doesn't make sense if this happens: "realized that helper.py needed to reference variables from main". Your helper functions should be independent of your "main code"; otherwise I think it's ugly and more of a design failure. | I'm not too sure if that's good practice, but if you use classes, I don't see why there should be a problem. Or am I missing something?
If you want to be able to run each script independently too, and that's what is keeping you from going object-oriented, then you could do something like the following at the end of your script.
```
if __name__ == '__main__':
# Code that you want to run when the script is executed.
# This block will not be executed if the script is imported.
```
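For what it's worth, a minimal sketch of the shared-parameters-module pattern the question describes (the variable names are made up for illustration):

```
# parameters.py -- holds shared state; imports only external modules
import random

gravity = 9.81            # hypothetical simulation constant
jitter = random.random()  # hypothetical shared value

# helper.py -- reads the shared state through the module,
# which avoids a circular import back into main.py
import parameters

def scaled_gravity(factor):
    return parameters.gravity * factor
```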
Read more about classes in Python [here](http://docs.python.org/tutorial/classes.html). | 916 |
42,292,272 | How do I identify the link? I have inspected the elements, which are as below:
```
<div class="vmKOT" role="navigation">
<a class="Ml68il" href="https://www.google.com" aria-label="Search" data-track-as="Welcome Header Search"></a>
<a class="WaidDw" href="https://mail.google.com" aria-label="Mail" data-track-as="Welcome Header Mail"></a>
<a class="a4KP9d" href="https://maps.google.com" aria-label="Maps" data-track-as="Welcome Header Maps"></a>
<a class="QJOPee" href="https://www.youtube.com" aria-label="YouTube" data-track-as="Welcome Header YouTube"></a>
</div>
```
I want to identify the class `WaidDw` or `href` and `click` it using `python`. | 2017/02/17 | [
"https://Stackoverflow.com/questions/42292272",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7579389/"
] | You can try
```
driver.find_element_by_class_name('WaidDw').click()
```
or
```
driver.find_element_by_xpath('//a[@href="https://mail.google.com" and @aria-label="Mail"]').click()
``` | In your provided HTML all attribute values are unique, so you can easily locate that element by its attribute values.
Since your question is about locating this `<a class="WaidDw" href="https://mail.google.com" aria-label="Mail" data-track-as="Welcome Header Mail"></a>` element, I'm providing multiple CSS selectors, any of which can identify it :-
* `a.WaidDw`
* `a.WaidDw[href='https://mail.google.com']`
* `a.WaidDw[aria-label='Mail']`
* `a.WaidDw[data-track-as='Welcome Header Mail']`
* `a.WaidDw[href='https://mail.google.com'][aria-label='Mail']`
* `a.WaidDw[href='https://mail.google.com'][aria-label='Mail'][data-track-as='Welcome Header Mail']`
---
**Note :-** Prefer `cssSelector` over `xpath` wherever possible, because [`cssSelectors` perform far better than `xpath`](https://stackoverflow.com/questions/16788310/what-is-the-difference-between-css-selector-xpath-which-is-betteraccording-t)
---
[Locating Element by CSS Selectors using python](http://selenium-python.readthedocs.io/locating-elements.html#locating-elements-by-css-selectors) :-
```
element = driver.find_element_by_css_selector('use any one of the given above css selector')
```
[Clicks the element :-](http://selenium-python.readthedocs.io/api.html#selenium.webdriver.remote.webelement.WebElement.click)
```
element.click()
```
---
**Reference link :-**
* <https://www.w3schools.com/cssref/css_selectors.asp>
* <https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Selectors> | 925 |
36,244,077 | So here is a breakdown of the task:
1) I have a 197x10 2D numpy array. I scan through this and identify specific cells that are of interest (criteria that goes into choosing these cells is not important.) These cells are not restricted to one specific area of the matrix.
2) I have 3247 other 2D Numpy arrays with the same dimension. For a single one of these other arrays, I need to take the cell locations of interest specified by step 1) and take the average of all of these (sum them all together and divide by the number of cell locations of interest.)
3) I need to repeat 2) for each of the other 3246 remaining arrays.
What is the best/most efficient way to "mark" the cells of interest and look at them quickly in the 3247 arrays?
**--sample on smaller set--**
Let's say given a 2x2 array:
[1, 2]
[3, 4]
Perhaps the cells of interest are the ones that contain 1 and 4. Therefore, for the following arrays:
[5, 6]
[7, 8]
and
[9, 10]
[11, 12]
I would want to take (5+8)/2 and record that somewhere.
I would also want to take (9+12)/2 and record that somewhere.
**EDIT**
Now if I wanted to find these cells of interest in a pythonic way (using Numpy) with the following criteria:
-start at the first row and check the first element
-continue to go down rows in that column marking elements that satisfy condition
-Stop on the first element that does not satisfy the condition and then go to the next column.
So basically now I want to just keep the row-wise (for a specific column) contiguous cells that are of interest. So for 1), if the array looks like:
```
[1 2 3]
[4 5 6]
[7 8 9]
```
And 1,4, 2, 8, and 3 were of interest, I'd only mark 1, 4, 2, 3, since 5 disqualifies 8 as being included. | 2016/03/27 | [
"https://Stackoverflow.com/questions/36244077",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3973851/"
] | Pythonic way:
```
answers = []
# argwhere gives an (N, 2) array of [row, col] index pairs where the condition is met
idx = np.argwhere( your condition on the (1) matrix comes here )
for array2d in your_3247_arrays:
    # index the rows and columns separately to pick out exactly those cells
    answer = array2d[idx[:, 0], idx[:, 1]].mean()
    answers.append(answer)
print(answers)
``` | Here is an example:
```
import numpy as np
A = np.random.rand(197, 10)        # stand-in for the criteria array from step 1
B = np.random.rand(3247, 197, 10)  # the 3247 other arrays, stacked along axis 0
loc = np.where(A > 0.9)            # (rows, cols) of the cells of interest
B[:, loc[0], loc[1]].mean(axis=1)  # mean over those cells, one value per array
``` | 926 |
47,540,186 | I want to dynamically start clusters from my Jupyter notebook for specific functions. While I can start the cluster and get the engines running, I am having two issues:
(1) I am unable to run the ipcluster command in the background. When I run the command through the notebook, the cell keeps running for as long as the clusters are running, i.e. I can't run further cells in the same notebook. I can use the engines once they are fired in a different notebook. How can I run ipcluster in the background?
(2) My code is always starting 8 engines, regardless of the setting in ipcluster\_config.py.
Code:
```
server_num = 3
ip_new = '10.0.1.' + str(10+server_num)
cluster_profile = "ssh" + str(server_num)
import commands
import time
commands.getstatusoutput("ipython profile create --parallel --profile=" + cluster_profile)
text = """
c.SSHEngineSetLauncher.engines = {'""" +ip_new + """' : 12}
c.LocalControllerLauncher.controller_args = ["--ip=10.0.1.163"]
c.SSHEngineSetLauncher.engine_cmd = ['/home/ubuntu/anaconda2/pkgs/ipyparallel-6.0.2-py27_0/bin/ipengine']
"""
with open("/home/ubuntu/.ipython/profile_" + cluster_profile + "/ipcluster_config.py", "a") as myfile:
myfile.write(text)
result = commands.getstatusoutput("(ipcluster start --profile='"+ cluster_profile+"' &)")
time.sleep(120)
print(result[1])
``` | 2017/11/28 | [
"https://Stackoverflow.com/questions/47540186",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4059923/"
] | When I saw your question sitting unanswered on StackOverflow, I almost had a heart attack because I had the same problem.
But running the
```
ipcluster start --help
```
command showed this:
```
--daemonize
```
This makes it run in the background.
So in your notebook you can do this:
```
no_engines = 6
!ipcluster start -n {no_engines} --daemonize
```
**Note:** This does not work on Windows according to
```
ipcluster start --help
``` | I am not familiar with the details of the `commands` module (it's been deprecated since 2.6, according to <https://docs.python.org/2/library/commands.html>) but I know that with the `subprocess` module capturing output will make the make the interpreter block until the system call completes.
Also, the number of engines can be set from the command line if you're using the `ipcluster` command, even without adjusting the configuration files. So, something like this worked for me:
```
from ipyparallel import Client
import subprocess
nengines = 3 # or whatever
subprocess.Popen(["ipcluster", "start", "-n={:d}".format(nengines)])
rc = Client()
# send your jobs to the engines; when done do
subprocess.Popen(["ipcluster", "stop"])
```
This doesn't, of course, address the issue of adding or removing hosts dynamically (which from your code it looks like you may be trying to do), but if you only care how many hosts are available, and not which ones, you can make a default ipcluster configuration which includes all of the possible hosts, and allocate them as needed via code similar to the above.
Note also that it can take a second or two for ipcluster to spin up, so you may want to add a `time.sleep` call between your first `subprocess.Popen` call and trying to spawn the client. | 927 |
40,839,114 | I tried to fine-tune VGG16 on my dataset, but got stuck on opening the h5py file of the VGG16 weights. I don't understand what this error means:
```
OSError: Unable to open file (Truncated file: eof = 221184, sblock->base_addr = 0, stored_eoa = 58889256)
```
Does anyone know how to fix it? Thanks.
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-3-6059faca8ed7> in <module>()
9 K.set_session(sess)
10 input_tensor=Input(shape=(h,w,ch))
---> 11 base_model=VGG16(input_tensor=input_tensor, include_top=False)
12 x_img=base_model.output
13 x_img=AveragePooling2D((7,7))(x_img)
/Users/simin/anaconda/envs/IntroToTensorFlow/lib/python3.5/site-packages/keras/applications/vgg16.py in VGG16(include_top, weights, input_tensor)
144 TF_WEIGHTS_PATH_NO_TOP,
145 cache_subdir='models')
--> 146 model.load_weights(weights_path)
147 if K.backend() == 'theano':
148 convert_all_kernels_in_model(model)
/Users/simin/anaconda/envs/IntroToTensorFlow/lib/python3.5/site-packages/keras/engine/topology.py in load_weights(self, filepath, by_name)
2492 '''
2493 import h5py
-> 2494 f = h5py.File(filepath, mode='r')
2495 if 'layer_names' not in f.attrs and 'model_weights' in f:
2496 f = f['model_weights']
/Users/simin/anaconda/envs/IntroToTensorFlow/lib/python3.5/site-packages/h5py/_hl/files.py in __init__(self, name, mode, driver, libver, userblock_size, swmr, **kwds)
270
271 fapl = make_fapl(driver, libver, **kwds)
--> 272 fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
273
274 if swmr_support:
/Users/simin/anaconda/envs/IntroToTensorFlow/lib/python3.5/site-packages/h5py/_hl/files.py in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
90 if swmr and swmr_support:
91 flags |= h5f.ACC_SWMR_READ
---> 92 fid = h5f.open(name, flags, fapl=fapl)
93 elif mode == 'r+':
94 fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)
h5py/_objects.pyx in h5py._objects.with_phil.wrapper (/Users/ilan/minonda/conda-bld/work/h5py/_objects.c:2696)()
h5py/_objects.pyx in h5py._objects.with_phil.wrapper (/Users/ilan/minonda/conda-bld/work/h5py/_objects.c:2654)()
h5py/h5f.pyx in h5py.h5f.open (/Users/ilan/minonda/conda-bld/work/h5py/h5f.c:1942)()
OSError: Unable to open file (Truncated file: eof = 221184, sblock->base_addr = 0, stored_eoa = 58889256)
``` | 2016/11/28 | [
"https://Stackoverflow.com/questions/40839114",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7218907/"
] | There is a possibility that the download of the file has failed.
Replacing a file that failed to open with the following file may resolve it.
<https://github.com/fchollet/deep-learning-models/releases>
In my case, the file was at the following path:
C:\Users\MyName\.keras\models\vgg16\_weights\_tf\_dim\_ordering\_tf\_kernels.h5
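As an aside (not from the original answer), a minimal sketch that deletes the cached file so Keras re-downloads it on next use (the path is assumed to follow the default cache layout above):

```
import os

path = os.path.expanduser('~/.keras/models/vgg16_weights_tf_dim_ordering_tf_kernels.h5')
if os.path.exists(path):
    os.remove(path)  # Keras will fetch a fresh copy the next time VGG16 is loaded
```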
I replaced it and that solved it. | This happens because the last download of the file failed, but the bad file remains at the file path, so you have to find that bad file.
On a Unix system you can use `find / -name 'vgg16\_weights\_tf\_dim\_ordering\_tf\_kernels\_notop.h5'` to find the path.
Then delete it and try again. Good luck! | 928 |
73,336,040 | I'm calling `subprocess.Popen` on an `exe` file in [this script](https://github.com/PolicyEngine/openfisca-us/blob/6ae4e65f6883be598f342c445de1d52430db6b28/openfisca_us/tools/dev/taxsim/generate_taxsim_tests.py#L146), and it [throws](https://gist.github.com/MaxGhenis/b0eb890232363ed30efc1be505e1f257#file-gistfile1-txt-L249):
>
> OSError: [Errno 8] Exec format error: '/Users/maxghenis/PolicyEngine/openfisca-us/openfisca\_us/tools/dev/taxsim/taxsim35.exe'
>
>
>
The [answer](https://stackoverflow.com/a/30551364/1840471) to [subprocess.Popen(): OSError: [Errno 8] Exec format error in python?](https://stackoverflow.com/q/26807937/1840471) suggests adding `#!/bin/sh` to the top of the shell script. What's an analog to that for calling an exe instead of a shell script?
I'm on a Mac, and the file it's running is <https://taxsim.nber.org/stata/taxsim35/taxsim35-unix.exe>. | 2022/08/12 | [
"https://Stackoverflow.com/questions/73336040",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1840471/"
] | `subprocess` can only start programs that your operating system knows how to execute.
Your `taxsim35-unix.exe` is a Linux executable; macOS cannot run it.
You'll need to either use a Linux machine to run this executable (real or virtual), or get a version compiled for Mac. <https://back.nber.org/stata//taxsim35/taxsim35-osx.exe> is likely to be the latter. | I worked with the developers of taxsim for a while. I believe that the .exe files generated by taxsim were originally msdos only and then moved to Linux. They're not intended to be run by MacOS. I don't know if they ever released the FORTRAN code that generates them so that they can be run on MacOS. Your best bet for running taxsim (since there's no close open-source substitute) is to spin up an AWS or other Linux server. | 930 |
48,590,488 | Beginner with python - I'm looking to create a dictionary mapping of strings, and the associated value. I have a dataframe and would like create a new column where if the string matches, it tags the column as x.
```
df = pd.DataFrame({'comp':['dell notebook', 'dell notebook S3', 'dell notepad', 'apple ipad', 'apple ipad2', 'acer chromebook', 'acer chromebookx', 'mac air', 'mac pro', 'lenovo x4'],
'price':range(10)})
```
For example, I would like to take the above `df` and create a new column `df['company']`, setting it from a mapping of strings.
I was thinking of doing something like
```
product_map = {'dell':'Dell Inc.',
'apple':'Apple Inc.',
'acer': 'Acer Inc.',
'mac': 'Apple Inc.',
'lenovo': 'Dell Inc.'}
```
Then I wanted to iterate through it to check the `df.comp` column and see if each entry contained one of those strings, and to set the `df.company` column to the value in the dictionary.
Not sure how to do this correctly though. | 2018/02/02 | [
"https://Stackoverflow.com/questions/48590488",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7237997/"
] | There are many ways to do this. One way to do it would be the following:
```
def like_function(x):
group = "unknown"
for key in product_map:
if key in x:
group = product_map[key]
break
return group
df['company'] = df.comp.apply(like_function)
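# An alternative sketch (my assumption, not part of the original answer): join the
# keys into one regex so pandas does the matching without a Python-level loop:
#   pattern = '(' + '|'.join(product_map) + ')'
#   df['company'] = df.comp.str.extract(pattern, expand=False).map(product_map)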
``` | A vectorized solution inspired by [MaxU](https://stackoverflow.com/users/5741205/maxu)'s solution to a [similar problem](https://stackoverflow.com/questions/48510405/pandas-python-datafame-update-a-column/48510563).
```
x = df.comp.str.split(expand=True)
df['company'] = None
df['company'] = df['company'].fillna(x[x.isin(product_map.keys())]\
.ffill(axis=1).bfill(axis=1).iloc[:, 0])
df['company'].replace(product_map, inplace=True)
print(df)
# comp price company
#0 dell notebook 0 Dell Inc.
#1 dell notebook S3 1 Dell Inc.
#2 dell notepad 2 Dell Inc.
#3 apple ipad 3 Apple Inc.
#4 apple ipad2 4 Apple Inc.
#5 acer chromebook 5 Acer Inc.
#6 acer chromebookx 6 Acer Inc.
#7 mac air 7 Apple Inc.
#8 mac pro 8 Apple Inc.
#9 lenovo x4 9 Dell Inc.
``` | 931 |
57,404,906 | I have created an executable via pyinstaller. While running the exe, I found the following error from pandas.
```
Traceback (most recent call last):
File "score_python.py", line 3, in <module>
import pandas as pd, numpy as np
File "d:\virtual\sc\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 627, in exec_module
exec(bytecode, module.__dict__)
File "site-packages\pandas\__init__.py", line 23, in <module>
File "d:\virtual\sc\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 627, in exec_module
exec(bytecode, module.__dict__)
File "site-packages\pandas\compat\__init__.py", line 32, in <module>
ImportError: No module named 'distutils'
```
Has anyone found the same? | 2019/08/08 | [
"https://Stackoverflow.com/questions/57404906",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4949165/"
] | This is an issue with virtualenv from version 16.4.0 onward, as indicated in the following issue on github:
<https://github.com/pyinstaller/pyinstaller/issues/4064>
These workarounds were suggested:
1. In the .spec file, at the line “hiddenimports=[]”, change to "hiddenimports=['distutils']", then run pyinstaller using the spec file.
I tried this, but it didn't work in my case; the distutils module could then be found, but it threw an error while importing the module.
2. Downgrade virtualenv to an earlier version.
I downgraded virtualenv to version 16.1.0 and recreated the bundle. The new executable worked fine in my case. | Found the solution: it's because of the virtual environment.
The error occurred because a new virtual environment was created while creating the project. I deleted my existing virtualenv and created a new one by setting up the Python interpreter and choosing the `pre-existing interpreter` option.
The IDE creates a virtualenv named `venv`, copies the Python files from Python/bin into that folder, and then imports modules from there; activating it solved my issue. | 934 |
32,007,199 | Hi I'm currently trying to review some material in my course and I'm having a hard time coming up with a function that we will call 'unique' that produces a list of only unique numbers from a set of lists.
So for python I was thinking of using OOP and using an iterator.
```
>>> You have a list (1, 3, 3, 3, 5)
Return the list (1, 3, 5)
```
This is what I was thinking; I'm not sure, though.
```
class Unique:
def __init__(self, s):
self.s = iter(s)
def __iter__(self):
return self
def __next__(self):
```
I'm not sure what to do for the `__next__` function of this, though. I'm also curious to see how to create a function that does the same thing in Scheme. Thanks in advance for any comments or help. | 2015/08/14 | [
"https://Stackoverflow.com/questions/32007199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5101470/"
] | Probably the most straightforward way to do this is using Python's `set` builtin.
```
def unique(*args):
result = set() # A set guarantees the uniqueness of elements
result = result.union(*args) # Include elements from all args
result = list(result) # Convert the set object to a list
return result
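# quick check (note: sets are unordered, so the output order may vary):
#   unique([1, 3, 3, 3, 5], [2, 3])  ->  e.g. [1, 2, 3, 5]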
``` | Not strictly necessary, but since you wanted to use classes:
```
class Unique:
def __init__(self):
self._list = self.user_input()
def user_input(self):
_list = raw_input()
_list = _list.split(' ')
        _list = [int(i) for i in _list]  # assign the converted ints back (the original discarded them)
return _list
def get_unique(self):
self._set = set(self._list)
return list(self._set)
obj = Unique()
print obj.get_unique()
``` | 935 |
9,882,358 | I'm writing a quick and dirty maintenance script to delete some rows and would like to avoid having to bring my ORM classes/mappings over from the main project. I have a query that looks similar to:
```
address_table = Table('address',metadata,autoload=True)
addresses = session.query(addresses_table).filter(addresses_table.c.retired == 1)
```
According to everything I've read, if I was using the ORM (not 'just' tables) and passed in something like:
```
addresses = session.query(Addresses).filter(addresses_table.c.retired == 1)
```
I could add a `.delete()` to the query, but when I try to do this using only tables I get a complaint:
```
File "/usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/query.py", line 2146, in delete
target_cls = self._mapper_zero().class_
AttributeError: 'NoneType' object has no attribute 'class_'
```
Which makes sense as it's a table, not a class. I'm quite green when it comes to SQLAlchemy; how should I go about this? | 2012/03/27 | [
"https://Stackoverflow.com/questions/9882358",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/459082/"
] | Looking through some code where I did something similar, I believe this will do what you want.
```
d = addresses_table.delete().where(addresses_table.c.retired == 1)
d.execute()
```
Calling `delete()` on a table object gives you a `sql.expression` (if memory serves), that you then execute. I've assumed above that the table is bound to a connection, which means you can just call `execute()` on it. If not, you can pass the `d` to `execute(d)` on a connection.
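For instance, a minimal sketch of the unbound case (the `engine` name here is assumed):

```
conn = engine.connect()
conn.execute(d)  # pass the delete expression to the connection instead
```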
See docs [here](https://docs.sqlalchemy.org/en/latest/core/selectable.html#sqlalchemy.sql.expression.TableClause.delete). | When you call `delete()` from a query object, SQLAlchemy performs a *bulk deletion*. And you need to choose a **strategy for the removal of matched objects from the session**. See the documentation [here](https://docs.sqlalchemy.org/en/14/orm/query.html#sqlalchemy.orm.Query.delete).
If you do not choose a strategy for the removal of matched objects from the session, then SQLAlchemy will try to evaluate the query’s criteria in Python straight on the objects in the session. **If evaluation of the criteria isn’t implemented, an error is raised.**
This is what is happening with your deletion.
If you only want to delete the records and do not care about the records in the session after the deletion, you can choose the strategy that ignores the session synchronization:
```
address_table = Table('address', metadata, autoload=True)
addresses = session.query(address_table).filter(address_table.c.retired == 1)
addresses.delete(synchronize_session=False)
``` | 940 |
32,255,039 | In Ubuntu, ubuntu-desktop needs the python3-requests package, but this package contains an outdated requests lib (2.4; current is 2.7). I need a fresh version of requests, but I can't install it.
```
$ sudo pip3 install requests --upgrade
Downloading/unpacking requests from https://pypi.python.org/packages/2.7/r/requests/requests-2.7.0-py2.py3-none-any.whl#md5=564fb256f865a79f977e57b79d31659a
Downloading requests-2.7.0-py2.py3-none-any.whl (470kB): 470kB downloaded
Installing collected packages: requests
Found existing installation: requests 2.4.3
Not uninstalling requests at /usr/lib/python3/dist-packages, owned by OS
Successfully installed requests
Cleaning up...
```
Is there a way to install a fresh requests on Ubuntu 15.04 without virtualenv? | 2015/08/27 | [
"https://Stackoverflow.com/questions/32255039",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1064115/"
] | Finally, I solved this problem by manually installing requests. Just download the package archive and run:
```
python3 setup.py install
```
This will remove the apt-get-installed files and install the fresh version. | You'd better be using virtualenv :-).
The clean way to do what you are asking is creating an OS package (a ".deb") with the newer version and installing it with dpkg.
The "unclean" way would be to delete the system-package using apt-get, synaptic, etc... and then use pip to install it on the system Python. That is bad even when the Python package does not conflict with a system created one. Again: virtualenvs are your friend.
(Note that you can create a virtualenv that does not hide your other system-packages - with the `--system-site-packages` option) | 941 |
62,273,175 | I have a python dictionary as below:
```
wordCountMap = {'aaa':1, 'bbz':2, 'bbb':2, 'zzz':10}
```
I want to sort the dictionary such that it is the decreasing order of its values, followed by lexicographically increasing order for keys with same values.
```
result = {'zzz':10, 'bbb':2, 'bbz':2, 'aaa':1}
```
Here, 'bbb' is lexicographically smaller than 'bbz'. I know that in Python 2.x we could use a compare function. How do I do this in Python 3.x ? | 2020/06/09 | [
"https://Stackoverflow.com/questions/62273175",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2846878/"
] | You can convert it to the sorted `list` by keying on the negation of the value, and the original key:
```
resultlist = sorted({'aaa':1, 'bbz':2, 'bbb':2, 'zzz':10}.items(), key=lambda x: (-x[1], x[0]))
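
# Aside (my addition, not part of the original answer): if you specifically miss
# Python 2's compare functions, functools.cmp_to_key adapts one for Python 3:
#   from functools import cmp_to_key
#   def compare(a, b):
#       if a[1] != b[1]:
#           return b[1] - a[1]                # larger values first
#       return (a[0] > b[0]) - (a[0] < b[0])  # then keys ascending
#   resultlist = sorted(wordCountMap.items(), key=cmp_to_key(compare))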
```
If it must be converted back to a `dict`, just wrap that in the `dict` constructor:
```
resultdict = dict(sorted({'aaa':1, 'bbz':2, 'bbb':2, 'zzz':10}.items(), key=lambda x: (-x[1], x[0])))
```
You could also do the work in two steps to simplify the `key` function; first sort without a `key` (which will sort on the keys of the `dict`):
```
sortedlist = sorted({'aaa':1, 'bbz':2, 'bbb':2, 'zzz':10}.items())
```
then sort by the values only (since Python's sort is stable, the key order will remain the same when the values are the same).
```
import operator # At top of script
sortedlist.sort(key=operator.itemgetter(1), reverse=True)
```
then convert to a `dict`:
```
result = dict(sortedlist)
``` | I got this answer from [here](https://stackoverflow.com/questions/9919342/sorting-a-dictionary-by-value-then-key).
Assuming your dictionary is d, you can get it sorted with:
```
d = {'aaa':1, 'bbz':2, 'bbb':2, 'zzz':10}
newD = [v[0] for v in sorted(d.items(), key=lambda kv: (-kv[1], kv[0]))]
```
newD's value:
```
['zzz', 'bbb', 'bbz', 'aaa']
``` | 942 |
71,883,326 | I'm having trouble figuring out how to do the opposite of the answer to this question (and in R not python).
[Count the amount of times value A occurs with value B](https://stackoverflow.com/questions/47834225/count-the-amount-of-times-value-a-occurs-with-value-b)
Basically I have a dataframe with a lot of combinations of pairs of columns like so:
```
df <- data.frame(id1 = c("1","1","1","1","2","2","2","3","3","4","4"),
id2 = c("2","2","3","4","1","3","4","1","4","2","1"))
```
I want to count, how often all the values in column A occur in the whole dataframe without the values from column B. So the results for this small example would be the output of:
```
df_result <- data.frame(id1 = c("1","1","1","2","2","2","3","3","4","4"),
id2 = c("2","3","4","1","3","4","1","4","2","1"),
count = c("4","5","5","3","5","4","2","3","3","3"))
```
The important criterion for this is that the final results dataframe is collapsed by the pairs (so in my example rows 1 and 2 are duplicates, and they are collapsed and summed into the total frequency that 1 is observed without 2). For tallying the count of occurrences, it's important that both columns are examined, i.e. the order of columns doesn't matter for calculating the frequency: if column A has 1 and B has 2, this counts the same as if column A has 2 and B has 1.
I can do this very slowly by filtering for each pair, but it's not really feasible for my real data where I have many many different pairs.
Any guidance is greatly appreciated. | 2022/04/15 | [
"https://Stackoverflow.com/questions/71883326",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7257777/"
] | First `paste` the two id columns together to `id12` for later matching. Then use `sapply` to go through all rows to see the records where `id1` appears in `id12` but `id2` doesn't. `sum` that value and only output the `distinct` records. Finally, remove the `id12` column.
```
library(dplyr)
df %>% mutate(id12 = paste0(id1, id2),
count = sapply(1:nrow(.),
function(x)
sum(grepl(id1[x], id12) & !grepl(id2[x], id12)))) %>%
distinct() %>%
select(-id12)
```
Or in base R completely:
```
id12 <- paste0(df$id1, df$id2)
df$count <- sapply(1:nrow(df), function(x) sum(grepl(df$id1[x], id12) & !grepl(df$id2[x], id12)))
df <- df[!duplicated(df),]
```
### Output
```
id1 id2 count
1 1 2 4
2 1 3 5
3 1 4 5
4 2 1 3
5 2 3 5
6 2 4 4
7 3 1 2
8 3 4 3
9 4 2 3
10 4 1 3
``` | A full `tidyverse` version:
```r
library(tidyverse)
df %>%
mutate(id = paste(id1, id2),
count = map(cur_group_rows(), ~ sum(str_detect(id, id1[.x]) & str_detect(id, id2[.x], negate = T))))
``` | 943 |
52,056,004 | I am trying to update the neo4j-flask application to Py2Neo V4 and I could not find what the "find\_one" function has been replaced with. (Nicole White used Py2Neo V2)
* <https://nicolewhite.github.io/neo4j-flask/>
* <https://github.com/nicolewhite/neo4j-flask>
* <https://neo4j.com/blog/building-python-web-application-using-flask-neo4j/>
My setup:
* Ubuntu 18.04
* Python 3.6.5
* Neo4j Server version: 3.4.6 (community)
Requirements.txt (the rest of the code is from github repository by Nicole White):
```
atomicwrites==1.2.0
attrs==18.1.0
backcall==0.1.0
bcrypt==3.1.4
certifi==2018.8.24
cffi==1.11.5
click==6.7
colorama==0.3.9
decorator==4.3.0
Flask==1.0.2
ipykernel==4.8.2
ipython==6.5.0
ipython-genutils==0.2.0
itsdangerous==0.24
jedi==0.12.1
Jinja2==2.10
jupyter-client==5.2.3
jupyter-console==5.2.0
jupyter-core==4.4.0
MarkupSafe==1.0
more-itertools==4.3.0
neo4j-driver==1.6.1
neotime==1.0.0
parso==0.3.1
passlib==1.7.1
pexpect==4.6.0
pickleshare==0.7.4
pkg-resources==0.0.0
pluggy==0.7.1
prompt-toolkit==1.0.15
ptyprocess==0.6.0
py==1.6.0
py2neo==4.1.0
pycparser==2.18
Pygments==2.2.0
pytest==3.7.3
python-dateutil==2.7.3
pytz==2018.5
pyzmq==17.1.2
simplegeneric==0.8.1
six==1.11.0
tornado==5.1
traitlets==4.3.2
urllib3==1.22
wcwidth==0.1.7
Werkzeug==0.14.1
```
Error I received when registering a user:
>
> AttributeError: 'Graph' object has no attribute 'find\_one'
>
>
> "The User.find() method uses py2neo’s Graph.find\_one() method to find
> a node in the database with label :User and the given username,
> returning a py2neo.Node object. "
>
>
>
In Py2Neo V3 the function `find_one` -> <https://py2neo.org/v3/database.html?highlight=find#py2neo.database.Graph.find_one> is available.
In Py2Neo V4 <https://py2neo.org/v4/matching.html> there is no find function anymore.
Does someone have an idea how to solve this in V4, or is downgrading the way to go here? | 2018/08/28 | [
"https://Stackoverflow.com/questions/52056004",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4606342/"
] | py2neo v4 has a `first` function that can be used with a `NodeMatcher`. See: <https://py2neo.org/v4/matching.html#py2neo.matching.NodeMatch.first>
That said... v4 has introduced GraphObjects which (so far at least) I've found pretty neat.
In the linked github example Users are created with:
```
user = Node('User', username=self.username, password=bcrypt.encrypt(password))
graph.create(user)
```
and found with
```
user = graph.find_one('User', 'username', self.username)
```
In py2neo v4 I would do this with
```
class User(GraphObject):
__primarykey__ = "username"
username = Property()
password = Property()
lukas = User()
lukas.username = "lukasott"
lukas.password = bcrypt.encrypt('somepassword')
graph.push(lukas)
```
and
```
user = User.match(graph, "lukasott").first()
```
The `first` function, as I understand it, provides the same guarantee as `find_one`, quoted from the v3 docs: "and does not fail if more than one matching node is found."
The attributes are set with `Property()` to provide an accessor to the property of the underlying node. (Documentation here: <https://py2neo.org/v4/ogm.html#py2neo.ogm.Property>)
```py
from py2neo import Graph, Node
from passlib.hash import bcrypt
from py2neo.ogm import GraphObject, Property
graph = Graph()
class User(GraphObject):
__primarykey__ = 'username'
username = Property()
password = Property()
def __init__(self, username):
self.username = username
def find(self):
user = self.match(graph, self.username).first()
return user
def register(self, password):
if not self.find():
user = Node('User', username=self.username, password=bcrypt.encrypt(password))
graph.create(user)
return True
else:
return False
``` | 945 |
32,681,203 | I use iPython mostly via notebooks but also in the terminal. I just created my default profile by running `ipython profile create`.
I can't seem to figure out how to have the profile run several magic commands that I use every time. I tried to look this up online and in a book I'm reading but can't get it to work. For example, if I want `%debug` activated for every new notebook I tried adding these lines to my config file:
```
c.InteractiveShellApp.extensions = ['debug']
```
or
```
c.TerminalPythonApp.extensions = ['debug']
```
and I either get import errors or nothing. My (closely related) questions are the following:
1. What line to do I add to my ipython config file to activate magic commands? Some require parameters, e.g. `%reload_ext autoreload` and `%autoreload 2`. How do I also pass these parameters in the config file?
2. Can I separate which get added for terminal vs. notebooks in a single config file or must I set up separate profiles if I want different magic's activated? (e.g., `matplotlib` inline or not). Do the two lines above affect notebooks vs. terminal settings (i.e., `c.InteractiveShellApp` vs. `c.TerminalPythonApp`)?
Thank you! | 2015/09/20 | [
"https://Stackoverflow.com/questions/32681203",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2238779/"
] | Execute magics as follows:
```
get_ipython().magic(u"%reload_ext autoreload")
get_ipython().magic(u"%autoreload 2")
```
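As an aside (not in the original answer): on newer IPython versions `magic()` is deprecated in favour of `run_line_magic`, which takes the magic name and its argument line separately:

```
get_ipython().run_line_magic(u"reload_ext", u"autoreload")
get_ipython().run_line_magic(u"autoreload", u"2")
```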
You can put those lines in your startup script here:
```
~/.ipython/profile_default/startup/00-first.py
``` | To run, for example, the %pylab magic command on startup, do the following:
```
ipython profile create pylab
```
Add the following code to your .ipython\profile\_pylab\ipython\_config.py
```
c.InteractiveShellApp.exec_lines = ['%pylab']
```
and start ipython
```
ipython --profile=pylab
``` | 950 |
29,658,335 | I'm curious to know if it makes a difference where the '&' operator is used in code when a process has input/output redirection to run a process in the background
What are the differences/are there any differences between these lines of code in terms of running the process in the background. If there are, how can I determine what the differences are going to be?
```
setsid python script.py < /dev/zero &> log.txt &
setsid python script.py < /dev/zero & > log.txt &
setsid python script.py < /dev/zero > log.txt &
setsid python script.py & < /dev/zero > log.txt
``` | 2015/04/15 | [
"https://Stackoverflow.com/questions/29658335",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | ### Control operator
There are two uses of `&` here. One is as a so-called **control operator**. Every command is terminated by a control operator such as `&`, `;` or `<newline>` . The difference between them is that `;` and `<newline>` run the command in the foreground and `&` does it in the background.
```
setsid python script.py < /dev/zero & > log.txt &
setsid python script.py & < /dev/zero > log.txt
```
These two lines, therefore, actually execute two commands each. The first is equivalent to the two commands:
```
setsid python script.py < /dev/zero &
> log.txt &
```
And the second is equivalent to:
```
setsid python script.py &
< /dev/zero > log.txt
```
If you're wondering, yes, `> log.txt` and `< /dev/zero > log.txt` are both legal commands. Lacking a command name, they simply process the redirections: each one creates an empty file called `log.txt`.
### Redirection
```
setsid python script.py < /dev/zero &> log.txt &
```
This version with `&>` is different from the one with `& >`. `&>` without a space is a special **redirection operator** in bash that redirects both stdout and stderr.
```
setsid python script.py < /dev/zero > log.txt &
```
This final version is similar to the previous one except it only redirects stdout to `log.txt`; stderr continues to go to the terminal. | It makes a difference. `&` doubles as a command separator (just like `;` is a command separator). What you're really doing in something like
```
setsid python script.py & < /dev/zero > log.txt
```
is running `setsid python script.py` in the background and also running a "null" command (which comes after the `&`) in the foreground (an additional `&` at the end would run it in the background). That "null" command has its stdin redirected to */dev/zero* and its stdout redirected to *log.txt*.
Also, `&>` is a special operator in Bash. `foo &>out` redirects both stdout and stderr to *out* while running `foo`. It is not the same as `foo & >out`, which runs `foo` in the background also redirects the output of a null command to *out*.
(This support for "null" commands is why idioms like `>foo` on a separate line, which you sometimes see in shell scripts, work for truncating a file.) | 951 |
9,343,498 | I'm implementing the component labelling algorithm as in [this paper](http://www.iis.sinica.edu.tw/papers/fchang/1362-F.pdf) using python and opencv. It requires checking the input image pixel by pixel and performing the so-called contour tracing subroutine to assign labels to the blobs of a binary image.
I managed to get it running, but it seems very slow. Profiling the code shows that the for-loop to access the pixels seems to be the bottleneck. It takes about 200ms for a 256px\*256px image. Here's roughly what I do:
```
for i in range(image.height):
    for j in range(image.width):
        p = image[i, j]
        pa = image[i - 1, j]
        pb = image[i + 1, j]
        # etc...
```
where "image" is a binary opencv image.
I wonder if there's a faster way of doing it so that it's usable also for video applications. I'm targeting something like 40-50ms running time for the same problem size, to get 20-25fps. 10-15fps would probably be acceptable as well (66-100ms running time).
Any hints, ideas what I can do is much appreciated. | 2012/02/18 | [
"https://Stackoverflow.com/questions/9343498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/567989/"
] | I'm not sure if I understand your question, but each key can have only one object associated with it. In your case, you're using an NSString object. If you replaced the NSString with some object that you create, say AnObjectWithAThingAndAPersonAndAPlace, you could have multiple attributes associated with each key.
---
I think I understand what you want now. What you want is not an object with arrays associated to it, but an array of objects. You can do it with NSDictionary objects.
```
- (void)setupArray
{
NSMutableArray *objectArray = [[NSMutableArray alloc] init];
NSMutableDictionary *object1 = [[NSMutableDictionary alloc] init];
[object1 setObject:@"Apple" forKey:@"thing"];
[object1 setObject:@"Alex" forKey:@"person"];
[object1 setObject:@"Alabama" forKey:@"place"];
[object1 setObject:@"Azure" forKey:@"color"];
[objectArray addObject:object1];
NSMutableDictionary *object2 = [[NSMutableDictionary alloc] init];
[object2 setObject:@"Banana" forKey:@"thing"];
[object2 setObject:@"Bill" forKey:@"person"];
[object2 setObject:@"Boston" forKey:@"place"];
[object2 setObject:@"Blue" forKey:@"color"];
[objectArray addObject:object2];
datasource = [NSArray arrayWithArray:objectArray];
}
```
Then in your UITableViewDataSource method
```
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
NSInteger row = [indexPath row];
NSDictionary *object = [datasouce objectAtIndex:row];
...
}
```
and you can retrieve all the strings for that object.
If I were to do something like this, I would probably create a plist file containing the array. Then your setupArray method could look like this:
```
- (void)setupArray
{
NSString *filePath = [[NSBundle mainBundle] pathForResource:@"YourFileName" ofType:@"plist"];
NSDictionary *plistData = [NSDictionary dictionaryWithContentsOfFile:filePath];
datasource = (NSArray*)[plistData objectForKey:@"ObjectsForTableView"];
}
```
---
I thought I would add a few more comments... In case it isn't obvious, the objects you add to your dictionary don't have to be NSStrings, they can be any object, such as an NSNumber, which may be useful for you in the case of your baseball players. Also, you may wish to create a custom player object instead of using an NSDictionary. And you may want to have something like a Core Data database where the players are stored and retrieved (instead of hard coding them or getting them from a plist file). I hope my answer can get you started on the right path though. | Right now your `datasource` object is an NSArray. You need to make it an NSMutableArray. Declare it as an NSMutableArray in your header file and then you can do this:
```
datasource = [[states allKeys] mutableCopy];
[datasource addObject:whatever];
```
But, it sounds like the structure you are actually looking for is an NSMutableArray of NSDictionary objects. Like this:
```
NSDictionary *item = [NSDictionary dictionaryWithObjectsAndKeys:@"object1", @"key1", @"object2", @"key2", nil];
[datasource addObject:item];
``` | 954 |
51,100,224 | I've written a script in python to get different links leading to different articles from a webpage. Upon running my script I can get them flawlessly. However, the problem I'm facing is that the article links traverse multiple pages, as there are too many of them to fit within a single page. If I click on the next page button, I can see the attached information in the developer tools, which in reality produces an ajax call through a post request. As there are no links attached to that next page button, I can't find any way to go on to the next page and parse links from there. I've tried a `post request` with that `formdata`, but it doesn't seem to work. Where am I going wrong?
[Link to the landing page containing articles](https://www.ncbi.nlm.nih.gov/pubmed/?term=%222015%22%5BDate+-+Publication%5D+%3A+%223000%22%5BDate+-+Publication%5D)
This is the information I get using chrome dev tools when I click on the next page button:
```
GENERAL
=======================================================
Request URL: https://www.ncbi.nlm.nih.gov/pubmed/
Request Method: POST
Status Code: 200 OK
Remote Address: 130.14.29.110:443
Referrer Policy: origin-when-cross-origin
RESPONSE HEADERS
=======================================================
Cache-Control: private
Connection: Keep-Alive
Content-Encoding: gzip
Content-Security-Policy: upgrade-insecure-requests
Content-Type: text/html; charset=UTF-8
Date: Fri, 29 Jun 2018 10:27:42 GMT
Keep-Alive: timeout=1, max=9
NCBI-PHID: 396E3400B36089610000000000C6005E.m_12.03.m_8
NCBI-SID: CE8C479DB3510951_0083SID
Referrer-Policy: origin-when-cross-origin
Server: Apache
Set-Cookie: ncbi_sid=CE8C479DB3510951_0083SID; domain=.nih.gov; path=/; expires=Sat, 29 Jun 2019 10:27:42 GMT
Set-Cookie: WebEnv=1Jqk9ZOlyZSMGjHikFxNDsJ_ObuK0OxHkidgMrx8vWy2g9zqu8wopb8_D9qXGsLJQ9mdylAaDMA_T-tvHJ40Sq_FODOo33__T-tAH%40CE8C479DB3510951_0083SID; domain=.nlm.nih.gov; path=/; expires=Fri, 29 Jun 2018 18:27:42 GMT
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
Transfer-Encoding: chunked
Vary: Accept-Encoding
X-UA-Compatible: IE=Edge
X-XSS-Protection: 1; mode=block
REQUEST HEADERS
========================================================
Accept: text/html, */*; q=0.01
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9
Connection: keep-alive
Content-Length: 395
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Cookie: ncbi_sid=CE8C479DB3510951_0083SID; _ga=GA1.2.1222765292.1530204312; _gid=GA1.2.739858891.1530204312; _gat=1; WebEnv=18Kcapkr72VVldfGaODQIbB2bzuU50uUwU7wrUi-x-bNDgwH73vW0M9dVXA_JOyukBSscTE8Qmd1BmLAi2nDUz7DRBZpKj1wuA_QB%40CE8C479DB3510951_0083SID; starnext=MYGwlsDWB2CmAeAXAXAbgA4CdYDcDOsAhpsABZoCu0IA9oQCZxLJA===
Host: www.ncbi.nlm.nih.gov
NCBI-PHID: 396E3400B36089610000000000C6005E.m_12.03
Origin: https://www.ncbi.nlm.nih.gov
Referer: https://www.ncbi.nlm.nih.gov/pubmed
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36
X-Requested-With: XMLHttpRequest
FORM DATA
========================================================
p$l: AjaxServer
portlets: id=relevancesortad:sort=;id=timelinead:blobid=NCID_1_120519284_130.14.22.215_9001_1530267709_1070655576_0MetA0_S_MegaStore_F_1:yr=:term=%222015%22%5BDate%20-%20Publication%5D%20%3A%20%223000%22%5BDate%20-%20Publication%5D;id=reldata:db=pubmed:querykey=1;id=searchdetails;id=recentactivity
load: yes
```
This is my script so far (the get request is working flawlessly if uncommented, but for the first page):
```
import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup
geturl = "https://www.ncbi.nlm.nih.gov/pubmed/?term=%222015%22%5BDate+-+Publication%5D+%3A+%223000%22%5BDate+-+Publication%5D"
posturl = "https://www.ncbi.nlm.nih.gov/pubmed/"
# res = requests.get(geturl,headers={"User-Agent":"Mozilla/5.0"})
# soup = BeautifulSoup(res.text,"lxml")
# for items in soup.select("div.rslt p.title a"):
# print(items.get("href"))
FormData={
'p$l': 'AjaxServer',
'portlets': 'id=relevancesortad:sort=;id=timelinead:blobid=NCID_1_120519284_130.14.22.215_9001_1530267709_1070655576_0MetA0_S_MegaStore_F_1:yr=:term=%222015%22%5BDate%20-%20Publication%5D%20%3A%20%223000%22%5BDate%20-%20Publication%5D;id=reldata:db=pubmed:querykey=1;id=searchdetails;id=recentactivity',
'load': 'yes'
}
req = requests.post(posturl,data=FormData,headers={"User-Agent":"Mozilla/5.0"})
soup = BeautifulSoup(req.text,"lxml")
for items in soup.select("div.rslt p.title a"):
print(items.get("href"))
```
Btw, the url in the browser becomes "<https://www.ncbi.nlm.nih.gov/pubmed>" when I click on the next page link.
I don't wish to go for any solution related to any browser simulator. Thanks in advance. | 2018/06/29 | [
"https://Stackoverflow.com/questions/51100224",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9189799/"
] | The content is heavily dynamic, so it would be best to use `selenium` or similar clients, but I realize that this wouldn't be practical as the number of results is so large. So, we'll have to analyse the HTTP requests submitted by the browser and simulate them with `requests`.
The contents of the next page are loaded by a POST request to `/pubmed`, and the post data are the input fields of the `EntrezForm` form. The form submission is controlled by js (triggered when the 'next page' button is clicked), and is performed with the `.submit()` method.
After some examination I discovered some interesting fields:
* `EntrezSystem2.PEntrez.PubMed.Pubmed_ResultsPanel.Pubmed_Pager.CurrPage` and
`EntrezSystem2.PEntrez.PubMed.Pubmed_ResultsPanel.Pubmed_Pager.cPage` indicate the current and next page.
* `EntrezSystem2.PEntrez.DbConnector.Cmd` seems to perform a database query. If we don't submit this field the results won't change.
* `EntrezSystem2.PEntrez.PubMed.Pubmed_ResultsPanel.Pubmed_DisplayBar.PageSize` and
`EntrezSystem2.PEntrez.PubMed.Pubmed_ResultsPanel.Pubmed_DisplayBar.PrevPageSize` indicate the number of results per page.
With that information I was able to get multiple pages with the script below.
```
import requests
from bs4 import BeautifulSoup

geturl = "https://www.ncbi.nlm.nih.gov/pubmed/?term=%222015%22%5BDate+-+Publication%5D+%3A+%223000%22%5BDate+-+Publication%5D"
posturl = "https://www.ncbi.nlm.nih.gov/pubmed/"

s = requests.session()
s.headers["User-Agent"] = "Mozilla/5.0"

# Load the first page and collect every input of the EntrezForm;
# the POSTs for the following pages must echo all of these fields back.
soup = BeautifulSoup(s.get(geturl).text, "lxml")
inputs = {i['name']: i.get('value', '') for i in soup.select('form#EntrezForm input[name]')}

results = int(inputs['EntrezSystem2.PEntrez.PubMed.Pubmed_ResultsPanel.Pubmed_ResultsController.ResultCount'])
items_per_page = 100
pages = results // items_per_page + int(bool(results % items_per_page))  # ceiling division

inputs['EntrezSystem2.PEntrez.PubMed.Pubmed_ResultsPanel.Pubmed_DisplayBar.PageSize'] = items_per_page
inputs['EntrezSystem2.PEntrez.PubMed.Pubmed_ResultsPanel.Pubmed_DisplayBar.PrevPageSize'] = items_per_page
inputs['EntrezSystem2.PEntrez.DbConnector.Cmd'] = 'PageChanged'

links = []
for page in range(pages):
    # CurrPage is the (1-based) page we request; cPage is the page we came from.
    inputs['EntrezSystem2.PEntrez.PubMed.Pubmed_ResultsPanel.Pubmed_Pager.CurrPage'] = page + 1
    inputs['EntrezSystem2.PEntrez.PubMed.Pubmed_ResultsPanel.Pubmed_Pager.cPage'] = page
    res = s.post(posturl, inputs)
    soup = BeautifulSoup(res.text, "lxml")
    items = [i['href'] for i in soup.select("div.rslt p.title a[href]")]
    links += items
    for i in items:
        print(i)
```
I'm requesting 100 items per page because higher numbers seem to 'break' the server, but you should be able to adjust that number with some error checking.
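For instance, a minimal sketch of that error checking might look like the function below (the halving strategy, the function name, and the result-link sanity check are my assumptions, not part of the original script):

```
from bs4 import BeautifulSoup

def post_page(session, url, form, size_key, page_size, retries=3):
    """POST one results page, halving the page size whenever the
    response comes back without any result links."""
    for _ in range(retries):
        form[size_key] = page_size
        res = session.post(url, form)
        res.raise_for_status()                       # outright HTTP errors
        soup = BeautifulSoup(res.text, "lxml")
        if soup.select("div.rslt p.title a[href]"):  # page looks sane
            return soup
        page_size //= 2                              # back off and retry
    raise RuntimeError("no results after {} retries".format(retries))
```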
Finally, the links are displayed in descending order (`/29960282`, `/29960281`, ...), so I thought we could calculate the links without performing any POST requests:
```
geturl = "https://www.ncbi.nlm.nih.gov/pubmed/?term=%222015%22%5BDate+-+Publication%5D+%3A+%223000%22%5BDate+-+Publication%5D"
posturl = "https://www.ncbi.nlm.nih.gov/pubmed/"

s = requests.session()
s.headers["User-Agent"] = "Mozilla/5.0"
soup = BeautifulSoup(s.get(geturl).text, "lxml")

# The newest result has the highest numeric id, and the ids count down
# from there, one per result.
results = int(soup.select_one('[name$=ResultCount]')['value'])
first_link = int(soup.select_one("div.rslt p.title a[href]")['href'].split('/')[-1])
last_link = first_link - results

links = [posturl + str(i) for i in range(first_link, last_link, -1)]
```
But unfortunately the results are not accurate. | Not to treat this question as an XY problem (solving it as asked would be a very interesting result in itself), but I have found a much more efficient solution for this *specific* issue: using [NCBI's Entrez Programming Utilities](https://www.ncbi.nlm.nih.gov/books/NBK25497/) and a handy [open-source, unofficial Entrez repo](https://github.com/jordibc/entrez).
With the `entrez.py` script from the Entrez repo in my `PATH`, I've created this script that prints out the links just as you want them:
```
from entrez import on_search
import re
db = 'pubmed'
term = '"2015"[Date - Publication] : "3000"[Date - Publication]'
link_base = f'https://www.ncbi.nlm.nih.gov/{db}/'
def links_generator(db, term):
    # on_search yields the ESearch response line by line; grab every <Id>.
    for line in on_search(db=db, term=term, tool='link'):
        match = re.search(r'<Id>([0-9]+)</Id>', line)
        if match:
            yield link_base + match.group(1)
for link in links_generator(db, term):
print(link)
```
Output:
```
https://www.ncbi.nlm.nih.gov/pubmed/29980165
https://www.ncbi.nlm.nih.gov/pubmed/29980164
https://www.ncbi.nlm.nih.gov/pubmed/29980163
https://www.ncbi.nlm.nih.gov/pubmed/29980162
https://www.ncbi.nlm.nih.gov/pubmed/29980161
https://www.ncbi.nlm.nih.gov/pubmed/29980160
https://www.ncbi.nlm.nih.gov/pubmed/29980159
https://www.ncbi.nlm.nih.gov/pubmed/29980158
https://www.ncbi.nlm.nih.gov/pubmed/29980157
https://www.ncbi.nlm.nih.gov/pubmed/29980156
https://www.ncbi.nlm.nih.gov/pubmed/29980155
https://www.ncbi.nlm.nih.gov/pubmed/29980154
https://www.ncbi.nlm.nih.gov/pubmed/29980153
https://www.ncbi.nlm.nih.gov/pubmed/29980152
https://www.ncbi.nlm.nih.gov/pubmed/29980151
https://www.ncbi.nlm.nih.gov/pubmed/29980150
https://www.ncbi.nlm.nih.gov/pubmed/29980149
https://www.ncbi.nlm.nih.gov/pubmed/29980148
...
```
Which, if compared to the [frontend page](https://www.ncbi.nlm.nih.gov/pubmed/?term=%222015%22%5BDate+-+Publication%5D+%3A+%223000%22%5BDate+-+Publication%5D), are in the same order. :-) | 956 |
11,994,325 | My flask application currently consists of a single `test.py` file with multiple routes and the `main()` route defined. Is there some way I could create a `test2.py` file that contains routes that were not handled in `test.py`?
```
@app.route('/somepath')
def somehandler():
# Handler code here
```
I am concerned that there are too many routes in `test.py` and would like to make it so that I can run `python test.py`, which will also pick up the routes in `test2.py` as if they were part of the same file. What changes do I have to make in `test.py` and/or include in `test2.py` to get this to work? | 2012/08/16 | [
"https://Stackoverflow.com/questions/11994325",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971888/"
] | I would like to recommend [flask-empty](https://github.com/italomaia/flask-empty/) at GitHub.
It provides an easy way to understand [Blueprints](http://flask.pocoo.org/docs/blueprints/), multiple views and [extensions](http://flask.pocoo.org/docs/extensiondev/). | Dividing the app into blueprints is a great idea. However, if this isn't enough, and if you want to then divide the Blueprint itself into multiple py files, this is also possible using the regular Python module import system, and then looping through all the routes that get imported from the other files.
I created a Gist with the code for splitting a Blueprint across multiple files:
<https://gist.github.com/Jaza/61f879f577bc9d06029e>
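The pattern is roughly the following (a hedged sketch under my own module and blueprint names, not the Gist's actual code):

```
# app.py -- register the blueprint on the application
from flask import Flask
from mybp import bp

app = Flask(__name__)
app.register_blueprint(bp)

if __name__ == '__main__':
    app.run()
```

```
# mybp/__init__.py -- the blueprint shared by all the route files
from flask import Blueprint

bp = Blueprint('mybp', __name__)

# Importing the view modules runs their @bp.route decorators,
# which registers every route on this one blueprint.
from mybp import views_a, views_b  # noqa: F401
```

```
# mybp/views_a.py -- one of several files holding routes
from mybp import bp

@bp.route('/somepath')
def somehandler():
    return 'Handled in views_a.py'
```

Splitting the views across `views_a.py`, `views_b.py`, etc. keeps each file small while everything still hangs off one blueprint.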
As far as I'm aware, this is the only feasible way to divide up a Blueprint at the moment. It's not possible to create "sub-blueprints" in Flask, although there's an issue open with a lot of discussion about this:
<https://github.com/mitsuhiko/flask/issues/593>
Also, even if it were possible (and it's probably do-able using some of the snippets from that issue thread), sub-blueprints may be too restrictive for your use case anyway - e.g. if you don't want all the routes in a sub-module to have the same URL sub-prefix. | 957 |
45,776,460 | What I'm trying to do is search StackOverflow for answers. I know it's probably been done before, but I'd like to do it again, with a GUI. Anyway, that is a little bit down the road; right now I'm just trying to get to the page with the most votes for a question. While trying to work out how to get into a nested div for the link to the first answer, I noticed that my search was off and taking me to the wrong place. I am using BeautifulSoup, Requests, and Python 3 to do this.
```
#!/usr/bin/env python3
import requests
from bs4 import BeautifulSoup
payload = {'q': 'open GL cube'}
page = requests.get("https://stackoverflow.com/search",params=payload)
print(" URL IS ", page.url)
data = page.content
soup = BeautifulSoup(data, 'lxml')
top = soup.find('a', {'title':'Highest voted search results'})['href']
print(top)
page2 = requests.get("https://stackoverflow.com",params=top)
print(page2.url)
data2 = page2.content
topSoup = BeautifulSoup(data2, 'lxml')
for div in topSoup.find_all('div', {'class':'result-link'}):
    print(div.text)
```
I get the link and it outputs `/search?tab=votes&q=open%GL%20cube`,
but when I pass it in with the params it becomes
<https://stackoverflow.com/?/search?tab=votes&q=open%GL%20cube>
I would like to get rid of the /?/ | 2017/08/19 | [
"https://Stackoverflow.com/questions/45776460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/977034/"
] | Don't pass it as parameters, just add it to the URL:
```
page2 = requests.get("https://stackoverflow.com" + top)
```
Once you pass parameters to `requests`, it appends a `?` to the URL before concatenating the new parameters onto it.
[Requests - Passing Parameters In URLs](http://docs.python-requests.org/en/master/user/quickstart/#passing-parameters-in-urls)
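To see the difference, here is a minimal sketch reproducing the behaviour from the question:

```
import requests

top = "/search?tab=votes&q=open+GL+cube"

# Passing the path via params= treats it as a query string,
# so requests prepends '?' to it:
r1 = requests.get("https://stackoverflow.com", params=top)
print(r1.url)  # roughly: https://stackoverflow.com/?/search?tab=votes&q=open+GL+cube

# Concatenating keeps it as part of the path:
r2 = requests.get("https://stackoverflow.com" + top)
print(r2.url)  # roughly: https://stackoverflow.com/search?tab=votes&q=open+GL+cube
```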
Also, as stated, you should really use the API. | Why not use the [API](https://api.stackexchange.com/docs)?
There are plenty of search options (<https://api.stackexchange.com/docs/advanced-search>), and you get the response in JSON, so there is no need for ugly HTML parsing.
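A minimal sketch of querying the `/search/advanced` endpoint (the endpoint and parameter names are taken from the linked docs):

```
import requests

resp = requests.get(
    "https://api.stackexchange.com/2.2/search/advanced",
    params={
        "order": "desc",
        "sort": "votes",        # highest voted first
        "q": "open GL cube",
        "site": "stackoverflow",
    },
)

# Each item carries the question's score, title and link, among other fields.
for item in resp.json().get("items", []):
    print(item["score"], item["link"])
```

The JSON items can then be filtered or paged however you like. | 967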
55,318,093 | I am learning and trying to make a snake game in Python 3.
I am importing turtle.
I am using: Linux Mint 19, PyCharm, Python 3.7, python3-tk.
```
Traceback (most recent call last):
File "/home/buszter/PycharmProjects/untitled1/snake.py", line 2, in <module>
import turtle
ModuleNotFoundError: No module named 'turtle'
```
Everywhere I read that turtle should be preinstalled, but I still don't have it :(
I tried `pip install turtle` and it says:
```
pip install turtle
Collecting turtle
Using cached https://files.pythonhosted.org/packages/ff/f0/21a42e9e424d24bdd0e509d5ed3c7dfb8f47d962d9c044dba903b0b4a26f/turtle-0.0.2.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-kvf9on0y/turtle/setup.py", line 40
except ValueError, ve:
^
SyntaxError: invalid syntax
-------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-kvf9on0y/turtle/
```
EDIT
screenshot of settings of the project in pycharm
[![screenshot of settings of the project in pycharm](https://i.stack.imgur.com/d52jy.png)](https://i.stack.imgur.com/d52jy.png) | 2019/03/23 | [
"https://Stackoverflow.com/questions/55318093",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10909285/"
] | I know that this is kind of an old topic, but I just had the same problem on my Fedora 31.
Reinstalling packages didn't work.
What worked was installing the IDLE programming tool (that's just a Python IDE for kids), which also installs the tkinter module.
I think that installing just the `python3-tkinter` package (that's how it is named in the Fedora repo) would work as well, because `turtle` sits on top of the Tk module. | Most probably the Python your PyCharm is using is not Python 3.7. Try opening a Python prompt and running `import turtle`, because it should already be packaged with `python`.
(<https://docs.python.org/3/library/turtle.html>)
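A quick way to check which interpreter PyCharm is actually running (a minimal sketch):

```
import sys

print(sys.executable)  # path of the interpreter actually in use
print(sys.version)

import turtle          # succeeds only if this interpreter has Tk support
print(turtle.__file__)
```

If the path is not the Python 3.7 you expect, point the project interpreter at the right one in PyCharm's settings. | 968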