Title: string (lengths 15 to 150)
A_Id: int64 (2.98k to 72.4M)
Users Score: int64 (-17 to 470)
Q_Score: int64 (0 to 5.69k)
ViewCount: int64 (18 to 4.06M)
Database and SQL: int64 (0 to 1)
Tags: string (lengths 6 to 105)
Answer: string (lengths 11 to 6.38k)
GUI and Desktop Applications: int64 (0 to 1)
System Administration and DevOps: int64 (1 to 1)
Networking and APIs: int64 (0 to 1)
Other: int64 (0 to 1)
CreationDate: string (lengths 23 to 23)
AnswerCount: int64 (1 to 64)
Score: float64 (-1 to 1.2)
is_accepted: bool (2 classes)
Q_Id: int64 (1.85k to 44.1M)
Python Basics and Environment: int64 (0 to 1)
Data Science and Machine Learning: int64 (0 to 1)
Web Development: int64 (0 to 1)
Available Count: int64 (1 to 17)
Question: string (lengths 41 to 29k)
failed in "sudo pip"
41,135,807
2
13
12,302
0
python,permissions,pip,sudo
If you have two versions of pip, for example /usr/lib/pip and /usr/local/lib/pip belonging to Python 2.6 and 2.7 respectively, you can delete /usr/lib/pip and make a link pip => /usr/local/lib/pip. You can see that the pip commands called by "pip" and by "sudo pip" are different; making them consistent can fix it.
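A minimal sketch of that check-and-relink idea, with the paths this thread reports (adjust to whatever `which` actually prints on your machine):

```sh
# see which pip each invocation resolves to
which pip          # e.g. /usr/local/bin/pip  (the working pip 7.1.2)
sudo which pip     # e.g. /usr/bin/pip        (the broken one expecting pip==6.1.1)

# keep a backup, then point the sudo copy at the working one
sudo mv /usr/bin/pip /usr/bin/pip.bak
sudo ln -s /usr/local/bin/pip /usr/bin/pip

sudo pip --version   # should now match the non-sudo version
```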
0
1
0
0
2016-01-19T08:38:00.000
6
0.066568
false
34,871,994
0
0
1
5
Please help me. server : aws ec2 os : amazon linux python version : 2.7.10 $ pip --version pip 7.1.2 from /usr/local/lib/python2.7/site-packages (python 2.7) It's OK. But... $ sudo pip --version Traceback (most recent call last): File "/usr/bin/pip", line 5, in from pkg_resources import load_entry_point File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3020, in working_set = WorkingSet._build_master() File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 616, in _build_master return cls._build_from_requirements(__requires__) File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 629, in _build_from_requirements dists = ws.resolve(reqs, Environment()) File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 807, in resolve raise DistributionNotFound(req) pkg_resources.DistributionNotFound: pip==6.1.1
failed in "sudo pip"
47,222,853
0
13
12,302
0
python,permissions,pip,sudo
Assuming two pip versions are present, at /usr/bin/pip and /usr/local/bin/pip, where the first is used by the sudo user and the second by the normal user, you can run the command below as the sudo user so that it uses the higher version of pip for the installation: /usr/local/bin/pip install jupyter
0
1
0
0
2016-01-19T08:38:00.000
6
0
false
34,871,994
0
0
1
5
Please help me. server : aws ec2 os : amazon linux python version : 2.7.10 $ pip --version pip 7.1.2 from /usr/local/lib/python2.7/site-packages (python 2.7) It's OK. But... $ sudo pip --version Traceback (most recent call last): File "/usr/bin/pip", line 5, in from pkg_resources import load_entry_point File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3020, in working_set = WorkingSet._build_master() File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 616, in _build_master return cls._build_from_requirements(__requires__) File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 629, in _build_from_requirements dists = ws.resolve(reqs, Environment()) File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 807, in resolve raise DistributionNotFound(req) pkg_resources.DistributionNotFound: pip==6.1.1
failed in "sudo pip"
34,874,730
0
13
12,302
0
python,permissions,pip,sudo
As you can see, with sudo you run another pip script. With sudo: /usr/bin/pip, which is the older version; without sudo: /usr/local/lib/python2.7/site-packages/pip, which is the latest version. The error you encountered is sometimes caused by using different package managers; a common way to solve it is the one already proposed by @Ali: sudo easy_install --upgrade pip
0
1
0
0
2016-01-19T08:38:00.000
6
0
false
34,871,994
0
0
1
5
Please help me. server : aws ec2 os : amazon linux python version : 2.7.10 $ pip --version pip 7.1.2 from /usr/local/lib/python2.7/site-packages (python 2.7) It's OK. But... $ sudo pip --version Traceback (most recent call last): File "/usr/bin/pip", line 5, in from pkg_resources import load_entry_point File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3020, in working_set = WorkingSet._build_master() File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 616, in _build_master return cls._build_from_requirements(__requires__) File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 629, in _build_from_requirements dists = ws.resolve(reqs, Environment()) File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 807, in resolve raise DistributionNotFound(req) pkg_resources.DistributionNotFound: pip==6.1.1
failed in "sudo pip"
34,872,132
17
13
12,302
0
python,permissions,pip,sudo
Try this: sudo easy_install --upgrade pip. By executing this you are upgrading the version of pip that the sudo user is using.
0
1
0
0
2016-01-19T08:38:00.000
6
1
false
34,871,994
0
0
1
5
Please help me. server : aws ec2 os : amazon linux python version : 2.7.10 $ pip --version pip 7.1.2 from /usr/local/lib/python2.7/site-packages (python 2.7) It's OK. But... $ sudo pip --version Traceback (most recent call last): File "/usr/bin/pip", line 5, in from pkg_resources import load_entry_point File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3020, in working_set = WorkingSet._build_master() File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 616, in _build_master return cls._build_from_requirements(__requires__) File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 629, in _build_from_requirements dists = ws.resolve(reqs, Environment()) File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 807, in resolve raise DistributionNotFound(req) pkg_resources.DistributionNotFound: pip==6.1.1
failed in "sudo pip"
39,518,909
24
13
12,302
0
python,permissions,pip,sudo
I had the same problem. Run sudo which pip, then sudo vim /usr/bin/pip and modify any pip==6.1.1 to pip==8.1.2, or whichever version you just upgraded to. It worked for me.
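For context, the file being edited is a small setuptools-generated wrapper; it typically looks roughly like this (the exact header comment may differ between versions; the pins shown are the ones from the question):

```python
#!/usr/bin/python
# EASY-INSTALL-ENTRY-SCRIPT: 'pip==6.1.1','console_scripts','pip'
__requires__ = 'pip==6.1.1'   # change this pin to the installed version, e.g. pip==8.1.2
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    sys.exit(
        load_entry_point('pip==6.1.1', 'console_scripts', 'pip')()  # ...and this one
    )
```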
0
1
0
0
2016-01-19T08:38:00.000
6
1
false
34,871,994
0
0
1
5
Please help me. server : aws ec2 os : amazon linux python version : 2.7.10 $ pip --version pip 7.1.2 from /usr/local/lib/python2.7/site-packages (python 2.7) It's OK. But... $ sudo pip --version Traceback (most recent call last): File "/usr/bin/pip", line 5, in from pkg_resources import load_entry_point File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3020, in working_set = WorkingSet._build_master() File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 616, in _build_master return cls._build_from_requirements(__requires__) File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 629, in _build_from_requirements dists = ws.resolve(reqs, Environment()) File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 807, in resolve raise DistributionNotFound(req) pkg_resources.DistributionNotFound: pip==6.1.1
How to use Android shared library in Ubuntu
34,883,727
1
3
1,289
0
android,python,linux,shared-libraries
Most likely not. It's very probable that the Android device you pulled it from runs on the ARM architecture, and therefore the .so library was compiled for that architecture. Unless your desktop machine is also on the ARM architecture (it's most likely x86, and the match would have to be specific, such as ARMv7), the .so binary will be incompatible with your desktop. Depending on what the .so library actually is, you may be able to grab the source code and compile it for your x86 machine. Disclaimer: even if you obtain a library compiled for the same architecture as your desktop (from an x86 phone), there is no guarantee it will work. It may rely on other libraries provided only by Android, and this may be the start of a very deep rabbit hole.
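If the architectures do happen to match, calling into the library from Python could look like this minimal ctypes sketch (the library path, function name, and signature are hypothetical):

```python
import ctypes

# load the shared object (path is hypothetical; must match your CPU architecture)
lib = ctypes.CDLL('./libexample.so')

# declare the signature of a hypothetical exported function before calling it
lib.add_numbers.argtypes = [ctypes.c_int, ctypes.c_int]
lib.add_numbers.restype = ctypes.c_int

print(lib.add_numbers(2, 3))
```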
1
1
0
1
2016-01-19T17:47:00.000
1
0.197375
false
34,883,612
0
0
0
1
I have an .so file which I pulled from an Android APK (Not my app, so I don't have access to the source, just the library) I want to use this shared object on my 32 bit Ubuntu machine, and call some functions from it (Preferably with Python) . Is it possible to convert an Android .so to a Linux .so? Or is there any simple solution to accessing the functions in the .so without resorting to a hefty virtual machine or something? Thanks
how many docker containers should a java web app w/ database have?
34,886,874
1
3
1,322
0
python,tomcat,docker,application-server
You can use Docker Machine to create a Docker development environment on Mac or Windows, which is really good for trial and error; there is no need for an Ubuntu VM. A Docker container should do one thing only, so your application would consist of multiple containers, one for each component, and you've already clearly identified the different containers for your application. Here is how the workflow might look: create a Dockerfile for each of the Tomcat, nginx, postgres, and tornado containers; deploy the application to Tomcat in the Dockerfile or by mapping volumes; create an image for each container; optionally push these images to Docker Hub; if you plan to deploy these containers on multiple hosts, create an overlay network; finally, use Docker Compose to start these containers together (it will use the network created previously, or alternatively you can use --x-networking to have Docker Compose create the network).
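A minimal sketch of such a docker-compose.yml for the four containers, using the version-1 syntax of the --x-networking era (image tags and build paths are assumptions):

```yaml
tomcat:
  build: ./tomcat          # Dockerfile that copies the WAR into Tomcat's webapps dir
nginx:
  build: ./nginx
  ports:
    - "80:80"
  links:
    - tomcat
db:
  image: postgres:9.4
tornado:
  build: ./tornado         # Dockerfile that installs tornado and the python script
```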
0
1
0
0
2016-01-19T19:01:00.000
2
0.099668
false
34,884,896
0
0
1
1
I'm trying to "dockerize" my java web application and finally run the docker image on EC2. My application is a WAR file and connects to a database. There is also a python script which the application calls via REST. The python side uses the tornado webserver Question 1: Should I have the following Docker containers? Container for Application Server (Tomcat 7) Container for HTTP Server (nginx of httpd) Container for postgres db Container for python script (this will have tornado web server and my python script). Question 2: What is the best way to build dockerfile? I will have to do trial and error for what commands need to be put into the dockerfile for each container. Should I have an ubuntu VM on which I do trial and error and once I nail down which commands I need then put them into the dockerfile for that container?
Serving large files in AWS
34,899,601
0
0
191
1
python,mysql,amazon-web-services,nas
You can also use MongoDB, which provides several APIs; alternatively, you can store the files in an S3 bucket using multipart upload.
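A minimal sketch of the S3 route with boto3 (bucket name and paths are placeholders); upload_file switches to multipart upload automatically once the file exceeds the default size threshold, so a 300MB file is handled for you:

```python
import boto3

s3 = boto3.client('s3')

# multipart upload is used transparently for large files
s3.upload_file('/data/large_file.bin', 'my-files-bucket', 'large_file.bin')

# clients then fetch the object by key
s3.download_file('my-files-bucket', 'large_file.bin', '/tmp/large_file.bin')
```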
0
1
0
0
2016-01-20T09:09:00.000
1
0
false
34,895,738
0
0
1
1
As part of a big system, I'm trying to implement a service that (among other tasks) will serve large files (up to 300MB) to other servers (running in Amazon). This files service needs to have more than one machine up and running at each time, and there are also multiple clients. Service is written in Python, using Tornado web server. First approach was using MySQL, but I figured I'm going to have hell saving such big BLOBs, because of memory consumption. Tried to look at Amazon's EFS, but it's not available in our region. I heard about SoftNAS, and am currently looking into it. Any other good alternatives I should be checking?
Celery Retry not working on AWS Beanstalk running Docker ver.1.6.2(Multi container)
34,942,922
0
0
83
0
python,django,amazon-web-services,celery
Noob mistake: it turns out that I had another environment with similar code consuming from the same RabbitMQ server. It seems this other environment was picking up the retries.
0
1
0
0
2016-01-20T22:11:00.000
1
1.2
true
34,911,638
0
0
1
1
Am trying to implement retries in one of my celery tasks which works fine on my local development environment but doesn't execute retries when deployed to AWS beanstalk.
Python - Cannot upgrade six, issue uninstalling previous version
34,912,892
1
5
6,127
0
python,installation,pip,upgrade,six
I, too, have had some issues with installing modules, and I sometimes find that it helps just to start over. In this case, it looks like you already have some of the 'six' module, but it isn't properly set up, so if sudo pip uninstall six yields the same error, go into your directory and manually delete anything related to six, and then try installing it again. You may have to do some digging to find where your modules are stored (or have been stored, as pip can find them in different locations).
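Before deleting files by hand, it may be worth trying pip's own escape hatch for exactly this distutils case; this is a suggestion beyond the answer above, not a guaranteed fix:

```sh
# install the new six on top of the distutils-installed one
# instead of asking pip to uninstall what it cannot track
sudo pip install --ignore-installed six
```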
0
1
0
0
2016-01-20T23:40:00.000
2
0.099668
false
34,912,784
1
0
0
1
When I run sudo pip install --upgrade six I run into the issue below: 2016-01-20 18:29:48|optim $ sudo pip install --upgrade six Collecting six Downloading six-1.10.0-py2.py3-none-any.whl Installing collected packages: six Found existing installation: six 1.4.1 Detected a distutils installed project ('six') which we cannot uninstall. The metadata provided by distutils does not contain a list of files which have been installed, so pip does not know which files to uninstall. I have Python 2.7, and I'm on Mac OS X 10.11.1. How can I make this upgrade successful? (There are other kind of related posts, but they do not actually have a solution to this same error.) EDIT: I am told I can remove six manually by removing things from site-packages. These are the files in site-packages that begin with six: six-1.10.0.dist-info, six-1.9.0.dist-info, six.py, six.py. Are they all correct/safe to remove? EDIT2: I decided to remove those from site-packages, but it turns out the existing six that cannot be installed is actually in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python. There I see the files: six-1.4.1-py2.7.egg-info, six.py, six.pyc but doing rm on them (with sudo, even) gives Operation not permitted. So now the question is, how can I remove those files, given where they are?
Graceful reload of python tornado server
34,960,704
1
0
583
0
python,tornado,upgrade
Easy way: do it with nginx. Start a new tornado server running the latest code, redirect all new connections to the new tornado server (change the nginx configuration file and reload with nginx -s reload), then tell the old tornado server to shut itself down once all of its connections are closed. Hard way: if you want to change your server on the fly, you could try reading nginx's source code to figure out how nginx -s reload works, but I think that would be a lot of work.
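A sketch of the nginx side of the easy way (ports and the upstream name are assumptions): point the upstream at the freshly started tornado instance, then run nginx -s reload.

```nginx
# nginx.conf fragment
upstream tornado_app {
    server 127.0.0.1:8001;   # new tornado server (the old one listened on 8000)
}

server {
    listen 80;
    location / {
        proxy_pass http://tornado_app;
    }
}
```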
0
1
0
0
2016-01-22T14:44:00.000
1
0.197375
false
34,949,364
0
0
0
1
I have an HTTP server created by the Tornado framework. I need to update/reload this server without any connection lost and shutdown. I have no idea how to do it. Could you get me any clue?
Switching between Anaconda and Anaconda3
37,636,425
7
4
14,234
0
python,anaconda
use the "activate" batch file activate c:\anaconda3 activate c:\anaconda2
0
1
0
0
2016-01-24T01:58:00.000
3
1
false
34,971,379
1
0
0
2
Is there an easy way to switch between using Anaconda (Python 2) and Anaconda3 (Python 3) from the command line? I am on Windows 10.
Switching between Anaconda and Anaconda3
46,770,419
0
4
14,234
0
python,anaconda
If you are using Linux/Mac OS, edit your ~/.bashrc. For example, if you do not want to use anaconda3, comment out the line which adds path_to_anaconda3 to your system PATH.
0
1
0
0
2016-01-24T01:58:00.000
3
0
false
34,971,379
1
0
0
2
Is there an easy way to switch between using Anaconda (Python 2) and Anaconda3 (Python 3) from the command line? I am on Windows 10.
How to decrease to split data put in task queue, Google app engine with Python
34,976,107
0
0
96
0
python,google-app-engine,task-queue
Check the size of the payload (arguments) you are sending to the task queue. If it's more than a few KB in size, you need to store it in the datastore instead and send only the key of the object holding the data to the task queue.
0
1
0
0
2016-01-24T12:52:00.000
2
0
false
34,976,025
0
0
1
2
Encounter an error "RequestTooLargeError: The request to API call datastore_v3.Put() was too large.". After looking through the code, it happens on the place where it is using task queue. So how can I split a large queue task into several smaller ones?
How to decrease to split data put in task queue, Google app engine with Python
34,977,778
0
0
96
0
python,google-app-engine,task-queue
The maximum size of a task is 100KB. That's a lot of data. It's hard to give specific advice without looking at your code, but I would mention this: if you pass a collection to be processed in a task in a loop, then the obvious solution is to split the entire collection into smaller chunks, e.g. instead of passing 1000 entities to one task, pass 100 entities to each of 10 tasks. If you pass a collection to a task that cannot be split into chunks (e.g. you need to calculate totals, averages, etc.), then don't pass this collection, but query/retrieve it in the task itself. Every task is saved back to the datastore, so you don't win much by passing the collection to the task - it has to be retrieved from the datastore anyway. If you pass a very large object to a task, pass only the data that the task actually needs. For example, if your task sends an email message, you may want to pass Email, Name, and Message, instead of the entire User entity, which may include a lot of other properties. Again, 100KB is a lot of data. If you are not using a loop to process many entities in your task, the problem with the task queue may indicate a bigger problem with your data model in general, if you have to push around so much data every time; you may want to consider splitting huge entities into several smaller entities.
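A minimal sketch of the first suggestion, splitting one large task into several small ones (the handler URL and chunk size are arbitrary choices, not part of the answer):

```python
from google.appengine.api import taskqueue

def enqueue_in_chunks(keys, chunk_size=100):
    # instead of one task carrying 1000 entities, enqueue 10 tasks with 100 each
    for i in range(0, len(keys), chunk_size):
        chunk = keys[i:i + chunk_size]
        taskqueue.add(
            url='/worker/process',                     # hypothetical handler
            params={'key': [str(k) for k in chunk]})   # a list value becomes repeated params
```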
0
1
0
0
2016-01-24T12:52:00.000
2
1.2
true
34,976,025
0
0
1
2
Encounter an error "RequestTooLargeError: The request to API call datastore_v3.Put() was too large.". After looking through the code, it happens on the place where it is using task queue. So how can I split a large queue task into several smaller ones?
Run python script in a remote machines as a sudo user
36,052,105
0
1
1,341
0
python,unix,sudo,remote-server
The other way is to use paramiko, as below: un_con = paramiko.SSHClient(); un_con.set_missing_host_key_policy(paramiko.AutoAddPolicy()); un_con.connect(host, username=user, key_filename=keyfile); stdin, stdout, stderr = un_con.exec_command("sudo -H -u sudo_user bash -c 'command'")
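The same approach written out as a self-contained sketch (host, user, key file, and the remote command are placeholders):

```python
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('host.example.com', username='deploy',
               key_filename='/home/deploy/.ssh/id_rsa')

# -H resets HOME for the target user; bash -c runs the quoted command as that user
stdin, stdout, stderr = client.exec_command(
    "sudo -H -u appuser bash -c 'python /opt/scripts/daily_task.py'")

result = stdout.read()   # keep the output in a variable for verification later
client.close()
```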
0
1
0
1
2016-01-24T18:41:00.000
2
0
false
34,979,846
0
0
0
1
I pretty new to python programming, as part of my learning i had decided to start coding for a simple daily task which would save sometime, I'm done with most part of the script but now i see a big challenge in executing it , because i need to execute it a remote server with the sudo user access. Basically what i need is, login to remote system. run sudo su - user(no need of password as its a SSH key based login) run the code. logout with the result assigned to varible. I need the end result of the script stored in a variable so that i can use that back for verification.
Unable to find ipython after installation on Mac
34,984,334
0
1
2,531
0
python,macos,ipython
Try these shell commands (I'm using Debian and don't have a Mac). which ipython should show the directory where it is installed, e.g., /usr/bin/ipython. echo $PATH should show a list of paths where the system looks for programs to execute; it should include the location of ipython, e.g., /usr/bin for my example above. pip list should include 'ipython' in the list of installed packages. pip show ipython should show data about the installed ipython package. Your home directory should have a directory called '.ipython' if the package was installed (unless it was put somewhere else). If you don't find the program, maybe the install failed; try again and watch for error messages.
0
1
0
0
2016-01-25T01:34:00.000
2
0
false
34,983,832
1
0
0
1
I installed ipython using pip. I have python 2.7 on my Mac. However I am unable to start up ipython. [17:26:01] ipython -bash: ipython: command not found Then thinking that maybe I need to run it from within python I even tried that [17:28:10] python import ipython Traceback (most recent call last): File "", line 1, in ImportError: No module named ipython Any idea what is wrong? Thanks.
How to fully access windows machine as an administrator using python
34,989,256
0
1
963
0
python,windows,admin,administration,administrator
For me the easiest solution is an administrator terminal instance: press the Start/Windows button, enter the search field, type in cmd, and wait until cmd.exe is found under Programs; right-click on that program and click on the option to run it as an administrator. Now your terminal has administrator rights, and when you start a Python script inside that terminal, the Python interpreter also has admin rights.
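To verify from Python that the elevation actually worked, one commonly used Windows API call is shown below (a minimal sketch; Windows only):

```python
import ctypes

# shell32.IsUserAnAdmin() returns nonzero when the current process is elevated
if ctypes.windll.shell32.IsUserAnAdmin():
    print('running with administrator rights')
else:
    print('plain user - start the terminal as administrator first')
```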
0
1
0
0
2016-01-25T08:33:00.000
1
0
false
34,988,109
0
0
0
1
I am on a project in which I need to have full access of each directory or file in the windows file system,I am using python for it.But I cant modify or access some files and totally inaccessible the C:/ drive with python,showing "permission denied". I want to know is there any kind of way to get the full access as administrator using python,please suggest and help.
How to get a socket FD according to the port occupied in Python?
34,989,073
1
0
429
0
python,linux,sockets,subprocess
It seems like a permission issue. The subprocess is probably running as another user, and therefore you will not have access to the process. Use sudo ps xauw | grep [processname] to figure out under which user the daemon process is running.
0
1
1
0
2016-01-25T09:08:00.000
2
0.099668
false
34,988,678
0
0
0
1
In my program, A serve-forever daemon is restarted in a subprocess. The program itself is a web service, using port 5000 by default. I don't know the detail of the start script of that daemon, but it seems to inherit the socket listening on port 5000. So if I were to restart my program, I'll find that the port is already occupied by the daemon process. Now I am considering to fine tune the subprocess function to close the inherited socket FD, but I don't know how to get the FD in the first place.
Spyder Python IDE thinks remote kernel has died when it hasn't; any ways to prevent that?
34,993,859
0
1
635
0
python,amazon-ec2,ipython,spyder
I often have the same problem; the easiest and fastest fix I have for you at the moment is running the code in a new dedicated Python console every time. This can easily be done by: 1) clicking the run settings icon (the wrench with the green play/run button at the top of your screen), 2) selecting the second option (execute in a new dedicated Python console), 3) pressing OK. This will automatically run the code in a new console the next time you press the run file button (F5), and should prevent the error message.
0
1
0
0
2016-01-25T12:17:00.000
1
0
false
34,992,439
0
0
0
1
I'm running Spyder on Windows connecting remotely to and Amazon EC2 ipython kernel. Whenever I run some operations that take more than a few seconds to run, I get the repeated message It seems the kernel died unexpectedly. Use 'Restart kernel' to continue using this console. But my kernel is all fine and dandy. Sometimes I have to press Enter repeatedly to make it snap out of it, other times I have to restart the Spyder console and connect to my still-alive kernel. Any tips? Is there a way to disable the kernel-death check, or increasing the timeout? Thanks! :)
How to properly install python3 on Centos 7
35,004,508
0
3
2,685
0
python,centos
Do you have pip for python3, too? Try pip3 rather than pip. I assume your regular pip is just installing the modules for Python 2.x.
0
1
0
1
2016-01-25T23:32:00.000
2
0
false
35,004,466
0
0
0
1
I'm running Centos7 and it comes with Python2. I installed python3, however when I install modules with pip, python3 doesn't use them. I can run python3 by typing python3 at the CLI python (2.x) is located in /usr/bin/python python3 is located in /usr/local/bin/python3 I tried creating a link to python3 in /usr/bin/ as "python", but as expected, it didnt resolve anything. I renamed the current python to python2.bak It actually broke some command line functionality (tab to complete). I had to undo those changes to resolve. Suggestions welcome. Thanks.
create module for python package
35,494,125
0
0
108
0
python,python-2.7,module,lua,centos
I am not exactly sure what you mean by "how to create a module for kmos", and you didn't mention which shell you are using, but it will definitely be helpful to understand the mechanism behind finding executables and Python imports. If you want to execute the kmos command-line interface (e.g. kmos export ...) you need to make sure that wherever the kmos shell client is sits in your $PATH variable. When you installed kmos (pip install --user --upgrade kmos) it should tell you where it went; that directory needs to show up when you run echo $PATH, and it is most likely something like ~/.local/bin. If it doesn't show up, you may want to put export PATH=${PATH}:~/.local/bin into your ~/.bashrc (or the corresponding syntax into the configuration file of whatever shell echo $SHELL reports). The other location is where the Python module gets copied to; the pip installation should print this out as well, most likely something like ~/.local/lib/pythonXY/site-packages. When you run python -c "import sys; print(sys.path)" it should include the given directory. You can again add this directory automatically via your shell configuration file, like export PYTHONPATH=${PYTHONPATH}:~/.local/lib/pythonXY/site-packages. If you can already import kmos from Python, then python -c "import kmos; print(kmos.__file__)" will tell you where it found it.
0
1
0
0
2016-01-26T20:31:00.000
1
0
false
35,023,378
1
0
0
1
I have recently installed kmos, a python package using pip in my user account on my institute cluster. How to create a module for kmos and set the path to the directory such that python accesses the library. Currently, I am giving the path to the kmos binary while running the program. Linux distro: Cent OS Module support: Lua-based Lmod environmental modules system
Using Python GUI Editor on Ubuntu AWS
35,028,687
2
2
590
0
python-2.7,ubuntu,amazon-ec2
You could handle things a few ways, but I would simply mount the instance's filesystem locally, and keep a PuTTY (Windows) terminal open to execute commands remotely. Trying to install a GUI on the EC2 instance is probably more trouble than it's worth, and a waste of resources. In most cases, I build everything inside a local (small) Ubuntu Server VM while I'm working on it, until it's ready for some sort of deployment, before moving to an EC2 instance/DO droplet/what-have-you. The principles are basically the same - having to work with a machine that you don't have immediate full command of - and it's cheaper, to boot.
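One way to do the local mount from a Linux/Mac workstation is sshfs; the tool choice and all paths here are assumptions, not something the answer specifies:

```sh
# mount the instance's home directory at ~/ec2, then edit with any local GUI editor
sshfs -o IdentityFile=~/.ssh/aws_key.pem \
    ubuntu@ec2-1-2-3-4.compute-1.amazonaws.com:/home/ubuntu ~/ec2

# unmount when done
fusermount -u ~/ec2
```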
0
1
0
1
2016-01-27T02:13:00.000
1
0.379949
false
35,027,646
0
0
0
1
I have a server instance (Ubuntu) running on AWS EC2. What's the best way to use GUI-based Python editor (e.g., Spyder, Sublimetext, PyCharm) with that server instance?
Connecting to SFTP server via Windows' Command Prompt
35,033,131
1
1
15,990
0
python,windows,batch-file,sftp,fabric
The built-in FTP command doesn't have a facility for security. You can use WinSCP, a free open-source SFTP and FTP client for Windows.
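A minimal sketch of driving WinSCP from a batch file (host, credentials, and paths are placeholders; on the first connection you may also need the open command's -hostkey switch so the script does not stop at the host-key prompt):

```bat
rem upload.bat - scripted SFTP upload via WinSCP
winscp.com /command ^
    "open sftp://user:password@sftp.example.com/" ^
    "put C:\data\report.csv /incoming/" ^
    "exit"
```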
0
1
0
1
2016-01-27T09:09:00.000
2
0.099668
false
35,032,994
0
0
0
1
I'm wondering if there's any way to connect SFTP server with Windows' Command Prompt, by only executing batch file. Do I need to install additional software? which software? The purpose is to do pretty basic file operations (upload, delete, rename) on remote SFTP server by executing a batch file. And by the way, I have heard about python's Fabric library, and I wonder whether it's better solution than the batch script for the mentioned basic file operations? Thanks a lot!
Ipv6 UDP host address for bind
35,063,138
1
0
422
0
python,sockets,udp,raspberry-pi,ipv6
You can use host = 'fe80::ba27:ebff:fed4:5691', assuming you only have one link. Link-local addresses (link-local scope) are designed to be used for addressing on a single link for purposes such as automatic address configuration, neighbor discovery, or when no routers are present. Routers must not forward any packets with link-local source or destination addresses to other links. So if you are sending data from a server to a Raspberry Pi over one link, you can use the link-local scope for your IPv6 address. host = 'ff02::1:ffd4:5691' is in the link-local multicast scope; unless you have a reason to send multicast, there is no need for it.
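A minimal receive sketch using the link-local address; the %eth0 scope suffix names the interface and is an assumption (link-local addresses are ambiguous without a scope when a host has several interfaces):

```python
import socket

HOST = 'fe80::ba27:ebff:fed4:5691%eth0'  # link-local address plus interface scope
PORT = 5000

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.bind((HOST, PORT))

data, addr = sock.recvfrom(1024)
print('received %r from %s' % (data, addr))
```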
0
1
1
1
2016-01-27T15:52:00.000
1
1.2
true
35,042,006
0
0
0
1
I am interested to do socket programming. I would like send and receive Ipv6 UDP server socket programming for raspberry (conneted with ethernet cable and opened in Putty). After surfing coulpe of sites I have got confusion with IPv6 UDP host address. Which type of host address should I use to send and receive message ipv6 UDP message. is the link local address example: host ='fe80::ba27:ebff:fed4:5691';//link local address to Tx and Rx from Raspberry or host = 'ff02::1:ffd4:5691' Thank you so much. Regards, Mahesh
Run shell command in pdb mode
51,962,231
8
10
5,045
0
python,shell,pdb
Simply use the "os" module and you will able to easily execute any os command from within pdb. Start with: (Pdb) import os And then: (Pdb) os.system("ls") or even (Pdb) os.system("sh") the latest simply spawns a subshell. Exiting from it returns back to debugger. Note: the "cd" command will have no effect when used as os.system("cd dir") since it will not change the cwd of the python process. Use os.chdir("/path/to/targetdir") for that.
0
1
0
0
2016-01-27T16:02:00.000
3
1
false
35,042,198
0
0
0
1
I want to run cd and ls in python debugger. I try to use !ls but I get *** NameError: name 'ls' is not defined
Azure Service Bus SDK for Python results in Read Timeout when sending a message to topic
35,061,648
-1
1
423
0
python,azure,azureservicebus
In my experience, I think it's a program-flow problem in your embedded application. You could try adding a test function that pings the service bus host every few seconds until the network is fine, returning a boolean so that you start a new connection after the device switches network adaptors. Meanwhile, if the ping fails more than a specified number of times, call a shell command like service network restart or ifconfig <eth-id> down && ifconfig <eth-id> up to restart the related network adaptor. It's just an idea; could you supply some code so we can provide more useful help?
0
1
0
0
2016-01-27T18:48:00.000
1
-0.197375
false
35,045,604
0
0
0
1
In my application, I send a message to a topic based on a local event. This works quite well until I run into a network issue. On the network side, my device is going through an access point that provides primary/secondary connection to the internet. The primary connection is through an ADSL line but if that fails, it switches over to an LTE network. When the switch-over occurs, the IP address of my device stays unchanged (as that is on the local network and assigned through DHCP). When this switch-over occurs, I find that there is an error with the send command. I get my local event and try to send a message to the service bus. The first send results in a 'ReadTimeout' but a subsequent send is fine. I then get another local event and try another send and the process repeats itself. If I reboot the device then everything works fine. Here is the stack-trace: File "/usr/sbin/srvc/sb.py", line 420, in ReadTimeout: HTTPSConnectionPool(host='****.servicebus.windows.net', port= 443): Read timed out. (read timeout=65) Traceback (most recent call last): File "/usr/sbin/srvc/sb.py", line 420, in peek_lock=False, timeout=sb_timeout) File "/usr/local/lib/python2.7/dist-packages/azure/servicebus/servicebusservic e.py", line 976, in receive_subscription_message timeout) File "/usr/local/lib/python2.7/dist-packages/azure/servicebus/servicebusservic e.py", line 762, in read_delete_subscription_message response = self._perform_request(request) File "/usr/local/lib/python2.7/dist-packages/azure/servicebus/servicebusservic e.py", line 1109, in _perform_request resp = self._filter(request) File "/usr/local/lib/python2.7/dist-packages/azure/servicebus/_http/httpclient .py", line 181, in perform_request self.send_request_body(connection, request.body) File "/usr/local/lib/python2.7/dist-packages/azure/servicebus/_http/httpclient .py", line 145, in send_request_body connection.send(None) File "/usr/local/lib/python2.7/dist-packages/azure/servicebus/_http/requestscl ient.py", line 81, in send self.response = self.session.request(self.method, self.uri, data=request_bod y, headers=self.headers, timeout=self.timeout) File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 457, in req uest resp = self.send(prep, **send_kwargs) File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 569, in sen d r = adapter.send(request, **kwargs) File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 422, in sen d raise ReadTimeout(e, request=request) ReadTimeout: HTTPSConnectionPool(host='****.servicebus.windows.net', port= 443): Read timed out. (read timeout=65)
How to select which version of python I am running on Linux?
35,047,953
2
3
7,476
0
python,linux
Use update-alternatives --config python and choose python2.7 from the choices. If you need to remove it, use update-alternatives --remove python /usr/bin/python2.7.
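Note that --config only lists interpreters that were registered beforehand; a fuller sketch, with interpreter paths assumed from the question, might be:

```sh
# register both interpreters, each with a priority
sudo update-alternatives --install /usr/bin/python python /usr/bin/python2.6 1
sudo update-alternatives --install /usr/bin/python python /usr/local/bin/python2.7 2

# then pick one interactively
sudo update-alternatives --config python
```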
0
1
0
0
2016-01-27T20:48:00.000
4
0.099668
false
35,047,691
1
0
0
1
The version of Linux I am working on has python 2.6 by default, and we installed 2.7 on it in a separate folder. If I want to run a .py script, how do I tell it to use 2.7 instead of the default?
python script to edit a file in vim
35,048,982
4
0
1,493
0
python,python-3.x,vim
It's not clear why you want to do this. To truly run an interactive program, you'll have to create a pseudo-tty and manage it from your Python script - not for the faint of heart. If you just want to insert text into an existing file, you can do that directly from Python, using the file commands. Or you could invoke a program like sed, the "stream editor", which is intended to do file editing in a scripted fashion. The sed command supports a lot of the ex command set (which is the same base command set that vi uses), so i, c, s, g, and a all work.
0
1
0
0
2016-01-27T21:59:00.000
3
0.26052
false
35,048,891
1
0
0
1
I want to make a python script that: opens a file, executes the command i, then writes 2 lines of code, hits escape executes the command ZZ. I was thinking along the lines of os.system("vi program") then os.system("i") and os.system("code"), but that didn't work because you can only execute commands. Thank you!
Run python program from terminal
35,049,070
0
1
7,570
0
python,terminal
When you type "python", your path is searched to run this version. But, if you specify the absolute path of the other python, you run it the way you want it. Here, in my laptop, I have /home/user/python3_4 and /home/user/python2_7. If I type python, the 3.4 version is executed, because this directory is set in my path variable. When I want to test some scripts from the 2.7 version, I type in the command line: /home/user/python2_7/bin/python script.py. (Both directory were chosen by me. It's not the default for python, of course). I hope it can help you.
0
1
0
0
2016-01-27T22:06:00.000
3
0
false
35,048,996
1
0
0
2
I have downloaded a python program from git. This program is python 3. On my laptop i have both python 2.7 and python 3.4. Python 2.7 is default version. when i want run this program in terminal it gives some module errors because of it used the wrong version. how can i force an name.py file to open in an (non) default version of python. I have tried so search on google but this without any result because of lack of search tags. also just trying things like ./name.py python3 but with same result(error)
Run python program from terminal
35,964,107
0
1
7,570
0
python,terminal
The method of @Tom Dalton and @n1c9 works for me! python3 name.py
0
1
0
0
2016-01-27T22:06:00.000
3
1.2
true
35,048,996
1
0
0
2
I have downloaded a python program from git. This program is python 3. On my laptop i have both python 2.7 and python 3.4. Python 2.7 is default version. when i want run this program in terminal it gives some module errors because of it used the wrong version. how can i force an name.py file to open in an (non) default version of python. I have tried so search on google but this without any result because of lack of search tags. also just trying things like ./name.py python3 but with same result(error)
Cannot install Python Package with docker-compose
35,066,625
12
6
9,159
0
python,docker,docker-compose
It looks like you ran the pip install in a one-off container. That means your package isn't going to be installed in subsequent containers created with docker-compose up or docker-compose run. You need to install your dependencies in the image, usually by adding the pip install command to your Dockerfile. That way, all containers created from that image will have the dependencies available.
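A minimal sketch of baking the dependency into the image (the base image and project layout are assumptions):

```dockerfile
FROM python:2.7
WORKDIR /app

# install dependencies at build time so every container created from
# this image already has them (requirements.txt lists django-extra-views etc.)
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```

Rebuild the image with docker-compose build before the next docker-compose up.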
0
1
0
0
2016-01-28T16:06:00.000
1
1.2
true
35,066,307
0
0
1
1
I am running a Django project with docker. Now I want to install a Python package inside the Docker container and run the following command: docker-compose django run pip install django-extra-views Now when I do docker-compose up, I get an error ImportError: No module named 'extra_views'. docker-compose django run pip freeze doesn't show the above package either. Am I missing something?
Performing a blocking request in django view
35,083,287
2
5
2,469
0
python,django,celery
The usual solution here is to offload the task to celery, and return a "please wait" response in your view. If you want, you can then use an Ajax call to periodically hit a view that will report whether the response is ready, and redirect when it is.
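A minimal sketch of that pattern (module layout, names, and URL wiring are placeholders):

```python
# tasks.py
from celery import shared_task
import requests

@shared_task
def fetch_remote(url):
    # the slow network IO happens in the worker, not in the request cycle
    return requests.get(url).json()

# views.py
from celery.result import AsyncResult
from django.http import JsonResponse
from tasks import fetch_remote

def start(request):
    task = fetch_remote.delay(request.GET['url'])
    return JsonResponse({'task_id': task.id})   # client polls with this id

def poll(request, task_id):
    result = AsyncResult(task_id)
    if result.ready():
        return JsonResponse({'status': 'done', 'data': result.get()})
    return JsonResponse({'status': 'pending'})
```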
0
1
0
0
2016-01-29T11:15:00.000
3
0.132549
false
35,083,133
0
0
1
1
In one of the views in my django application, I need to perform a relatively lengthy network IO operation. The problem is other requests must wait for this request to be completed even though they have nothing to do with it. I did some research and stumbled upon Celery but as I understand, it is used to perform background tasks independent of the request. (so I can not use the result of the task for the response to the request) Is there a way to process views asynchronously in django so while the network request is pending other requests can be processed? Edit: What I forgot to mention is that my application is a web service using django rest framework. So the result of a view is a json response not a page that I can later modify using AJAX.
Vim python3 integration on mac
35,093,602
1
2
3,440
0
macos,python-3.x,vim
Finally found the solution - $ brew install vim --with-python3
0
1
0
0
2016-01-29T14:29:00.000
3
0.066568
false
35,086,949
1
0
0
3
I'm trying to work out to integrate Python3 into Vim, I know I need to do it when compiling vim but I cant seem to get it right. I'm using homebrew to install with the following script: brew install vim --override-system-vim --with-python3 It installs vim however when i check the version, python3 is still not supported.
Vim python3 integration on mac
47,591,845
1
2
3,440
0
macos,python-3.x,vim
I thought I had the same issue but realised I needed to restart the shell. If the problem still persists, it may be that you have older versions that homebrew is still trying to install; brew cleanup will remove older bottles and perhaps allow you to install the latest. If this is still giving you trouble, I found that removing vim with brew uninstall --force vim and then reinstalling with brew install vim --override-system-vim --with-python3 worked for me. EDIT 2018-08-22: Python 3 is now the default when compiling vim, so the command below should integrate vim with Python 3 automatically: brew install vim --override-system-vim
0
1
0
0
2016-01-29T14:29:00.000
3
0.066568
false
35,086,949
1
0
0
3
I'm trying to work out to integrate Python3 into Vim, I know I need to do it when compiling vim but I cant seem to get it right. I'm using homebrew to install with the following script: brew install vim --override-system-vim --with-python3 It installs vim however when i check the version, python3 is still not supported.
Vim python3 integration on mac
56,487,676
0
2
3,440
0
macos,python-3.x,vim
This worked for me with the latest macOS as of this date; hope it works for you: brew install vim python3
0
1
0
0
2016-01-29T14:29:00.000
3
0
false
35,086,949
1
0
0
3
I'm trying to work out to integrate Python3 into Vim, I know I need to do it when compiling vim but I cant seem to get it right. I'm using homebrew to install with the following script: brew install vim --override-system-vim --with-python3 It installs vim however when i check the version, python3 is still not supported.
Google Cloud Debugger for Python App Engine module says "Deployment revision unknown"
35,107,768
1
0
147
0
google-app-engine,google-app-engine-python,google-cloud-debugger
It looks like you did everything correctly. The "Failed to update the snapshot" error shows up when there is some problem on the Cloud Debugger backend. Please contact the Cloud Debugger team through cdbg-feedback@google.com or submit a feedback report in the Google Developer Console.
0
1
0
0
2016-01-29T17:46:00.000
1
1.2
true
35,090,793
0
0
1
1
I'm trying to get the Google Cloud Debugger to work on my Python App Engine module. I've followed the instructions and: Connected to my Bitbucket hosted repository. Generated the source-context.json and source-contexts.json using gcloud preview app gen-repo-info-file Uploaded using appcfg.py update However when I try to set a snapshot using the console, there is message saying: The selected debug target does not have source revision information. The source shown here may not match the deployed source. And when I try to set the snapshot point, I get the error: Failed to update the snapshot
How to run another python file when a python file is finished
35,123,125
2
0
1,210
0
python,scheduled-tasks
The easiest way is going to be to do this in the shell, not in pure Python. Just run python test1.py && python test2.py or python test1.py; python test2.py. The one with && won't run test2.py if test1.py fails, while the one using ; will run both regardless.
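If the second script must be started from Python itself (for example from a single scheduler entry), a minimal sketch of the same && behaviour:

```python
import subprocess
import sys

# run test1.py to completion, then start test2.py only if it exited cleanly
ret = subprocess.call([sys.executable, 'test1.py'])
if ret == 0:
    subprocess.call([sys.executable, 'test2.py'])
```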
0
1
0
0
2016-02-01T03:52:00.000
3
0.132549
false
35,122,755
1
0
0
1
For example, I have two python file test1.py and test2.py. At first, test1.py will run. And I want test2.py to be run when the test1.py is finished. I want the two python files run in different shell. That means test1.py should be closed when it is finished. All the help is appreciated! Thank you! I want this task to be some kind of scheduler task. At 12:00 pm the test1.py is executed. And after test1.py is finished, I want to execute test2.py automatically
Fail to scrapyd-deploy
48,967,865
0
1
1,047
0
python,scrapy,scrapyd
Facing the same issue, I found the solution faster by reviewing scrapyd's error log. The logs are likely located in a folder like /tmp/scrapydeploy-{six random letters}/. Check out stderr. Mine contained a permissions error: IOError: [Errno 13] Permission denied: '/usr/lib/python2.7/site-packages/binary_agilo-1.3.15-py2.7.egg/EGG-INFO/entry_points.txt'. This happens to be a package that was installed system-wide last week, leading to scrapyd-deploy failing to execute. Removing the package fixes the issue. (Instead, the binary_agilo package is now installed in a virtualenv.)
0
1
0
0
2016-02-01T07:05:00.000
2
0
false
35,124,720
0
0
1
1
Traceback (most recent call last): File "/usr/local/bin/scrapyd-deploy", line 273, in main() File "/usr/local/bin/scrapyd-deploy", line 95, in main egg, tmpdir = _build_egg() File "/usr/local/bin/scrapyd-deploy", line 240, in _build_egg retry_on_eintr(check_call, [sys.executable, 'setup.py', 'clean', '-a', 'bdist_egg', '-d', d], stdout=o, stderr=e) File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/python.py", line 276, in retry_on_eintr return function(*args, **kw) File "/usr/lib/python2.7/subprocess.py", line 540, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['/usr/bin/python', 'setup.py', 'clean', '-a', 'bdist_egg', '-d', '/tmp/scrapydeploy-sV4Ws2']' returned non-zero exit status 1
Jupyter From Cmd Line in Windows
65,552,980
0
22
60,064
0
cmd,ipython,jupyter
To add Jupyter as a Windows CLI command, you need to add "C:\Users\user\AppData\Roaming\Python\Python38\Scripts" to your environment PATH. This solved it for me.
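One way to do that for the current user from a command prompt is shown below (the Scripts path must match your own Python version; note that setx writes a truncated copy if PATH is very long, so the GUI environment-variables dialog is the safer route):

```bat
setx PATH "%PATH%;C:\Users\user\AppData\Roaming\Python\Python38\Scripts"
```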
0
1
0
0
2016-02-01T15:28:00.000
8
0
false
35,134,225
1
0
0
6
I cannot get jupyter running from my Command line using: jupyter notebook jupyter is not recognised as an internal or external command, operable program or batch file' But I can get it running from pycharm (slick but with issues). When I take the kernel's IP and Port from pycharm and paste it into my browser I can get it running from there. I cannot use anaconda because of Arcpy, and I have dug around the jupyter files for some hints. I'm assuming I need to add something to my path?
Jupyter From Cmd Line in Windows
61,023,501
4
22
60,064
0
cmd,ipython,jupyter
This is an old question, but try using python -m notebook. This was the only way I was able to get Jupyter to start after installing it from the Windows 10 command line using pip. I didn't try touching the PATH.
0
1
0
0
2016-02-01T15:28:00.000
8
0.099668
false
35,134,225
1
0
0
6
I cannot get jupyter running from my Command line using: jupyter notebook jupyter is not recognised as an internal or external command, operable program or batch file' But I can get it running from pycharm (slick but with issues). When I take the kernel's IP and Port from pycharm and paste it into my browser I can get it running from there. I cannot use anaconda because of Arcpy, and I have dug around the jupyter files for some hints. I'm assuming I need to add something to my path?
Jupyter From Cmd Line in Windows
49,664,386
-1
22
60,064
0
cmd,ipython,jupyter
Go to the Anaconda Command Prompt, type jupyter notebook, and wait about 30 seconds; your localhost site will open automatically.
0
1
0
0
2016-02-01T15:28:00.000
8
-0.024995
false
35,134,225
1
0
0
6
I cannot get jupyter running from my Command line using: jupyter notebook jupyter is not recognised as an internal or external command, operable program or batch file' But I can get it running from pycharm (slick but with issues). When I take the kernel's IP and Port from pycharm and paste it into my browser I can get it running from there. I cannot use anaconda because of Arcpy, and I have dug around the jupyter files for some hints. I'm assuming I need to add something to my path?
Jupyter From Cmd Line in Windows
44,598,275
0
22
60,064
0
cmd,ipython,jupyter
For future reference: the first hurdle of starting with Python is to install it. I downloaded the Anaconda 4.4 for Windows, Python 3.6 64-bit installer. After sorting out the first hurdle of updating the "path" environment variable, and running (at the Python prompt) "import pip", all the instructions I found to install the IPython Notebook generated errors. Submitting the commands "ipython notebook" or "jupyter notebook" from the Windows Command Prompt or the Python prompt generated error messages. Then I found that the Anaconda installation consists of a host of applications, one of them being the "Jupyter Notebook" application accessible from the Start menu. This application launches (first a shell, then) a browser page. The application points to a shortcut in a directory set during the Anaconda installation; the shortcut itself refers to a few locations. Ready for the next hurdle.
0
1
0
0
2016-02-01T15:28:00.000
8
0
false
35,134,225
1
0
0
6
I cannot get jupyter running from my Command line using: jupyter notebook jupyter is not recognised as an internal or external command, operable program or batch file' But I can get it running from pycharm (slick but with issues). When I take the kernel's IP and Port from pycharm and paste it into my browser I can get it running from there. I cannot use anaconda because of Arcpy, and I have dug around the jupyter files for some hints. I'm assuming I need to add something to my path?
Jupyter From Cmd Line in Windows
59,917,360
0
22
60,064
0
cmd,ipython,jupyter
If you use Python 3, try running the command from your virtual environment and/or the Anaconda Prompt instead of your computer's OS CMD.
0
1
0
0
2016-02-01T15:28:00.000
8
0
false
35,134,225
1
0
0
6
I cannot get jupyter running from my Command line using: jupyter notebook jupyter is not recognised as an internal or external command, operable program or batch file' But I can get it running from pycharm (slick but with issues). When I take the kernel's IP and Port from pycharm and paste it into my browser I can get it running from there. I cannot use anaconda because of Arcpy, and I have dug around the jupyter files for some hints. I'm assuming I need to add something to my path?
Jupyter From Cmd Line in Windows
48,469,315
8
22
60,064
0
cmd,ipython,jupyter
Try to open it using the Anaconda Prompt: just type jupyter notebook and press Enter. The Anaconda Prompt has existed for a long time and is the correct way of using Anaconda. Maybe you have a broken installation somehow; if the above doesn't work, try this: in the Command Prompt, type pip3 install jupyter if you're using Python 3, or pip install jupyter if you are using Python 2.7. Some installation should happen. Now retry typing jupyter notebook in CMD; it should work now.
0
1
0
0
2016-02-01T15:28:00.000
8
1
false
35,134,225
1
0
0
6
I cannot get jupyter running from my Command line using: jupyter notebook jupyter is not recognised as an internal or external command, operable program or batch file' But I can get it running from pycharm (slick but with issues). When I take the kernel's IP and Port from pycharm and paste it into my browser I can get it running from there. I cannot use anaconda because of Arcpy, and I have dug around the jupyter files for some hints. I'm assuming I need to add something to my path?
Update strategy Python application + Ember frontend on BeagleBone
35,147,597
0
1
141
0
python,deployment,updates,beagleboneblack,yocto
A natural strategy would be to make use of the package manager also used for the rest of the system. The various package managers of Linux distributions are not closed systems: you can create your own package repository containing just your application/scripts and add it as a package source on your target. Your "updater" would work on top of that. This is also a route you can take when using Yocto.
0
1
0
1
2016-02-01T17:02:00.000
1
0
false
35,136,140
0
0
1
1
For the moment I've created an Python web application running on uwsgi with a frontend created in EmberJS. There is also a small python script running that is controlling I/O and serial ports connected to the beaglebone black. The system is running on debian, packages are managed and installed via ansible, the applications are updated also via some ansible scripts. With other words, updates are for the moment done by manual work launching the ansible scripts over ssh. I'm searching now a strategy/method to update my python applications in an easy way and that can also be done by our clients (ex: via webinterface). A good example is the update of a router firmware. I'm wondering how I can use a similar strategy for my python applications. I checked Yocto where I can build my own linux with but I don't see how to include my applications in those builds, and I don't wont to build a complete image in case of hotfixes. Anyone who has a similar project and that would like to share with me some useful information to handle some upgrade strategies/methods?
How to delete a file in a specific folder in Python?
35,143,988
1
0
2,385
0
python,file,directory
The default folder is your current working directory, likely where you started your Python interpreter. You can check it with print(os.getcwd()). To change the current working directory, you can run os.chdir('C:/MyFolder'), where you can swap C:/MyFolder for any path you want.
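Putting the two together, a sketch that deletes a file in a specific folder without changing directory (folder and file name are placeholders):

```python
import os

folder = 'C:/MyFolder'
target = os.path.join(folder, 'myfile.txt')

if os.path.exists(target):   # avoid an error if the file is already gone
    os.remove(target)
```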
0
1
0
0
2016-02-02T02:16:00.000
1
0.197375
false
35,143,935
1
0
0
1
I know you can use os.remove(myfile) to delete files. But what is the default folder location of this file? How do I change the folder directory?
Convert python script to dmg program from windows
35,152,961
0
3
7,803
0
python,ios,py2exe
Use www.pyinstaller.org. This piece of software makes executables from Python scripts for Windows, Linux and OS X.
0
1
0
0
2016-02-02T11:46:00.000
2
0
false
35,152,616
0
0
0
1
I have a Python script I would like to transform into an executable for Windows and a dmg file to be run on Apple computers. For the Windows systems I have found py2exe (only valid for Windows) and for the Apple ones py2app (it can only be run on Io systems). My question is whether there is some way to create a dmg file of the Python script running a program from a Windows system (even though the program cannot be run). Is it possible? Thank you in advance!
How to install a second/third/python on my system?
35,175,640
0
1
49
0
python,packaging,software-distribution,software-packaging
I use virtualenv for multiple Python installations and setuptools for packaging (via pip).
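A minimal sketch of that combination for the /opt layout in the question; note that a virtualenv is tied to the absolute path it was created at, so the zip must be unpacked to the same path on the customer machine:

```sh
# self-contained interpreter + packages under /opt/myPythonProject
virtualenv /opt/myPythonProject
/opt/myPythonProject/bin/pip install -r requirements.txt

# add your own code, then ship the whole tree
cp -r mycode /opt/myPythonProject/mycode
cd /opt && zip -r myPythonProject.zip myPythonProject
```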
0
1
0
0
2016-02-03T11:01:00.000
3
0
false
35,175,209
1
0
0
1
How can I install (on Linux) a plain Python distribution to e.g. /opt/myPythonProject/python? When I afterwards install packages (e.g. pip) all packages should go in /opt/myPythonProject. It should simply ignore system python and it's packages. My ultimate goal is to place my own code in /opt/myPythonProject/mycode, then zip op the entire project root directory, to deploy it on customer machine. Does this in general work (assuming my own arch/OS/etc. is the same). So the bigger question is: can I deliver python/packages/my own code in 1 big zip? If yes, what do I need to take into account? If not: what is the easiest solution to distribute a Python application together with the runtimes/packages and to get this deployed as application user (non root).
Creating Contingency Solution Output File for PSS/E using Python 2.7
36,043,728
0
1
1,112
0
python,python-2.7
@Magalhaes, the auxiliary files *.sub, *.mon and *.con are input files. You have to write them; PSSE doesn't generate them. Your recording shows that you defined a bus subsystem twice, generated a *.dfx from existing auxiliary files, ran an AC contingency solution, then generated an *.acc report. So when you did this recording, you must have started with already existing auxiliary files.
0
1
0
0
2016-02-03T17:16:00.000
1
1.2
true
35,183,538
0
1
0
1
I'm using python to interact with PSS/E (siemens software) and I'm trying to create *.acc file for pss/e, from python. I can do this easily using pss/e itself: 1 - create *.sub, *.mon, *.con files 2 - create respective *.dfx file 3 - and finally create *.acc file The idea is to perform all these 3 tasks automatically, using python. So, using the record tool from pss/e I get this code: psspy.bsys(0,0,[ 230., 230.],1,[1],0,[],0,[],0,[]) psspy.bsys(0,0,[ 230., 230.],1,[1],0,[],0,[],0,[]) psspy.dfax([1,1],r"""PATH\reports.sub""",r"""PATH\reports.mon""",r"""PATH\reports.con""",r"""PATH\reports.dfx""") psspy.accc_with_dsp_3( 0.5,[0,0,0,1,1,2,0,0,0,0,0],r"""IEEE""",r"""PATH\reports.dfx""",r"""PATH\reports.acc""","","","") psspy.accc_single_run_report_4([1,1,2,1,1,0,1,0,0,0,0,0],[0,0,0,0,6000],[ 0.5, 5.0, 100.0,0.0,0.0,0.0, 99999.],r"""PATH\reports.acc""") It happens that when I run this code on python, the *.sub, *.mon, *.con and *.dfx files are not created thus the API accc_single_run_report_4() reports an error. Can anyone tell me why these files aren't being created with this code? Thanks in advance for your time
How to check whether or not a python script is up?
35,202,372
1
0
270
0
python,linux,python-3.x
You can use runit, supervisor, monit, or systemd (I think). Do not hack this with a script.
0
1
0
1
2016-02-04T13:23:00.000
5
0.039979
false
35,202,184
0
0
0
3
I want to make sure my python script is always running, 24/7. It's on a Linux server. If the script crashes I'll restart it via cron. Is there any way to check whether or not it's running?
How to check whether or not a python script is up?
35,202,314
1
0
270
0
python,linux,python-3.x
Create a script (say check_process.sh) which will: find the process id for your Python script by using the ps command and save it in a variable, say pid; create an infinite loop; inside it, search for your process; if found, sleep for 30 or 60 seconds and check again; if the pid is not found, exit the loop and send a mail to your mail id saying that the process is not running. Now launch check_process.sh with nohup so it runs in the background continuously. I implemented this a while back and remember it worked fine.
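A minimal sketch of such a watchdog (the script name and mail address are placeholders, and pgrep stands in for the ps/pid bookkeeping described above):

```sh
#!/bin/sh
# check_process.sh - mail once and exit if the script disappears
while true; do
    if ! pgrep -f "my_script.py" > /dev/null; then
        echo "my_script.py is not running" | mail -s "process down" admin@example.com
        exit 1
    fi
    sleep 60
done
```

Start it with nohup ./check_process.sh & so it keeps running after you log out.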
0
1
0
1
2016-02-04T13:23:00.000
5
0.039979
false
35,202,184
0
0
0
3
I want to make sure my python script is always running, 24/7. It's on a Linux server. If the script crashes I'll restart it via cron. Is there any way to check whether or not it's running?
How to check whether or not a python script is up?
35,202,268
1
0
270
0
python,linux,python-3.x
Try this, substituting your script name: ps aux | grep SCRIPT_NAME
0
1
0
1
2016-02-04T13:23:00.000
5
0.039979
false
35,202,184
0
0
0
3
I want to make sure my python script is always running, 24/7. It's on a Linux server. If the script crashes I'll restart it via cron. Is there any way to check whether or not it's running?
Python shell is open but not .py file?
35,233,580
0
0
88
0
python,shell,python-idle
When you double click on a .py file, it will run it using the Python Interpreter. You can right-click on the file instead and choose to open it in IDLE.
0
1
0
0
2016-02-05T21:01:00.000
1
0
false
35,233,419
1
0
0
1
I am using python 3.4.2 IDLE on windows. When I open the IDLE shell and then open .py file, then it works, but when I try to open the .py file by double cliking, it just doesn't open, or proceed anything. Looks like as if nothing has happened. I would like to open .py file and then just press F5 to see what is going on rather than individually open all the file (I am still beginner to python, I know I can use pycharm, but at this point, just that would be good enough)
Is there a better way to control gdb other than using command-line tools/libraries such as pexpect in python?
35,244,385
0
0
137
0
python,multithreading,gdb,pexpect
I found a library called python-ptrace; it seems to work for now (with a few problems I never faced while using gdb).
0
1
0
0
2016-02-05T23:05:00.000
2
0
false
35,235,049
1
0
0
2
I'm trying to develop a program that uses gdb for its basic debugging purposes. It executes gdb from the command line, attaches to the target process, gives some specific commands, then reads the standard output. Everything seemed good on paper at first, so I started out with Python and pexpect. But recently, while thinking about future implementations, I've encountered a problem. Since I can only execute one command at a time from the command line (there can be only one gdb instance per process), the threads that constantly request data to refresh some UI element will eventually lead to chaos. Think about it: 1) GDB stops the program to execute commands; 2) it blocks the other threads while executing the code; 3) GDB continues the program after execution finishes; 4) one of the waiting threads will try to use GDB immediately; 5) go to 1 and repeat. The process we'll work on will freeze every 0.5 sec, which would be unbearable. So the thing I want to achieve is multi-threading while executing the commands. How can I do it? I thought about using gdb libraries, but since I use Python and that code is written in C, it left a question mark in my head about compatibility.
Is there a better way to control gdb other than using command-line tools/libraries such as pexpect in python?
35,245,056
1
0
137
0
python,multithreading,gdb,pexpect
There are two main ways to script gdb. One way is to use the gdb MI ("Machine Interface") protocol. This is a specialized input and output mode that gdb has that is intended for programmatic use. It has some warts but is "usable enough" - it is what most of the gdb GUIs use. The other way to do this is to write Python scripts that run inside gdb, using gdb's Python API. This approach is often simpler to program, but on the downside the Python API is missing some useful pieces, so sometimes this can't be done, depending on exactly what you're trying to accomplish.
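As a sketch of the second approach, a small script loaded into gdb with source myscript.py could define a custom command like this (the command name is made up):

    import gdb  # this module only exists inside a running gdb process

    class CountThreads(gdb.Command):
        """Illustrative user command that prints the number of threads."""
        def __init__(self):
            super(CountThreads, self).__init__("count-threads", gdb.COMMAND_USER)

        def invoke(self, arg, from_tty):
            threads = gdb.selected_inferior().threads()
            gdb.write("%d threads\n" % len(threads))

    CountThreads()  # registering makes 'count-threads' usable at the gdb prompt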
0
1
0
0
2016-02-05T23:05:00.000
2
1.2
true
35,235,049
1
0
0
2
I'm trying to develop a program that uses gdb for its basic debugging purposes. It executes gdb from the command line, attaches to the target process, gives some specific commands, then reads the standard output. Everything seemed good on paper at first, so I started out with Python and pexpect. But recently, while thinking about future implementations, I've encountered a problem. Since I can only execute one command at a time from the command line (there can be only one gdb instance per process), the threads that constantly request data to refresh some UI element will eventually lead to chaos. Think about it: 1) GDB stops the program to execute commands; 2) it blocks the other threads while executing the code; 3) GDB continues the program after execution finishes; 4) one of the waiting threads will try to use GDB immediately; 5) go to 1 and repeat. The process we'll work on will freeze every 0.5 sec, which would be unbearable. So the thing I want to achieve is multi-threading while executing the commands. How can I do it? I thought about using gdb libraries, but since I use Python and that code is written in C, it left a question mark in my head about compatibility.
Conda command not found
70,835,470
0
135
435,658
0
python,zsh,anaconda,miniconda
I have encountered this problem lately and I found a solution that worked for me. It is possible that your current user does not have permissions to the anaconda directory, so check whether you can read/write there; if not, change the owner of the files with chown.
0
1
0
0
2016-02-06T20:58:00.000
21
0
false
35,246,386
1
0
0
8
I've installed Miniconda and have added the environment variable export PATH="/home/username/miniconda3/bin:$PATH" to my .bashrc and .bash_profile but still can't run any conda commands in my terminal. Am I missing another step in my setup? I'm using zsh by the way.
Conda command not found
66,254,477
1
135
435,658
0
python,zsh,anaconda,miniconda
export PATH="$HOME/anaconda3/bin:$PATH" (use $HOME rather than ~ here - tilde expansion does not happen inside double quotes)
0
1
0
0
2016-02-06T20:58:00.000
21
0.009524
false
35,246,386
1
0
0
8
I've installed Miniconda and have added the environment variable export PATH="/home/username/miniconda3/bin:$PATH" to my .bashrc and .bash_profile but still can't run any conda commands in my terminal. Am I missing another step in my setup? I'm using zsh by the way.
Conda command not found
51,863,203
5
135
435,658
0
python,zsh,anaconda,miniconda
I had the same issue. I just closed and reopened the terminal, and it worked. That was because I installed anaconda with the terminal open.
0
1
0
0
2016-02-06T20:58:00.000
21
0.047583
false
35,246,386
1
0
0
8
I've installed Miniconda and have added the environment variable export PATH="/home/username/miniconda3/bin:$PATH" to my .bashrc and .bash_profile but still can't run any conda commands in my terminal. Am I missing another step in my setup? I'm using zsh by the way.
Conda command not found
67,916,328
1
135
435,658
0
python,zsh,anaconda,miniconda
It can be a silly mistake: make sure you use anaconda3 instead of anaconda in the export path, if that is the version you installed.
0
1
0
0
2016-02-06T20:58:00.000
21
0.009524
false
35,246,386
1
0
0
8
I've installed Miniconda and have added the environment variable export PATH="/home/username/miniconda3/bin:$PATH" to my .bashrc and .bash_profile but still can't run any conda commands in my terminal. Am I missing another step in my setup? I'm using zsh by the way.
Conda command not found
70,267,089
0
135
435,658
0
python,zsh,anaconda,miniconda
This worked for me on CentOS and miniconda3. Find out which shell you are using: echo $0. Run conda init bash (could be conda init zsh if you are using zsh, etc.) - this adds a path to ~/.bashrc. Reload the shell configuration: source ~/.bashrc OR . ~/.bashrc
0
1
0
0
2016-02-06T20:58:00.000
21
0
false
35,246,386
1
0
0
8
I've installed Miniconda and have added the environment variable export PATH="/home/username/miniconda3/bin:$PATH" to my .bashrc and .bash_profile but still can't run any conda commands in my terminal. Am I missing another step in my setup? I'm using zsh by the way.
Conda command not found
65,051,111
-1
135
435,658
0
python,zsh,anaconda,miniconda
MacOSX: cd /Users/USER_NAME/anaconda3/bin && ./activate
0
1
0
0
2016-02-06T20:58:00.000
21
-0.009524
false
35,246,386
1
0
0
8
I've installed Miniconda and have added the environment variable export PATH="/home/username/miniconda3/bin:$PATH" to my .bashrc and .bash_profile but still can't run any conda commands in my terminal. Am I missing another step in my setup? I'm using zsh by the way.
Conda command not found
44,342,045
28
135
435,658
0
python,zsh,anaconda,miniconda
Maybe you need to execute "source ~/.bashrc"
0
1
0
0
2016-02-06T20:58:00.000
21
1
false
35,246,386
1
0
0
8
I've installed Miniconda and have added the environment variable export PATH="/home/username/miniconda3/bin:$PATH" to my .bashrc and .bash_profile but still can't run any conda commands in my terminal. Am I missing another step in my setup? I'm using zsh by the way.
Conda command not found
46,866,740
23
135
435,658
0
python,zsh,anaconda,miniconda
Sometimes this error also occurs if you don't restart your terminal after installing anaconda. Close your terminal window and reopen it. It worked for me!
0
1
0
0
2016-02-06T20:58:00.000
21
1
false
35,246,386
1
0
0
8
I've installed Miniconda and have added the environment variable export PATH="/home/username/miniconda3/bin:$PATH" to my .bashrc and .bash_profile but still can't run any conda commands in my terminal. Am I missing another step in my setup? I'm using zsh by the way.
Why do chat applications have to be asynchronous?
35,250,150
2
3
1,314
0
python,django,chat,tornado
You certainly can develop a synchronous chat app; you don't necessarily need to use an asynchronous framework. But it all comes down to what you want your app to do: how many people will use the app? Will there be multiple users and multiple chats going on at the same time?
0
1
0
0
2016-02-07T04:29:00.000
3
0.132549
false
35,249,741
0
0
1
1
I need to implement a chat application for my web service (that is written in Django + Rest api framework). After doing some google search, I found that Django chat applications that are available are all deprecated and not supported anymore. And all the DIY (do it yourself) solutions I found are using Tornado or Twisted framework. So, My question is: is it OK to make a Django-only based synchronous chat application? And do I need to use any asynchronous framework? I have very little experience in backend programming, so I want to keep everything as simple as possible.
I cannot find python35_d.lib
36,926,551
12
4
4,011
0
c,python-3.x
Maybe a little too late, but I found a workaround for the missing 'python3x_d.lib': when installing Python with the installer executable, choose the advanced setup options in the first window of the installation wizard and select the option "Download debug binaries"; then the file python3x_d.lib is installed automatically. I faced this error when trying to build OpenCV with Python bindings.
1
1
0
0
2016-02-07T05:43:00.000
1
1
false
35,250,175
0
0
0
1
I have downloaded the 3.5 version of Python on my Windows 7 Home Premium computer (software version 6.1). I wish to use a C main program with Python library extensions. I have added the path to the include folder and the library folder in the Dev Studio C compiler. I am testing with the supplied test program that prints out the time, but I get a compile error. While it can find Python.h, it can't find python35_d.lib. I can't either. Is it missing from the download, or is this another name for one of the libraries in the download? Thanks
Is is possible to select clang for compiling CPython extensions on Linux?
35,265,868
0
3
189
0
clang,cpython
There is an environment variable for that: CC=clang python setup.py build. Binaries compiled either way are compatible with CPython.
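For context, a minimal setup.py for a C extension that this applies to might look like the following (the module and source file names are made up):

    from distutils.core import setup, Extension

    # Hypothetical extension module built from hello.c
    setup(
        name="hello",
        version="1.0",
        ext_modules=[Extension("hello", sources=["hello.c"])],
    )

Running CC=clang python setup.py build should then invoke clang instead of gcc for the compile step.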
0
1
0
1
2016-02-08T01:14:00.000
1
1.2
true
35,261,188
0
0
0
1
All's in the title: I'd like to try using clang for compiling a C extension module for CPython on Linux (CPython comes from the distro repositories, and is built with gcc). Do distutils/setuptools support this? Does the fact that CPython and the extension are built with two different compilers matter? Thanks.
Python: Command line arguments not read?
35,276,219
5
1
425
0
python,command-line,sys
Are you using a shell? $ is a special character in the shell that is interpreted as a shell variable. Since the variable does not exist, it is textually substituted with an empty string. Try using single quotes around your parameter, like > python myapp.py '$unny-Day'.
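A quick way to see what the script actually receives (myapp.py here is just a stand-in name):

    # myapp.py - print whatever arguments the shell passed in
    import sys
    print(sys.argv[1:])

With python myapp.py $unny-Day the shell substitutes the undefined variable $unny with an empty string, so this prints ['-Day']; with python myapp.py '$unny-Day' it prints ['$unny-Day'].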
0
1
0
0
2016-02-08T17:57:00.000
1
1.2
true
35,276,163
1
0
0
1
I'm trying to read command line arguments in python in the form: python myprogram.py string string string I have tried using sys.argv[1-3] to get each string, but when I have a string such as $unny-Day, it does not process the entire string. How can I process strings like these entirely?
When/how does Python use PYTHONPATH
35,276,882
0
2
96
0
python,pythonpath,sys.path
It always uses PYTHONPATH. What happened is probably that you quit python, but didn't quit your console/command shell. For that shell, the environment that was set when the shell was started still applies, and hence, there's no PYTHONPATH set.
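A quick check from inside a single interpreter shows both values side by side (run this after setting PYTHONPATH in the shell that launches Python):

    import os
    import sys

    # What this interpreter inherited from its parent shell
    print(os.environ.get("PYTHONPATH"))

    # The actual import search path; PYTHONPATH entries should appear near the front
    print(sys.path[:5])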
0
1
0
0
2016-02-08T18:36:00.000
1
0
false
35,276,844
1
0
0
1
I'm having some trouble understanding how Python uses the PYTHONPATH environment variable. According to the documentation, the import search path (sys.path) is "Initialized from the environment variable PYTHONPATH, plus an installation-dependent default." In a Windows command box, I started Python (v.2.7.6) and printed the value of sys.path. I got a list of pathnames, the "installation-dependent default." Then I quit Python, set PYTHONPATH to .;./lib;, restarted Python, and printed os.environ['PYTHONPATH']. I got .;./lib; as expected. Then I printed sys.path. I think it should have been the installation-dependent default with .;./lib; added to the start or the end. Instead it was the installation-dependent default alone, as if PYTHONPATH were empty. What am I missing here?
Why does not os.system("cd mydir") work and we have to use os.chdir("mydir") instead in python?
35,277,184
6
5
7,672
0
python,sys
os.system (which is just a thin wrapper around the POSIX system call) runs the command in a shell launched as a child of the current process. Running a cd in that shell only changes the current directory of that process, not the parent.
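A small demonstration of the difference (any existing directory works in place of /tmp):

    import os

    print(os.getcwd())    # e.g. /home/user
    os.system("cd /tmp")  # changes directory only inside the short-lived child shell
    print(os.getcwd())    # still /home/user
    os.chdir("/tmp")      # changes directory of this very process
    print(os.getcwd())    # now /tmp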
0
1
0
0
2016-02-08T18:56:00.000
3
1
false
35,277,128
0
0
0
3
I tried doing a "pwd" or cwd after the cd; it does not seem to work when we use os.system("cd"). Is there something going on with the way the child processes are created? This is all under Linux.
Why does not os.system("cd mydir") work and we have to use os.chdir("mydir") instead in python?
35,277,168
12
5
7,672
0
python,sys
os.system('cd foo') runs /bin/sh -c "cd foo" This does work: It launches a new shell, changes that shell's current working directory into foo, and then allows that shell to exit when it reaches the end of the script it was called with. However, if you want to change the directory of your current process, as opposed to the copy of /bin/sh that system() creates, you need that call to be run within that same process; hence, os.chdir().
0
1
0
0
2016-02-08T18:56:00.000
3
1
false
35,277,128
0
0
0
3
I tried doing a "pwd" or cwd after the cd; it does not seem to work when we use os.system("cd"). Is there something going on with the way the child processes are created? This is all under Linux.
Why does not os.system("cd mydir") work and we have to use os.chdir("mydir") instead in python?
35,277,171
9
5
7,672
0
python,sys
The system call creates a new process. If you do os.system("cd ..."), you are creating a new process that then changes its own current working directory and terminates. It would be quite surprising if a child process changing its current working directory magically changed its parent's current working directory. A system where that happened would be very hard to use.
0
1
0
0
2016-02-08T18:56:00.000
3
1.2
true
35,277,128
0
0
0
3
I tried doing a "pwd" or cwd after the cd; it does not seem to work when we use os.system("cd"). Is there something going on with the way the child processes are created? This is all under Linux.
iGraph install error with Python Anywhere
35,338,580
0
0
118
0
igraph,pythonanywhere
python-igraph installed perfectly fine in my account. My guess is that you're facing a different issue than a missing library - perhaps a network error or something like that.
0
1
0
1
2016-02-08T19:47:00.000
1
0
false
35,278,050
0
0
0
1
I'm trying to run a web app (built with flask-wtforms and using iGraph) on PythonAnywhere. As igraph isn't part of the already included modules, I try to install it using the bash console, as such: pip install --user python-igraph However, what I get is: Could not download and compile the C core of igraph. It usually means (according to other people having the same issue on Stackoverflow) that I need to first install: sudo apt-get install -y libigraph0-dev Except apt-get isn't available on PythonAnywhere, as far as I know. Is there any workaround to install the iGraph module for Python 2.7 on PythonAnywhere?
Why is mac OS X killing python?
35,303,065
0
1
493
0
python,macos
This seems to be fixed in TODAY's beta release: 15E39d
0
1
0
1
2016-02-09T01:13:00.000
1
0
false
35,282,336
0
0
0
1
I can no longer run python on my mac. Upgraded to mac OS X 10.11.4 Beta and now if I run python it gets killed. $python Killed: 9 the system log shows: taskgated[396]: killed pid 954 because its code signature is invalid (error -67030)
Google App Engine - Is it necessary to call self.response in handler?
35,283,754
8
3
311
0
python,google-app-engine
When your handler ends, the response goes to the client -- if you've never written anything to the response, then it will be an empty response (should come with an HTTP 204 status, but browsers are notoriously resigned to broken servers like the one you're apparently planning to create:-). Nothing about this will cause "the instance GAE creates to handle that request will stay alive so to speak indefinitely". After at most 60 seconds (for auto-scaled modules, which are the default choice), things will time out and a 500 HTTP status will go to the browser.
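For reference, a handler that does write a body could be as small as this webapp2 sketch (the route and message are arbitrary):

    import webapp2

    class MainHandler(webapp2.RequestHandler):
        def get(self):
            # Writing to the response is what gives the HTTP reply a body;
            # leave this out and the client simply gets an empty response.
            self.response.out.write("Hello from GAE")

    app = webapp2.WSGIApplication([("/", MainHandler)])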
0
1
0
0
2016-02-09T02:29:00.000
1
1.2
true
35,282,924
0
0
1
1
I'm developing with GAE Python. If I have a URL that routes to a handler, is it necessary to actually call self.response.out.write or self.render(if I'm using a template)? I'm thinking if I don't specify a response.out call, then the instance GAE creates to handle that request will stay alive so to speak indefinitely?
Should I spawn one thread per disk writing operation?
35,283,833
0
0
37
0
multithreading,python-3.x,python-multithreading
There's no point. Disk write operations aren't blocking anyway, so there's no point in creating many threads just to perform them.
0
1
0
0
2016-02-09T04:18:00.000
1
1.2
true
35,283,799
1
0
0
1
I am making a crawler to download some images extensively, and I want to speed it up by using threads (I'm new to multithreading). I am not sure about the inner mechanism behind disk writing operations. Can I write different files to disk simultaneously using threads? (Does the writing get scheduled automatically?) Or should I make a lock for disk access so that the threads take turns writing?
Script on a web server that establishes a lot of parallel SSH connections, which approach is better?
35,316,821
0
0
51
0
python,linux,bash,ssh,parallel-processing
For creating lots of parallel SSH connections there is already a tool called pssh. You should use that instead. But if we're talking about 100 machines or more, you should really use a dedicated cluster management and automation tool such as Salt, Ansible, Puppet or Chef.
0
1
1
1
2016-02-10T05:52:00.000
1
0
false
35,307,829
0
0
0
1
I am writing a script in Python that establishes more than 100 parallel SSH connections, starts a script on each machine (the output is 10-50 lines), then waits for the results of the bash script and processes them. However, it runs on a web server, and I don't know whether it would be better to first store the output in a file on the remote machine (that way, I suppose, I can start more remote scripts at once) and later establish another SSH connection (1 command / 1 connection) and read from those files. Right now I am just reading the output, but the CPU usage is really high, and I suppose the problem is that a lot of data comes to the server at once.
How to run python script which require sudoer
35,309,692
0
0
501
0
php,python,apache
I see no security issue with giving www-data the sudo right for a single restart command without any wildcards. If you want to avoid using sudo at all, you can create a temporary file with PHP and poll for this file regularly from a shell script executed by root. But this may be more error-prone, and leads to the same result.
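A sudoers entry scoped to that single command could look like this (the service name is a placeholder; edit the file with visudo):

    # /etc/sudoers.d/www-data-myservice  (hypothetical file)
    www-data ALL=(root) NOPASSWD: /usr/sbin/service myservice restart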
0
1
0
1
2016-02-10T07:44:00.000
2
1.2
true
35,309,463
0
0
0
2
I have a requirement to stop/start a service with sudo service stop/start from a Python script. The script will be called by webpage PHP code on the server side, running an Apache webserver. One way I know is to give www-data sudoer permission to run the specific Python script. Is there another way without giving www-data specific permission? For example, would CGI or mod_python work in this case? If yes, what is the best implementation for Python script execution on a LAMP server? Thanks in advance.
How to run python script which require sudoer
35,310,045
0
0
501
0
php,python,apache
You can run a Python thread that listens for a stop/start request, and this thread will then stop/start the service. The thread should run as root, but it listens on TCP (SocketServer is a very simple out-of-the-box Python TCP server). The web server can send requests without any special permissions. You may want to add some security, e.g. hashing the request to this server with a secret so only allowed services are able to request the start/stop of the service, and apply iptables rules (only allow requests from localhost, where the web server is).
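A bare-bones sketch of such a listener using Python 2's SocketServer (the port, commands, and service name are all made up, and the secret/hash check mentioned above is omitted):

    import os
    import SocketServer  # 'socketserver' in Python 3

    class ServiceHandler(SocketServer.StreamRequestHandler):
        def handle(self):
            command = self.rfile.readline().strip()
            # Only react to the two expected commands; ignore anything else
            if command in ("start", "stop"):
                os.system("service myservice %s" % command)

    server = SocketServer.TCPServer(("127.0.0.1", 9999), ServiceHandler)
    server.serve_forever()  # run this process as root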
0
1
0
1
2016-02-10T07:44:00.000
2
0
false
35,309,463
0
0
0
2
I have a requirement to stop/start a service with sudo service stop/start from a Python script. The script will be called by webpage PHP code on the server side, running an Apache webserver. One way I know is to give www-data sudoer permission to run the specific Python script. Is there another way without giving www-data specific permission? For example, would CGI or mod_python work in this case? If yes, what is the best implementation for Python script execution on a LAMP server? Thanks in advance.
Invensense Motion Driver 6.12 STM32 demo python don't work
38,325,794
-1
0
733
0
python,stm32,discovery
Which type of Invensense chip are you using? I think you need to check whether you are using the right COM port in Windows, check whether you can get data from your MPUxxxx board through I2C, and check in log_stm32.c whether this function works well: fputc(out[i]);
1
1
0
0
2016-02-10T11:30:00.000
2
-0.099668
false
35,313,998
0
0
0
1
I am trying to run the Invensense motion_driver_6.12. I compiled the code with IAR and the STM32 works OK - all the tests I've done with the board are OK: UART, I2C, etc. But when I run the Python client demo program "eMPL-client-py", the program shows only one empty black window and nothing happens. I tried running the program first and then switching on the board, and vice versa. Thanks
Storing unstructured data with ramses to be searched with Ramses-API?
35,405,127
0
1
95
0
python,json,elasticsearch
This is Chris's answer, copied from gitter.im: You can use the dict field type for "unstructured data", as it takes arbitrary JSON. If the db engine is postgres, it uses jsonfield under the hood, and if the db engine is mongo, it's converted to a bson document as usual. Either way it should index automatically as expected in ES and will be queryable through the Ramses API. The following ES queries are supported on documents/fields: nefertari.readthedocs.org/en/stable/making_requests.html#query-syntax-for-elasticsearch See the docs for field types here, start at the high level (ramses) and it should "just work", but you can see what the code is mapped to at each level below, down to the db, if desired: ramses: ramses.readthedocs.org/en/stable/fields.html nefertari (underlying web framework): nefertari.readthedocs.org/en/stable/models.html#wrapper-api nefertari-sqla (postgres-specific engine): nefertari-sqla.readthedocs.org/en/stable/fields.html nefertari-mongodb (mongo-specific engine): nefertari-mongodb.readthedocs.org/en/stable/fields.html Let us know how that works out, sounds like it could be a useful thing. So far we've just used that field type to hold data like user settings that the frontend wants to persist but for which the API isn't concerned.
0
1
0
0
2016-02-10T15:13:00.000
2
1.2
true
35,318,866
0
0
1
1
I would like to give my users the possibility to store unstructured data in JSON format, alongside the structured data, via an API generated with Ramses. Since the data is made available via Elasticsearch, I am trying to make this data indexed and searchable, too. I can't find any mention of this in the docs or by searching. Is this possible, and how would one do it? Cheers /Carsten
Start Python in Background
35,335,984
1
0
219
0
python,windows,focus
Running with pythonw (or changing the extension to .pyw, which is the same) may help. pythonw.exe doesn't create a console window, but I dunno about focus. It doesn't create any windows by default either, so it shouldn't steal it.
0
1
0
0
2016-02-11T09:49:00.000
1
0.197375
false
35,335,781
1
0
0
1
So I have uTorrent set up to run a Python script when a torrent's state is changed so it can sort it. It all works fine except it takes focus from whatever I'm doing and it's crazy annoying. I run it using sorter.py <arguments>. I'm on Windows 10. What can I do, if anything, to get this to run in the background and not take focus from what I'm doing? I'm also sorry if this has already been answered but I couldn't find anything that worked.
Run Alembic migrations on Google App Engine
35,395,267
1
8
1,816
1
python,google-app-engine,flask,google-cloud-sql,alembic
You can whitelist the IP of your local machine on the Google Cloud SQL instance, then run the migration script from your local machine.
0
1
0
0
2016-02-14T11:17:00.000
3
0.066568
false
35,391,120
0
0
1
1
I have a Flask app that uses SQLAlchemy (Flask-SQLAlchemy) and Alembic (Flask-Migrate). The app runs on Google App Engine. I want to use Google Cloud SQL. On my machine, I run python manage.py db upgrade to run my migrations against my local database. Since GAE does not allow arbitrary shell commands to be run, how do I run the migrations on it?
Using Tkinter with Openshift
35,442,172
0
0
105
0
python,openshift
Bryan has answered the question. Tkinter will not work with WSGI. A web framework such as Django must be used.
1
1
0
0
2016-02-14T19:01:00.000
1
1.2
true
35,395,843
0
0
0
1
I would like to deploy a Python3 app that uses tkinter on OpenShift. I added the following to setup.py: install_requires=["Tcl==8.6.4"]. When I ran git push I received the following error: Could not find suitable distribution for Requirement.parse('Tcl==8.6.4'). Can anyone provide the correct syntax, distribution package name and version?
GDB cross-compilation for arm
35,412,607
0
2
1,845
0
python,c++,linux,gdb,arm
You are probably missing library headers (something like python3-dev). To install them on Ubuntu or similar, start with sudo apt-get install python3-dev. Or, if you don't plan to use Python scripting in gdb, you can configure with "--without-python". As far as I can tell, you are also not configuring gdb correctly: you can leave out --build (if you are building on a PC, arm-none-linux... is wrong), and your host should be arm-none-linux-gnueabi, not just arm.
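Putting those corrections together, the configure invocation would look roughly like this (paths are taken from the question and may still need adjusting for your toolchain):

    ./configure --host=arm-none-linux-gnueabi \
                --target=arm-none-linux-gnueabi \
                CC=arm-none-linux-gnueabi-gcc \
                --with-python=python3.3 \
                --prefix=/u01/cross-compilation/gdb-7.7.1/arm_install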
0
1
0
0
2016-02-15T10:56:00.000
1
0
false
35,407,514
1
0
0
1
I want to debug applications on devices. I prefer to use gdb (ARM version) rather than gdb with gdbserver to debug, because there is a dashboard, a visual interface for GDB in Python, which must work together with the gdb (ARM version) on the device. So I need to cross-compile an ARM version of gdb with Python; the command used is shown below: ./configure --build=arm-none-linux-gnueabi --host=arm -target=arm-none-linux-gnueabi CC=arm-none-linux-gnueabi-gcc --with-python=python3.3 --libdir=/u01/rootfs/lib/ --prefix=/u01/cross-compilation/gdb-7.7.1/arm_install --disable-libtool --disable-tui --with-termcap=no --with-curses=no But finally an error message appeared during make: checking for python3.3... missing configure: error: unable to find python program python3.3 Here I have python3.3 binaries and libraries which are cross-compiled for ARM. Please give me any suggestions. Thanks in advance....
Google App Engine Console shows more entities than I created
35,578,109
1
0
70
0
python,google-app-engine,google-cloud-datastore,google-console-developer
If you check the little question-mark near the statistics summary it says the following: Statistics are generated every 24-48 hours. In addition to the kinds used for your entities, statistics also include system kinds, such as those related to metadata. System kinds are not visible in your kinds menu. Note that statistics may not be available for very large datasets. Could be any of these.
0
1
0
0
2016-02-16T02:14:00.000
1
0.197375
false
35,422,495
0
0
1
1
I recently deployed my app on GAE. In my Datastore page of Google Cloud Console, in the Dashboard Summary, it shows that I have 75 entities. However, when I click on the Entities tab, it shows I have 2 entities of one kind and 3 entities of another kind. I remember creating these entities. I'm just curious where the 75 entities come from? Just checking if I'm doing something wrong here.
Create regular backups from OS X to the cloud
35,442,348
4
5
35
0
python,django,macos
This is what version control is for. Sign up for an account at Github, Bitbucket, or Gitlab, and push your code there.
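The whole workflow is a handful of commands (the remote URL is a placeholder for whichever host you pick):

    cd ~/my-django-project
    git init
    git add .
    git commit -m "Initial commit"
    git remote add origin https://github.com/USER/my-django-project.git
    git push -u origin master

Pushing after every meaningful change gives you off-site history, which beats hourly snapshots.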
0
1
0
0
2016-02-16T18:43:00.000
1
1.2
true
35,440,612
0
0
1
1
I'm developing a Django project on my MacBook Pro. Constantly paranoid that if my house burns down, someone stoling my MB, hard drive failure or another things that are not likely, but catastrophic if it occurs. How can I create or get automatic backup every 1 hour from my OS X directory where the Django project is to a service like Dropbox or whatever cloud hosting company there might be a solution for. Is there a Python script that does this? I can't be the only one that has thought of this before.
Running processes in OS X, Find the initiator process
35,452,078
2
2
81
0
python,c,macos,core-foundation,mach
You can't. Mac OS X does not keep track of this information in the way you're looking for -- opening an application from another application does not establish a relationship of any sort between those applications.
0
1
0
0
2016-02-17T08:37:00.000
1
1.2
true
35,451,564
0
0
0
1
I'd like to create a daemon (based on a script or some lower-level language) that calculates statistics on all opened applications according to their initiating process. The problem is that the initiating process is not always equivalent to the actual parent process. For instance, when I press a hyperlink from Microsoft Word that should open an executable file like file:///Applications/Chess.app/ In the case above, I've observed that the ppid of 'Chess' is in fact 'launchd', just the same as if I were running it from Launchpad. Perhaps there's a mach_port (or any other) API to figure out who really initiated the application?
Cuckoo sandbox: shows "Configuration details about machine windows_7 are missing" error
35,478,371
1
2
1,540
0
python,testing,analysis,cuckoo
I was able to fix this issue just by changing the configuration file "virtualbox.conf". In this configuration file the virtual machine section is titled [cuckoo1]. Since my virtual machine name is "windows_7", I had to change [cuckoo1] to [windows_7]. That is why cuckoo didn't pick up the virtual machine configuration (the configuration is written by default for a machine named cuckoo1).
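For illustration, the relevant part of virtualbox.conf would then look something like this (the IP depends on your host-only network setup):

    [virtualbox]
    machines = windows_7

    [windows_7]
    label = windows_7
    platform = windows
    ip = 192.168.56.101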
0
1
0
0
2016-02-17T11:08:00.000
1
1.2
true
35,454,970
1
0
0
1
I have installed cuckoo sandbox in an Ubuntu environment with Windows 7 32-bit as the guest OS. I have followed the instructions given on their website. The VM is named windows_7. I have edited the "machine" and "label" fields properly in "virtualbox.conf". But when I try to start cuckoo by executing "sudo python cuckoo.py", it gives me an error: "WARNING: Configuration details about machine windows_7 are missing: Option windows_7 is not found in configuration, error: Config instance has no attribute 'windows_7' CRITICAL: CuckooCriticalError: No machines available.".
Wing IDE not stopping at break point for Google App Engine
35,457,609
2
2
544
0
python,google-app-engine,debugging,breakpoints
As often happens with these things, writing this question gave me a couple of ideas to try. I was using the Personal edition ... so I downloaded the Professional edition ... and it all worked fine. Looks like I'm paying $95 instead of $45 when the 30-day trial runs out.
0
1
0
0
2016-02-17T12:52:00.000
2
0.197375
false
35,457,300
0
0
1
2
I'm new to Python, Wing IDE and Google cloud apps. I've been trying to get Wing IDE to stop at a breakpoint on the local (Windows 7) Google App Engine. I'm using the canned guestbook demo app; it launches fine and responds as expected in the web browser. However, breakpoints are not working. I'm not sure if this is important, but I see the following status message when first starting the debugger: Debugger: Debug process running; pid=xxxx; Not listening (too many connections) ... My run arguments are as per the recommendation in the Wing IDE help file section "Using Wing IDE with Google App Engine", namely: C:\x\guestbook --max_module_instances=1 --threadsafe_override=false One problem I found when trying to follow these instructions: they say to go into Project Properties, open the Debug/Execute tab, and set Debug Child Processes to Always Debug Child Process - I found this option doesn't exist. Note also that in the guestbook app, if I press the pause button, the code breaks, usually in the Python threading.py file in the wait method (which makes sense). Further note that if I create a generic console app in Wing IDE, breakpoints work fine. I'm running 5.1.9-1 of Wing IDE Personal. I've included the Google appengine directory and the guestbook directories in the Python path. Perhaps unrelated, but I also find that sys.stdout.write strings are not appearing in the Debug I/O window.
Wing IDE not stopping at break point for Google App Engine
42,961,127
5
2
544
0
python,google-app-engine,debugging,breakpoints
I copied the wingdbstub.py file (from the debugger packages of Wing IDE) to the folder I am currently running my project in, used 'import wingdbstub', and initiated the debug process. All went well; I can now debug modules.
0
1
0
0
2016-02-17T12:52:00.000
2
0.462117
false
35,457,300
0
0
1
2
I'm new to Python, Wing IDE and Google cloud apps. I've been trying to get Wing IDE to stop at a breakpoint on the local (Windows 7) Google App Engine. I'm using the canned guestbook demo app; it launches fine and responds as expected in the web browser. However, breakpoints are not working. I'm not sure if this is important, but I see the following status message when first starting the debugger: Debugger: Debug process running; pid=xxxx; Not listening (too many connections) ... My run arguments are as per the recommendation in the Wing IDE help file section "Using Wing IDE with Google App Engine", namely: C:\x\guestbook --max_module_instances=1 --threadsafe_override=false One problem I found when trying to follow these instructions: they say to go into Project Properties, open the Debug/Execute tab, and set Debug Child Processes to Always Debug Child Process - I found this option doesn't exist. Note also that in the guestbook app, if I press the pause button, the code breaks, usually in the Python threading.py file in the wait method (which makes sense). Further note that if I create a generic console app in Wing IDE, breakpoints work fine. I'm running 5.1.9-1 of Wing IDE Personal. I've included the Google appengine directory and the guestbook directories in the Python path. Perhaps unrelated, but I also find that sys.stdout.write strings are not appearing in the Debug I/O window.
Python requests vs java webservices
35,458,539
0
0
254
0
java,python,web-services,python-requests,legacy
Maybe you could add a man-in-the-middle: a socket server that receives the unix strings, parses them into a sys-2 type of message, and sends it to sys-2. That could be an option if you don't want to rewrite all calls between the two systems.
0
1
0
0
2016-02-17T13:03:00.000
2
0
false
35,457,531
0
0
1
2
I have a legacy web application sys-1 written in CGI that currently uses a TCP socket connection to communicate with another system, sys-2. Sys-1 sends out the data in the form of a unix string. Now sys-2 is upgrading to a Java web service, which in turn requires us to upgrade. Is there any way to upgrade involving minimal changes to the existing legacy code? I am contemplating creating a code block which gets the output of sys-1 and changes it into the format required by sys-2, and vice versa. While researching, I found two ways of doing this: using the "requests" library in Python, or going with Java web services. I am new to Java web services and have some knowledge of Python. Can anyone advise whether this method works and which is the better option from a performance and maintenance point of view? Any new suggestions are also welcome!
Python requests vs java webservices
35,458,442
0
0
254
0
java,python,web-services,python-requests,legacy
Is there any way to upgrade involving minimal changes to the existing legacy code. The solution mentioned, adding a conversion layer outside of the application, would have the least impact on the existing code base (in that it does not change the existing code base). Can anyone advise if this method works Would writing a Legacy-System-2 to Modern-System-2 converter work? Yes. You could write this in any language you feel comfortable in. Web Services are Web Services, it matters not what they are implemented in. Same with TCP sockets. better way to opt from a performance How important is performance? If this is used once in a blue moon then who cares. Adding a box between services will make the communication between services slower. If implemented well and running close to either System 1 or System 2 likely not much slower. maintenance point of view? Adding additional infrastructure adds complexity thus more problems with maintenance. It also adds a new chunk of code to maintain, and if System 1 needs to use System 2 in a new way you have two lots of code to maintain (Legacy System 1 and Legacy/Modern converter). Any new suggestions are also welcome! How bad is legacy? Could you rip the System-1-to-System-2 code out into some nice interfaces that you could update to use Modern System 2 without too much pain? Long term this would have a lower overall cost, but would have a (potentially significantly) larger upfront cost. So you have to make a decision on what, for your organisation, is more important. Time To Market or Long Term Maintenance. No one but your organisation can answer that.
0
1
0
0
2016-02-17T13:03:00.000
2
1.2
true
35,457,531
0
0
1
2
I have a legacy web application sys-1 written in CGI that currently uses a TCP socket connection to communicate with another system, sys-2. Sys-1 sends out the data in the form of a unix string. Now sys-2 is upgrading to a Java web service, which in turn requires us to upgrade. Is there any way to upgrade involving minimal changes to the existing legacy code? I am contemplating creating a code block which gets the output of sys-1 and changes it into the format required by sys-2, and vice versa. While researching, I found two ways of doing this: using the "requests" library in Python, or going with Java web services. I am new to Java web services and have some knowledge of Python. Can anyone advise whether this method works and which is the better option from a performance and maintenance point of view? Any new suggestions are also welcome!
Qt Creator failed to start gdb in latest openSUSE
35,525,248
0
1
116
0
python,debugging,gdb,qt-creator,opensuse
It works fine if "Run in terminal" is unchecked or the terminal is changed back from konsole to xterm (it worked in konsole previously - weird).
0
1
0
0
2016-02-17T15:09:00.000
1
1.2
true
35,460,433
0
0
0
1
When trying to debug my program in Qt Creator, the "Application Output" pane shows: Debugging starts Debugging has failed Debugging has finished Or it freezes after "Debugging starts". It was able to run previously. Any way to fix this or to discover the problem? Qt Creator 3.5.1, gcc 4.8.5, gdb 7.9.1, Python 2.7.9 P.S. Hmm, it works fine if "Run in terminal" is unchecked or the terminal is changed back from konsole to xterm (it worked in konsole previously - weird).
How to find the name of a process spawned by Python?
35,463,485
0
2
155
0
python,linux,process
Would running a filter in htop be quick enough? Run htop, press F5 to enter tree mode, then F4 to filter, and type in python... it should show all the Python processes as they open/close.
0
1
0
1
2016-02-17T16:59:00.000
1
0
false
35,463,019
0
0
0
1
I am on Linux and wish to find the process spawned by a Python command. Example: shutil.copyfile. How do I do so? Generally I have just read the processes from the terminal with ps; however, this command completes nearly instantaneously, so I cannot do that here without some lucky timing. htop doesn't show the info, and strace seems to show a lot of info, but I can't seem to find the process in it.
Intelhex - import error - macOSX Terminal
35,464,530
0
0
1,912
0
python,macos,import,terminal
That error means that there is no 'intelhex' on your Python path. The contents of /usr/local/bin should not matter (those are executable files but are not the Python modules). Are you sure that you installed the package and are loading it from the same Python site packages location you installed it to?
0
1
0
0
2016-02-17T17:51:00.000
1
1.2
true
35,464,113
1
0
0
1
I am using the terminal on a MacBook Pro. Trying to use intelhex in my code. I have downloaded intelhex using sudo pip install intelhex Success pip list shows intelhex installed run my code and receive this error: Traceback (most recent call last): File "./myCode.py", line 20, in from intelhex import IntelHex ImportError: No module named 'intelhex' I am using Python 2.7.11 ls /usr/local/bin shows the contents of intelhex: hex2bin.py bin2hex.py hexmerge.py hexdiff.py Where am I going wrong?!
RethinkDB clients connection failover between proxies
35,513,048
0
0
102
0
python,client,rethinkdb,failover,rethinkdb-python
Below is my opinion on how I set things up. When the local proxy crashes, it should be restarted by a process monitor like systemd. I don't use the RethinkDB local proxy. I use HAProxy running in TCP mode locally on every app server to forward to RethinkDB. I use Consul Template so that when a RethinkDB node joins the cluster, the HAProxy configuration is updated to add the node and HAProxy is restarted on its own. HAProxy is very lightweight and rock solid for me. Not just for RethinkDB - HAProxy runs locally and does all kinds of proxying, even MySQL/Redis... HAProxy supports all kinds of routing/failover scenarios, like backup backends...
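A minimal HAProxy TCP-mode block for this could look like the following (addresses are placeholders; 28015 is RethinkDB's client driver port):

    listen rethinkdb
        bind 127.0.0.1:28015
        mode tcp
        balance roundrobin
        server rdb1 10.0.0.1:28015 check
        server rdb2 10.0.0.2:28015 check backup

The app then always connects to 127.0.0.1 and HAProxy handles the failover to healthy nodes.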
0
1
0
0
2016-02-18T22:00:00.000
1
0
false
35,493,291
0
0
0
1
I have: 4 servers running a single RethinkDB instance in cluster (4 shards / 3 replicas tables) 2 application servers (tornado + RethinkDB proxy) The clients connect only to their local proxy. How to specify both the local + the other proxy so that the clients could fail over to the other proxies when their local proxy crashes or experiences issues?
Diagnosing memory leak from Windows DLL accessed in Python with ctypes
35,613,787
0
0
217
0
python,dll,memory-leaks,ctypes,dllexport
I ended up writing a program in C without dynamic memory allocation to test the library. The leak is indeed in one of the functions I'm calling, not the Python program.
0
1
0
0
2016-02-19T01:07:00.000
1
1.2
true
35,495,530
0
0
0
1
I've written an abstraction layer in Python for a piece of commercial software that has an API used for accessing the database back end. The API is exposed via a Windows DLL, and my library is written in Python. My Python package loads the necessary libraries provided by the application, initializes them, and creates a couple of Python APIs on top. There are low level functions that simply wrap the API, and make the functions callable from Python, as well as a higher level interface that makes interaction with the native API more efficient. The problem I'm encountering is that when running a daemon that uses the library, it seems there is a memory leak. (Several hundred KB/s) I've used several Python memory profiling tools, as well as tested each function individually, and only one function seems to leak, yet no tool reports that memory has been lost during execution of that function. On Linux, I would use Valgrind to figure out if the vendor's library was the culprit, but the application only runs on Windows. How can I diagnose whether the vendor is at fault, or if it's the way I'm accessing their library?
Using Microsoft Azure to run "a bunch of Python scripts on demand"
35,508,696
0
0
634
0
python,azure,queue
One possible strategy could be to use WebJobs. WebJobs can execute Python scripts and run on a schedule. Let's say that you run a WebJob every 5 minutes; the Python script can poll the queue, do some processing, and post the results back to your API.
0
1
0
0
2016-02-19T14:32:00.000
2
0
false
35,507,732
1
0
0
1
I'm trying to define an architecture where multiple Python scripts need to be run in parallel and on demand. Imagine the following setup: script requestors (web API) -> Service Bus queue -> script execution -> result posted back to script requestor To this end, the script requestor places a script request message on the queue, together with an API endpoint where the result should be posted back to. The script request message also contains the input for the script to be run. The Service Bus queue decouples producers and consumers. A generic set of "workers" simply look for new messages on the queue, take the input message and call a Python script with said input. Then they post back the result to the API endpoint. But what strategies could I use to "run the Python scripts"?
How to create setup/installer for my Python project which has dependencies?
35,509,182
1
3
137
0
python,linux
Look at setuptools and distutils. These are the classical tools for Python packaging.
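As a sketch, a setup.py that declares a third-party dependency so it gets installed automatically might look like this (all names are placeholders):

    from setuptools import setup, find_packages

    setup(
        name="mygui-app",                        # hypothetical project name
        version="1.0",
        packages=find_packages(),
        install_requires=["some-external-lib"],  # placeholder dependency
        entry_points={
            "console_scripts": ["mygui-app = mygui.main:main"],
        },
    )

Distributing the sdist tarball and installing it with pip then pulls in the declared dependencies automatically.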
0
1
0
0
2016-02-19T15:35:00.000
1
0.197375
false
35,509,019
1
0
0
1
I have created a simple piece of software with a GUI. It has several source files. I can run the project in my editor. I think it is ready for the 1.0 release, but I don't know how to create a setup/installer for my software. The source is in Python; the environment is Linux (Ubuntu). I used an external library which does not come with the standard Python library. How can I create the installer, so that I just distribute the source code in the tar file, and the user installs the software on his machine (Linux) by running a setup/installer file? Please note: when the setup is run, it should automatically take care of the dependencies. (Also, I don't want to build an executable for distribution.) Something similar to what happens when I type: sudo apt-get install XXXX