Dataset schema (column: type, value range):
Title: string, lengths 15 to 150
A_Id: int64, 2.98k to 72.4M
Users Score: int64, -17 to 470
Q_Score: int64, 0 to 5.69k
ViewCount: int64, 18 to 4.06M
Database and SQL: int64, 0 to 1
Tags: string, lengths 6 to 105
Answer: string, lengths 11 to 6.38k
GUI and Desktop Applications: int64, 0 to 1
System Administration and DevOps: int64, 1 to 1
Networking and APIs: int64, 0 to 1
Other: int64, 0 to 1
CreationDate: string, lengths 23 to 23
AnswerCount: int64, 1 to 64
Score: float64, -1 to 1.2
is_accepted: bool, 2 classes
Q_Id: int64, 1.85k to 44.1M
Python Basics and Environment: int64, 0 to 1
Data Science and Machine Learning: int64, 0 to 1
Web Development: int64, 0 to 1
Available Count: int64, 1 to 17
Question: string, lengths 41 to 29k
Celery - Single AMQP queue bound to multiple exchanges
40,389,543
0
0
177
0
python,django,rabbitmq,celery
CELERY_QUEUES is used only for "internal" Celery communication with its workers, not for your custom queues in RabbitMQ that exist independently of Celery. What are you trying to accomplish by binding two exchanges to the same queue?
0
1
0
0
2016-09-13T10:08:00.000
1
0
false
39,467,399
0
0
1
1
I have a RabbitMQ topology (set up independently of Celery) with a queue that is bound to two exchanges with the same routing key. Now, I want to set up a Celery instance to post to the exchanges and another one to consume from the queue. I have the following questions in the context of both the producer and the consumer: Is the CELERY_QUEUES setting necessary in the first place if I specify only the exchange name and routing key in apply_async and the queue name while starting up the consumer? From my understanding of AMQP, this should be enough... If it is necessary, I can only set one exchange per queue there. Does this mean that the other binding will not work (producer can't post to the other exchange, consumer can't receive messages routed through the other exchange)? Or, can I post and receive messages from the other exchange regardless of the binding in CELERY_QUEUES?
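For reference, kombu (the messaging library Celery is built on) does allow a single queue to be declared with bindings to more than one exchange. The following is only a hedged sketch with hypothetical exchange, queue and routing-key names, not a statement about what CELERY_QUEUES is required for:

    from kombu import Exchange, Queue, binding

    exchange_a = Exchange('exchange_a', type='direct')   # hypothetical exchange names
    exchange_b = Exchange('exchange_b', type='direct')

    CELERY_QUEUES = (
        Queue('shared_queue', [
            binding(exchange_a, routing_key='my.routing.key'),
            binding(exchange_b, routing_key='my.routing.key'),
        ]),
    )

A worker started with -Q shared_queue would then consume from that single queue regardless of which exchange routed the message.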
Restrict python exec access to one directory
39,473,520
2
0
1,159
0
python,python-2.7,python-exec
Execute the code as a user that only owns that specific directory and has no permissions anywhere else? However, if you do not completely trust the source of the code, you should simply not be using exec under any circumstances. Remember, say you came up with a Python solution... the exec'd code could literally undo whatever restrictions you put on it before doing its nefarious deeds. If you tell us the problem you're trying to solve, we can probably come up with a better idea.
0
1
0
0
2016-09-13T15:11:00.000
2
0.197375
false
39,473,445
0
0
0
2
I have a python script which executes a string of code with the exec function. I need a way to restrict the read/write access of the script to the current directory. How can I achieve this? Or, is there a way to restrict the python script's environment directly through the command line so that when I run the interpreter, it does not allow writes out of the directory? Can I do that using a virtualenv? How? So basically, my app is a web portal where people can write and execute python apps and get a response - and I've hosted this on heroku. Now there might be multiple users with multiple folders and no user should have access to other's folders or even system files and folders. The permissions should be determined by the user on the nodejs app (a web app) and not a local user. How do I achieve that?
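As a hedged illustration of the "run it as a locked-down user" idea from the answer above (not part of the original answer): a parent process running with sufficient privileges can drop to an unprivileged account before executing the untrusted script. The uid/gid, script name and sandbox path are hypothetical.

    import os
    import subprocess

    def demote(uid, gid):
        """Return a callable that drops privileges in the child before exec."""
        def set_ids():
            os.setgid(gid)
            os.setuid(uid)
        return set_ids

    # 1001/1001: hypothetical ids of an account that only owns /srv/sandbox/user1
    proc = subprocess.Popen(
        ["python", "untrusted.py"],
        cwd="/srv/sandbox/user1",
        preexec_fn=demote(1001, 1001),
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    out, err = proc.communicate()

Note that the parent must already run as root (or with the setuid capability) for setgid/setuid to succeed, and this still does not make exec of untrusted code safe by itself.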
Restrict python exec access to one directory
39,474,240
2
0
1,159
0
python,python-2.7,python-exec
The question boils down to: how can I safely execute code I don't trust? You can't. Either you know what the code does or you don't execute it. You can have an isolated environment for your process, for example with Docker, but its typical use cases are far from executing unsafe code.
0
1
0
0
2016-09-13T15:11:00.000
2
0.197375
false
39,473,445
0
0
0
2
I have a python script which executes a string of code with the exec function. I need a way to restrict the read/write access of the script to the current directory. How can I achieve this? Or, is there a way to restrict the python script's environment directly through the command line so that when I run the interpreter, it does not allow writes out of the directory? Can I do that using a virtualenv? How? So basically, my app is a web portal where people can write and execute python apps and get a response - and I've hosted this on heroku. Now there might be multiple users with multiple folders and no user should have access to other's folders or even system files and folders. The permissions should be determined by the user on the nodejs app (a web app) and not a local user. How do I achieve that?
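As a hedged sketch of the container-isolation idea mentioned above (the image name, mount path and flags are assumptions, and different Docker versions may spell some flags differently):

    import subprocess

    subprocess.check_call([
        "docker", "run", "--rm",
        "--network", "none",                  # no network access for the untrusted code
        "-v", "/srv/sandbox/user1:/work",     # mount only this user's folder
        "-w", "/work",                        # work inside that folder
        "python:2.7",
        "python", "script.py",
    ])

Even then, as the answer says, containers are an isolation convenience rather than a guarantee that untrusted code is safe to run.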
Error "mach-o, but wrong architecture" after installing anaconda on mac
39,477,667
1
1
7,799
0
python,macos,python-2.7
You are mixing 32-bit and 64-bit versions of Python; probably you installed a 64-bit Python version on a 32-bit machine (or vice versa). Go ahead and uninstall Python and reinstall it with the right configuration.
0
1
0
0
2016-09-13T18:47:00.000
4
0.049958
false
39,477,023
1
1
0
2
I am getting an architecture error while importing any package, i understand my Python might not be compatible, can't understand it. Current Python Version - 2.7.10 `MyMachine:desktop *********$ python pythonmath.py Traceback (most recent call last): File "pythonmath.py", line 1, in import math ImportError: dlopen(/Users/*********/anaconda/lib/python2.7/lib-dynload/math.so, 2): no suitable image found. Did find: /Users/**********/anaconda/lib/python2.7/lib-dynload/math.so: mach-o, but wrong architecture MyMachine:desktop ***********$ python pythonmath.py Traceback (most recent call last): File "pythonmath.py", line 1, in import math ImportError: dlopen(/Users/anaconda/lib/python2.7/lib-dynload/math.so, 2): no suitable image found. Did find: /Users/***********/anaconda/lib/python2.7/lib-dynload/math.so: mach-o, but wrong architecture
Error "mach-o, but wrong architecture" after installing anaconda on mac
70,210,511
3
1
7,799
0
python,macos,python-2.7
The steps below resolved this problem for me: 1) Quit the terminal. 2) Go to Finder => Applications. 3) Right-click on Terminal and choose Get Info. 4) Check the checkbox "Open using Rosetta". 5) Now open the terminal and try again. PS: Rosetta allows a Mac with the M1 architecture to use apps built for Macs with Intel chips. Most of the time, chip compatibility is the reason behind these architecture problems, so "Open using Rosetta" for the terminal lets such applications use Rosetta by default.
0
1
0
0
2016-09-13T18:47:00.000
4
0.148885
false
39,477,023
1
1
0
2
I am getting an architecture error while importing any package, i understand my Python might not be compatible, can't understand it. Current Python Version - 2.7.10 `MyMachine:desktop *********$ python pythonmath.py Traceback (most recent call last): File "pythonmath.py", line 1, in import math ImportError: dlopen(/Users/*********/anaconda/lib/python2.7/lib-dynload/math.so, 2): no suitable image found. Did find: /Users/**********/anaconda/lib/python2.7/lib-dynload/math.so: mach-o, but wrong architecture MyMachine:desktop ***********$ python pythonmath.py Traceback (most recent call last): File "pythonmath.py", line 1, in import math ImportError: dlopen(/Users/anaconda/lib/python2.7/lib-dynload/math.so, 2): no suitable image found. Did find: /Users/***********/anaconda/lib/python2.7/lib-dynload/math.so: mach-o, but wrong architecture
Grouping similar files in folders
39,477,654
0
0
220
0
python,bash,shell,scripting
Try this code. I also included a fake-file generator for testing purposes. Prudential step: run rm only after you have checked that everything is OK. A possible improvement is to move/rename the files instead of copying and then removing them.

    for aaa in {1..5}; do touch SUBJA_${aaa}.txt SUBJB_${aaa}.txt SUBJC_${aaa}.txt; done

    for MYSUBJ in SUBJA SUBJB SUBJC
    do
        mkdir $MYSUBJ
        cp $MYSUBJ*.txt $MYSUBJ/
        rm $MYSUBJ*.txt
    done
0
1
0
0
2016-09-13T19:08:00.000
2
0
false
39,477,312
0
0
0
1
I would like to group similar files into folders within the same directory. To give a better idea, I am working on image datasets where I have images of several subjects with varying filenames; however, I will have 10-15 images per subject in the dataset too. So let's say subject A will have 10 images named A_1.png, A_2.png, A_3.png and so on. Similarly, we have n subjects. I have to group the subjects into folders, each holding all the images corresponding to that subject. I tried using Python but was not able to get it right. Can we do it using bash or shell scripts? If yes, please advise.
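Since the question originally asked about Python, here is a hedged sketch of the same grouping done in Python; it assumes filenames of the form SUBJECT_N.png as described in the question:

    import glob
    import os
    import shutil

    for path in glob.glob("*.png"):
        subject = os.path.basename(path).split("_")[0]   # "A" from "A_1.png"
        if not os.path.isdir(subject):
            os.makedirs(subject)
        shutil.move(path, os.path.join(subject, os.path.basename(path)))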
What's the difference between setup.py and setup.cfg in python projects
46,090,408
13
81
24,854
0
python,python-2.7,openstack,setuptools
setup.py is an integral part of a Python package: it includes details about the files that make up the package, the dependencies required for installing and running it, entry points, the license, etc. setup.cfg, on the other hand, is more about settings for plug-ins and the type of distribution you wish to create (bdist/sdist, and whether the wheel is universal or specific to one Python build). It can also be used to configure some of the metadata of setup.py.
0
1
0
0
2016-09-14T07:33:00.000
3
1
false
39,484,863
1
0
0
1
I need to know the difference between setup.py and setup.cfg. Both are used prominently in OpenStack projects.
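To make the split concrete, here is a minimal, hedged setup.py sketch showing the kind of information the answer attributes to it (the project name, dependency and entry point are hypothetical):

    from setuptools import setup, find_packages

    setup(
        name="example-pkg",                 # hypothetical project name
        version="0.1.0",
        packages=find_packages(),
        install_requires=["requests"],      # runtime dependencies
        entry_points={
            "console_scripts": ["example=example.cli:main"],
        },
    )

Options for sdist/bdist behaviour, wheel flavour (for example a universal wheel) and similar settings would then live in setup.cfg.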
how can sync clients connect to twisted server
39,492,575
2
1
89
0
python,twisted
Clients do not have to be written w/ twisted (they don't even have to be written in Python); they just have to use a protocol that your server supports.
0
1
0
0
2016-09-14T14:05:00.000
1
1.2
true
39,492,507
0
0
0
1
I'm new to Twisted. I was wondering whether I can use multiple synchronous clients to connect to a Twisted server, or do I have to make the clients use Twisted as well? Thanks in advance.
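To illustrate the answer: a plain, synchronous client only needs to speak the same protocol as the Twisted server. A hedged sketch with a hypothetical host, port and line-based protocol:

    import socket

    conn = socket.create_connection(("server.example.com", 8007))
    conn.sendall(b"hello\r\n")
    print(conn.recv(4096))
    conn.close()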
Opening Pycharm from terminal with the current path
39,500,506
17
13
33,524
0
python,bash,command-line,pycharm
PyCharm can be launched using the charm command line tool (which can be installed while getting started with PyCharm the first time). charm .
0
1
0
0
2016-09-14T22:12:00.000
6
1.2
true
39,500,438
1
0
0
1
If you type the command "atom ." in the terminal, the Atom editor opens the current folder and I am ready to code. I am trying to achieve the same with PyCharm on Ubuntu: get the current directory and open it with PyCharm as a project. Is there a way to achieve this by setting a bash alias?
Error on caravel source code build - no previously included files matching *.pyo
39,660,578
0
0
508
0
python,python-3.x,numpy
Well, I had missed installing some numpy-related libraries:

    sudo apt-get install python-numpy
    sudo apt-get install libsasl2-dev python-dev libldap2-dev libssl-dev

After installing the above packages my issue got resolved. Caravel is running fine.
0
1
0
0
2016-09-15T15:21:00.000
1
1.2
true
39,514,804
0
0
0
1
I am trying to build the source code of caravel. Following the instructions I have installed the front end dependencies using npm. on python setup.py install I am getting error: warning: no previously-included files matching '.pyo' found anywhere in distribution warning: no previously-included files matching '.pyd' found anywhere in distribution numpy/core/src/npymath/ieee754.c.src:7:29: fatal error: npy_math_common.h: No such file or directory #include "npy_math_common.h" I tried running with python3. I am running this on Ubuntu 14.04.4 LTS
Configurate Spark by given Cluster
39,530,209
0
0
35
0
java,python,scala,apache-spark,pyspark
Your question is unclear. If the data are on your local machine, you should first copy them to the cluster's HDFS filesystem. Spark can work in 3 modes (are you using YARN or Mesos?): cluster, client and standalone. What you are looking for is client mode or cluster mode; if you want to start the application from your local machine, use client mode. If you have SSH access, you are free to use both. The simplest way, if the cluster is properly configured, is to copy your code directly onto the cluster and then start the application with the ./spark-submit script, providing the class to use as an argument. It works with Python scripts and Java/Scala classes (I only use Python, so I don't really know).
0
1
0
0
2016-09-16T06:41:00.000
2
0
false
39,525,214
0
1
0
1
I have to send some applications written in Python to an Apache Spark cluster. I am given a cluster manager and some worker nodes, with the addresses to send the applications to. My question is: how do I set up and configure Spark on my local computer to send those requests, along with the data to be processed, to the cluster? I am working on Ubuntu 16.xx and have already installed Java and Scala. I have searched the internet, but most of what I find is how to build the cluster, or old advice on how to do it, which is out of date.
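As a hedged sketch of client mode from a local machine using the PySpark API (the master URL is hypothetical; with YARN you would more commonly go through spark-submit on a configured client machine):

    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setAppName("my-app")
            .setMaster("spark://cluster-manager-host:7077"))   # hypothetical standalone master URL
    sc = SparkContext(conf=conf)

    print(sc.parallelize(range(1000)).sum())
    sc.stop()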
share variables by PHP, python, bash
39,580,897
0
1
46
0
php,python,bash,variables,share
OK, I think the best approach for me here is to reduce the number of variable storages from 3 to 2 and have the Python script handle the bash tasks via os.system. Two storages are somewhat manageable.
0
1
0
1
2016-09-19T12:48:00.000
1
0
false
39,573,614
1
0
0
1
I have a project that uses the same initial variables on the same server from different programming languages: PHP, Python and bash. I need all languages to access those variables, and I cannot exclude any language. For now I keep the variables in 3 places: for PHP I have MySQL storage, and for Python and bash 2 separate files. If the initial value of any variable changes, I need to change it in 3 locations, and I want to simplify that now. Let's assume all systems can access MySQL. Is there a way to define the initial variables in MySQL instead of files? Or what is the best practice to share variables in my case?
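A hedged sketch of the two-storage idea: Python reads the shared variables from MySQL and hands them to a bash script through the environment. The driver, table, column and script names are assumptions.

    import os
    import subprocess

    import MySQLdb  # assumes the MySQL-python driver is installed

    conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="config")
    cur = conn.cursor()
    cur.execute("SELECT name, value FROM shared_vars")   # hypothetical table
    shared = {name: str(value) for name, value in cur.fetchall()}
    conn.close()

    env = dict(os.environ)
    env.update(shared)
    subprocess.call(["bash", "task.sh"], env=env)        # bash sees them as ordinary env vars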
Not able to convert cassandra blob/bytes string to integer
39,628,303
0
0
1,608
1
python,cassandra,cqlsh,cqlengine
A blob will be converted to a byte array in Python if you read it directly. That looks like a byte array containing the hex value of the blob. One way is to do the conversion explicitly in your query: select id, name, blobasint(value) from table limit 3. There should be a conversion method in the Python driver as well.
0
1
0
0
2016-09-21T09:02:00.000
1
0
false
39,611,995
0
0
0
1
I have a column-family/table in cassandra-3.0.6 which has a column named "value" which is defined as a blob data type. CQLSH query select * from table limit 2; returns me: id | name | value id_001 | john | 0x010000000000000000 id_002 | terry | 0x044097a80000000000 If I read this value using cqlengine(Datastax Python Driver), I get the output something like: {'id':'id_001', 'name':'john', 'value': '\x01\x00\x00\x00\x00\x00\x00\x00\x00'} {'id':'id_002', 'name':'terry', 'value': '\x04@\x97\xa8\x00\x00\x00\x00\x00'} Ideally the values in the "value" field are 0 and 1514 for row1 and row2 resp. However, I am not sure how I can convert the "value" field values extracted using cqlengine to 0 and 1514. I tried few methods like ord(), decode(), etc but nothing worked. :( Questions: What is this format? '\x01\x00\x00\x00\x00\x00\x00\x00\x00' or '\x04@\x97\xa8\x00\x00\x00\x00\x00'? How I can convert these arbitrary values to 0 and 1514? NOTE: I am using python 2.7.9 on Linux Any help or pointers would be useful. Thanks,
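For what it's worth, the sample rows look like a one-byte tag followed by a big-endian IEEE-754 double, which would explain the expected 0 and 1514; that layout is an inference from the question, not something guaranteed by Cassandra. A hedged sketch using the standard struct module:

    import struct

    raw = '\x04@\x97\xa8\x00\x00\x00\x00\x00'     # bytes returned by the driver for row 2
    print(struct.unpack('>d', raw[1:])[0])        # skip the 1-byte tag, read a big-endian double -> 1514.0

    raw0 = '\x01\x00\x00\x00\x00\x00\x00\x00\x00'
    print(struct.unpack('>d', raw0[1:])[0])       # -> 0.0

If the writer actually stored integers rather than doubles, the format string would need to change accordingly.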
Google App Engine Memcache Python
39,623,880
0
1
71
0
python,google-app-engine,memcached
Memcache is shared across users. It is not a cookie, but exists in RAM on the server for all pertinent requests to access.
0
1
0
0
2016-09-21T11:54:00.000
2
0
false
39,615,861
0
0
1
1
Using Google App Engine Memcache... Can more than one user access the same key-value pair? or in other words.. Is there a Memcache created per user or is it shared across multiple users?
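A minimal sketch of the shared behaviour described above, using the App Engine Python memcache API (the key, value and expiry are arbitrary):

    from google.appengine.api import memcache

    # written while handling one user's request ...
    memcache.set("greeting", "hello", time=3600)

    # ... and readable from any other request served by the same application
    value = memcache.get("greeting")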
How to install psycopg2 for django on mac
51,605,060
2
2
1,746
0
python,django,postgresql,psycopg2
Instead of pip install psycopg2 try pip install psycopg2-binary
0
1
0
0
2016-09-22T15:21:00.000
4
0.099668
false
39,642,961
1
0
0
1
I have PostgreSQL installed (with the PostgreSQL app). When I try "pip install psycopg2", I get "unable to execute gcc-4.2: No such file or directory". How do I fix this?
How to know which .whl module is suitable for my system with so many?
39,652,742
0
0
1,458
0
python,python-wheel,python-install
You don't have to know. Use pip - it will select the most specific wheel available.
0
1
0
0
2016-09-23T04:16:00.000
2
0
false
39,652,553
0
1
0
1
We have so many versions of a wheel. How can we know which version should be installed on my system? I remember there is a certain command which can check my system environment. Or are there any other ways? ---------------------Example below this line ----------- scikit_learn-0.17.1-cp27-cp27m-win32.whl scikit_learn-0.17.1-cp27-cp27m-win_amd64.whl scikit_learn-0.17.1-cp34-cp34m-win32.whl scikit_learn-0.17.1-cp34-cp34m-win_amd64.whl scikit_learn-0.17.1-cp35-cp35m-win32.whl scikit_learn-0.17.1-cp35-cp35m-win_amd64.whl scikit_learn-0.18rc2-cp27-cp27m-win32.whl scikit_learn-0.18rc2-cp27-cp27m-win_amd64.whl scikit_learn-0.18rc2-cp34-cp34m-win32.whl scikit_learn-0.18rc2-cp34-cp34m-win_amd64.whl scikit_learn-0.18rc2-cp35-cp35m-win32.whl scikit_learn-0.18rc2-cp35-cp35m-win_amd64.whl
How can I add a python script to the windows system path?
39,663,773
-1
4
5,762
0
python,python-2.7,cmd
1. Go to Environment Variables > System variables > Path > Edit. 2. It looks like this: Path C:\Program Files\Java\jdk1.8.0\bin;%SystemRoot%\system32;C:\Program Files\nodejs\; 3. Add a semicolon (;) at the end and then add C:\Python27. 4. After adding it, it looks like this: C:\Program Files\Java\jdk1.8.0\bin;%SystemRoot%\system32;C:\Program Files\nodejs\;C:\Python27;
0
1
0
0
2016-09-23T14:18:00.000
3
-0.066568
false
39,663,091
0
0
0
2
I'm using the Windows cmd to run my Python script. I want to run my Python script without having to give the cd command and the directory path. I would like to type only the name of the Python script and run it. I'm using Python 2.7.
How can I add a python script to the windows system path?
69,844,640
1
4
5,762
0
python,python-2.7,cmd
Make sure .py files are associated with the Python launcher C:\Windows\py.exe, or directly with e.g. C:\Python27\python.exe, then edit your PATHEXT environment variable using System Properties to add ;.PY at the end. You can now launch Python files in the current directory by typing their name. To be able to launch a given Python script from any directory, you can either put it in a directory that's already on the PATH, or add a new directory to PATH (I like creating a bin directory in my user folder and adding %USERPROFILE%\bin to PATH) and put it there. Note that this is more a "how do I use Windows" question rather than a Python question.
0
1
0
0
2016-09-23T14:18:00.000
3
0.066568
false
39,663,091
0
0
0
2
I'm using the Windows cmd to run my Python script. I want to run my Python script without having to give the cd command and the directory path. I would like to type only the name of the Python script and run it. I'm using Python 2.7.
Cross-compiling greenlet for linux arm target
39,798,938
0
1
141
0
python,arm
Build gevent and its dependencies on a QEMU-emulated Raspberry Pi.
0
1
0
0
2016-09-23T21:58:00.000
1
0
false
39,670,135
0
0
0
1
I want to build greenlet to use on arm32 linux box. I have an ubuntu machine, with my gcc cross-compiler for the arm target. How do I build greenlet for my target from my ubuntu machine?
Subprocess Isolation in Python
39,681,690
1
1
324
0
python,subprocess,chroot,isolation
I don't know if you have an objection to using a 3rd-party communication library for your task, but this sounds like what ZeroMQ would be used for.
0
1
0
0
2016-09-24T22:45:00.000
1
0.197375
false
39,681,631
1
0
0
1
I am currently working on a personal project where I have to run two processes simultaneously. The problem is that I have to isolate each of them (they cannot communicate with each other or with my system) and I must be able to control their stdin, stdout and stderr. Is there any way I can achieve this? Thank you!
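Independent of the ZeroMQ suggestion, the stdin/stdout/stderr control itself can be done with the standard subprocess module; a hedged sketch with hypothetical script names and sandbox directory:

    import subprocess

    children = []
    for script in ("worker_a.py", "worker_b.py"):      # hypothetical scripts to isolate
        proc = subprocess.Popen(
            ["python", script],
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            cwd="/tmp/sandbox",                        # keep them out of the project directory
            close_fds=True,
        )
        children.append(proc)

    for proc in children:
        out, err = proc.communicate(input=b"some input\n")
        print(out, err)

True isolation from the rest of the system would still need OS-level measures (separate users, chroot, containers) on top of this.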
Sublime Text 3 subl command not working in Windows 10
66,841,135
0
3
7,531
0
python,cmd,command-line,sublimetext3,command-prompt
After adding the path variable, restart your PC; this worked like a charm for me.
0
1
0
0
2016-09-25T20:31:00.000
5
0
false
39,691,547
0
0
0
3
when I run the subl command, it just pauses for a moment and doesn't give me any feedback as to what happened and doesn't open. I am currently on windows 10 running the latest sublime text 3 build. I already copied my subl.exe from my sublime text 3 directory to my system32 directory. What am I missing? I've tried subl.exe ., subl.exe detect.py, subl, subl.exe Please help me with this setup
Sublime Text 3 subl command not working in Windows 10
60,349,655
1
3
7,531
0
python,cmd,command-line,sublimetext3,command-prompt
You can add a Git Bash alias as follows: open a Git Bash terminal and type alias subl="/c/Program\ Files/Sublime\ Text\ 3/subl.exe", then try subl . from Git Bash. You can also add a permanent alias for Git Bash: go to C:\Users\[youruserdirectory]\, make a .bash_profile file, open it with a text editor and add the alias: alias subl="/c/Program\ Files/Sublime\ Text\ 3/subl.exe"
0
1
0
0
2016-09-25T20:31:00.000
5
0.039979
false
39,691,547
0
0
0
3
when I run the subl command, it just pauses for a moment and doesn't give me any feedback as to what happened and doesn't open. I am currently on windows 10 running the latest sublime text 3 build. I already copied my subl.exe from my sublime text 3 directory to my system32 directory. What am I missing? I've tried subl.exe ., subl.exe detect.py, subl, subl.exe Please help me with this setup
Sublime Text 3 subl command not working in Windows 10
63,048,909
1
3
7,531
0
python,cmd,command-line,sublimetext3,command-prompt
After adding the path environment variable, you just have to type subl.exe in the command prompt.
0
1
0
0
2016-09-25T20:31:00.000
5
0.039979
false
39,691,547
0
0
0
3
when I run the subl command, it just pauses for a moment and doesn't give me any feedback as to what happened and doesn't open. I am currently on windows 10 running the latest sublime text 3 build. I already copied my subl.exe from my sublime text 3 directory to my system32 directory. What am I missing? I've tried subl.exe ., subl.exe detect.py, subl, subl.exe Please help me with this setup
Airflow unable to execute all the dependent tasks in one go from UI
39,801,249
0
1
157
0
python,airflow
Could it be that you just need to restart the webserver and the scheduler? That is needed when you change your code, for example when adding new tasks. Please post more details and some code.
0
1
0
0
2016-09-28T12:32:00.000
1
0
false
39,747,645
0
0
0
1
My DAG has 3 tasks and we are using the Celery executor, as we have to trigger individual tasks from the UI. We are able to execute an individual task from the UI. The problem we are currently facing is that we are unable to execute all the steps of the DAG from the UI in one go, although we have set the task dependencies. We are able to execute the complete DAG from the command line, but is there any way to do the same via the UI?
Automatically running app .py in in Heroku
39,754,555
0
0
43
0
python,django,git,heroku
Not sure but try: heroku run --app cghelper python bot.py &
0
1
0
1
2016-09-28T16:45:00.000
1
0
false
39,753,285
0
0
1
1
I have created a bot for my website and I currently host it on heroku.com. I run it by executing the command heroku run --app cghelper python bot.py. This executes the command perfectly through CMD and runs that specific .py file from my GitHub repo. The issue is that when I close the cmd window, this stops bot.py. How can I get it to run automatically? Thanks.
How do I add a multiline variable in a honcho .env file?
39,755,064
1
2
1,091
0
python,environment-variables
I put '\\' at the end of the line to permit multiline values.
0
1
0
0
2016-09-28T18:28:00.000
2
0.099668
false
39,755,063
1
0
0
1
I am trying to add a multiline value for an env var in .env so that my process, run by honcho, will have access to it. Bash uses a '\' to permit multilines. But this gives errors in honcho/python code. How to do this?
Configure system default python to use anaconda's packages
41,580,566
0
0
33
0
python,python-3.4,anaconda
update-alternatives will do the trick. Simply remember to switch to Python 3.4 when you need it and to switch back after you have finished!
0
1
0
0
2016-09-28T23:47:00.000
1
1.2
true
39,759,271
1
0
0
1
I have to keep using the system's default python3.4 on a centos server. I'm wondering if there is any way to configure system's python using anaconda's python packages:P
How can I order elements in a window in python apache beam?
39,776,373
6
3
1,665
0
python,google-cloud-dataflow,dataflow,apache-beam
There is not currently built-in value sorting in Beam (in either Python or Java). Right now, the best option is to sort the values yourself in a DoFn like you mentioned.
0
1
0
0
2016-09-29T03:04:00.000
2
1.2
true
39,760,733
0
0
1
1
I noticed that Java Apache Beam has the class groupby.sortbytimestamp. Does Python have that feature implemented yet? If not, what would be the way to sort elements in a window? I figure I could sort the entire window in a DoFn, but I would like to know if there is a better way.
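A hedged sketch of the sort-in-a-DoFn approach the answer recommends, written against the newer apache_beam Python API (older Dataflow SDK releases pass a process context instead of the element) and assuming the grouped values are (timestamp, payload) tuples coming out of a GroupByKey:

    import apache_beam as beam

    class SortValuesFn(beam.DoFn):
        def process(self, element):
            key, values = element                             # output of a GroupByKey
            yield key, sorted(values, key=lambda v: v[0])     # sort by the timestamp field

    # usage inside a pipeline:
    #   ... | beam.GroupByKey() | beam.ParDo(SortValuesFn())

Note that this materializes each key's values in memory, so it only works when a single key's values fit on one worker.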
How to isolate Anaconda from system python without set the shell path
39,787,575
1
3
766
0
python,linux,shell,anaconda,spyder
You could use virtualenv: 1) create a virtual env using the Python version you need for Anaconda: virtualenv -p /usr/bin/pythonX.X ~/my_virtual_env; 2) source ~/my_virtual_env/bin/activate; 3) run Anaconda, then deactivate.
0
1
0
0
2016-09-30T05:39:00.000
2
1.2
true
39,784,418
1
0
0
1
I want to install Anaconda locally in my home directory, ~/.Anaconda3 (Arch Linux), without setting the path in the shell, because I like to keep my system Python as the default. So I would like to launch Spyder (or any other Anaconda app) as an app isolated from the system binaries. I mean that when I launch, for example, .Anaconda3/bin/spyder, it launches Spyder and this app uses Anaconda's binaries, but when I use python ThisScript.py in my shell it uses the system Python installed from packages (e.g. /bin/python). I managed to update Anaconda using .Anaconda3/bin/conda update --all in my shell without setting the Anaconda binaries path (.Anaconda/bin/), but this way, running some apps like Spyder obviously doesn't work.
Does Python's distutils set include paths for frameworks (osx) when compiling extensions?
39,820,702
0
0
34
0
python,distutils,macos-sierra
I have to pass cc -F /Library/Frameworks for clang 7.2.0 and 8.0.0. Then it can find the headers.
0
1
0
0
2016-10-02T16:38:00.000
1
1.2
true
39,819,220
0
0
0
1
I've been working on an extension module for Python but in OSX Sierra it no longer finds headers belonging to the frameworks I'm linking to. It always found them before without any special effort. Has something changed lately regarding include paths in this tool chain?
Using an existing python3 install with anaconda/miniconda
39,827,107
0
0
351
0
python,homebrew,anaconda,miniconda,homebrew-cask
Anaconda comes with python for you but do not remove the original python that comes with the system -- many of the operating system's libs depend on it. Anaconda manages its python executable and packages in its own (conda) directory. It changes the system path so the python inside the conda directory is the one used when you access python.
0
1
0
0
2016-10-03T07:48:00.000
1
0
false
39,826,735
1
0
0
1
With python3 previously installed via homebrew on macOS, I just downloaded miniconda (via homebrew cask), which brought in another full python setup, I believe. Is it possible to install anaconda/miniconda without reinstalling python? And, if so, would that be a bad idea?
after python 2.7 installation version still showing as 2.6.5
39,831,155
0
1
111
0
windows,python-2.7,windows-7
Need more explanation. For example: from where did you download the Python package binary? What was the installation path when you installed it? What was the Python 2.6.5 installation path? Is the old environment variable still present?
0
1
0
0
2016-10-03T11:14:00.000
1
0
false
39,830,272
1
0
0
1
I am using the 64-bit version of Windows 7. I recently installed Python 2.7 and was able to see the Python27 folder inside the C drive. I even updated the environment variables to use C:\Python27 and C:\Python27\Scripts. python --version gives Python 2.6.5, and which python gives /usr/bin/python. How can I update the system to use Python 2.7?
Tracing instructions with GDB Python scripting
39,838,415
0
0
502
0
gdb,gdb-python
Yes, you can do this in gdb. Rather than trying to set a breakpoint on the next instruction, you can instead use the si command to single-step to the next instruction.
0
1
0
0
2016-10-03T17:58:00.000
1
0
false
39,837,656
1
0
0
1
I am trying to write a Python script for GDB to trace a function. The idea is to set a breakpoint on an address location, let the program run and then, when it breaks, log to file registers, vectors and stack and find out what address the next instruction will be, set a breakpoint on that location and rinse and repeat. I read through the documentation and I'm pretty confident registers, vectors and memory locations can be easily dumped. The actual problem is finding what the next instruction location will be as it requires to analyze the disassembly of the current instruction to determine where the next breakpoint should be placed. Update I am doing all this without using stepi or nexti because the target I'm debugging works only with hardware breakpoints and as far as I know those commands use software breakpoints to break at the next instruction Is there anything like that in GDB?
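A hedged sketch of an si-based tracing loop using the GDB Python API (the start address, iteration count and log format are hypothetical; whether si uses the CPU's single-step facility rather than a software breakpoint depends on the target):

    import gdb

    gdb.execute("break *0x00010000")                 # hypothetical start address
    gdb.execute("run")
    with open("trace.log", "w") as log:
        for _ in range(1000):                        # trace a bounded number of instructions
            pc = gdb.parse_and_eval("$pc")
            regs = gdb.execute("info registers", to_string=True)
            log.write("%s\n%s\n" % (pc, regs))
            gdb.execute("si", to_string=True)        # step to the next instruction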
"'CXXABI_1.3.8' not found" in tensorflow-gpu - install from source
39,856,855
13
12
24,320
0
python,tensorflow
I solved this problem by copying the libstdc++.so.6 file which contains version CXXABI_1.3.8. Try run the following search command first: $ strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep CXXABI_1.3.8 If it returns CXXABI_1.3.8. Then you can do the copying. $ cp /usr/lib/x86_64-linux-gnu/libstdc++.so.6 /home/jj/anaconda2/bin/../lib/libstdc++.so.6
0
1
0
0
2016-10-04T05:30:00.000
4
1
false
39,844,772
0
1
0
1
I have re-installed Anaconda2. And I got the following error when 'python -c 'import tensorflow'' ImportError: /home/jj/anaconda2/bin/../lib/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by /home/jj/anaconda2/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so) environment CUDA8.0 cuDNN 5.1 gcc 5.4.1 tensorflow r0.10 Anaconda2 : 4.2 the following is in bashrc file export PATH="/home/jj/anaconda2/bin:$PATH" export CUDA_HOME=/usr/local/cuda-8.0 export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}} export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
Do we need to specify python interpreter externally if python script contains #!/usr/bin/python3?
39,862,114
1
2
46
0
python,c,system,interpreter
Make sure you have executable permission for python_script. You can make python_script executable with chmod +x python_script. Also check that you are giving the correct path to python_script.
0
1
0
1
2016-10-04T21:20:00.000
1
1.2
true
39,861,960
0
0
0
1
I am trying to invoke a Python script from a C application using the system() call. The Python script has #!/usr/bin/python3 on the first line. If I do system(python_script), the script does not seem to run. It seems I need to do system(/usr/bin/python3 python_script). I thought I did not need to specify the interpreter externally if I have #!/usr/bin/python3 in the first line of the script. Am I doing something wrong?
Airflow does not trigger concurrent DAGs with `LocalExecutor`
40,569,324
2
2
1,233
0
python,airflow
I ran into this issue as well using the LocalExecutor. It seems to be a limitation in how the LocalExecutor works. The scheduler ends up spawning child processes (32 in your case). In addition, your scheduler performs 20 iterations per execution, so by the time it gets to the end of its 20 runs, it waits for its child processes to terminate before the scheduler can exit. If there is a long-running child process, the scheduler will be blocked on its execution. For us, the resolution was to switch to the CeleryExecutor. Of course, this requires a bit more infrastructure, management, and overall complexity for the Celery backend.
0
1
0
0
2016-10-04T22:14:00.000
1
0.379949
false
39,862,590
0
0
0
1
I am using Airflow 1.7.1.3. I have an issue with DAG/task concurrency. When a DAG is running, the scheduler does not launch other DAGs any more. It seems that the scheduler is totally frozen (no logs anymore) ... until the running DAG is finished; then the new DAG run is triggered. My tasks are long-running ECS tasks (~10 minutes). I use the LocalExecutor and I kept the default config of parallelism=32 and dag_concurrency=16. I use airflow scheduler -n 20, reboot it automatically, and I set 'depends_on_past': False in all my DAG declarations. For information, I deployed Airflow in containers running in an ECS cluster, with max_threads = 2 and only 2 CPUs available. Any ideas? Thanks.
Best practice for sequential execution of group of tasks in Celery
39,879,475
0
0
361
0
django,celery,python-3.5
I would use a model. The user selects the tasks and orders them, creating records in the table. A celery task runs and executes the tasks from the table in the specified order.
0
1
0
0
2016-10-05T11:35:00.000
1
1.2
true
39,872,909
0
0
1
1
I have a page that allows the user to select tasks which should be executed in the selected order, one by one. So it creates a group of tasks, and a user can create several of them. For each group I should make it possible to watch the tasks' progress. I've looked into several things like chain, chord and group, but they seem very tricky to me, and I don't see any possibility to watch each task's progress. What's a good solution for this kind of problem?
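A hedged sketch of the model-driven approach described in the answer; TaskItem and execute_user_task are hypothetical names, not part of any existing app:

    from celery import shared_task

    from myapp.models import TaskItem   # hypothetical model: group_id, position, status, payload

    @shared_task
    def run_task_group(group_id):
        items = TaskItem.objects.filter(group_id=group_id).order_by("position")
        for item in items:
            item.status = "running"
            item.save(update_fields=["status"])
            execute_user_task(item.payload)          # hypothetical helper doing the real work
            item.status = "done"
            item.save(update_fields=["status"])

Because the status field is updated as each item runs, the progress page can simply read the group's rows to show which task is currently executing.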
Ansible prompt_vars error: GetPassWarning: Cannot control echo on the terminal
39,901,858
0
0
941
0
python,docker,ansible
The error GetPassWarning: Cannot control echo on the terminal is raised by Python and indicates that the terminal you are using does not provide stdin, stdout and stderr; in this case it is stderr. As there is not much information provided in the question, I guess that interactive elements like prompt_vars are being used inside a Dockerfile, which is IMHO not possible.
0
1
0
0
2016-10-06T15:40:00.000
1
0
false
39,900,282
0
0
0
1
I am using Ansible and Docker to automate the environment build process. I use prompt_vars to try to collect the username and password for the git repo, but unfortunately I got this error: GetPassWarning: Cannot control echo on the terminal. The Docker Ubuntu version is 14.04 and the Python version is 2.7.
ipyparallel displaying "registration: purging stalled registration"
39,958,173
1
0
264
0
ipython,zeromq,pyzmq,ipython-parallel
If you are using --reuse, make sure to remove the files if you change settings. It's possible that it doesn't behave well when --reuse is given and you change things like --ip, as the connection file may be overriding your command-line arguments. When setting --ip=0.0.0.0, it may be useful to also set --location=a.b.c.d where a.b.c.d is an ip address of the controller that you know is accessible to the engines. Changing the If registration works and subsequent connections don't, this may be due to a firewall only opening one port, e.g. 5900. The machine running the controller needs to have all the ports listed in the connection file open. You can specify these to be a port-range by manually entering port numbers in the connection files.
0
1
0
0
2016-10-10T09:13:00.000
1
1.2
true
39,954,942
0
0
1
1
I am trying to use the ipyparallel library to run an ipcontroller and ipengine on different machines. My setup is as follows: Remote machine: Windows Server 2012 R2 x64, running an ipcontroller, listening on port 5900 and ip=0.0.0.0. Local machine: Windows 10 x64, running an ipengine, listening on the remote machine's ip and port 5900. Controller start command: ipcontroller --ip=0.0.0.0 --port=5900 --reuse --log-to-file=True Engine start command: ipengine --file=/c/Users/User/ipcontroller-engine.json --timeout=10 --log-to-file=True I've changed the interface field in ipcontroller-engine.json from "tcp://127.0.0.1" to "tcp://" for ipengine. On startup, here is a snapshot of the ipcontroller log: 2016-10-10 01:14:00.651 [IPControllerApp] Hub listening on tcp://0.0.0.0:5900 for registration. 2016-10-10 01:14:00.677 [IPControllerApp] Hub using DB backend: 'DictDB' 2016-10-10 01:14:00.956 [IPControllerApp] hub::created hub 2016-10-10 01:14:00.957 [IPControllerApp] task::using Python leastload Task scheduler 2016-10-10 01:14:00.959 [IPControllerApp] Heartmonitor started 2016-10-10 01:14:00.967 [IPControllerApp] Creating pid file: C:\Users\Administrator\.ipython\profile_default\pid\ipcontroller.pid 2016-10-10 01:14:02.102 [IPControllerApp] client::client b'\x00\x80\x00\x00)' requested 'connection_request' 2016-10-10 01:14:02.102 [IPControllerApp] client::client [b'\x00\x80\x00\x00)'] connected 2016-10-10 01:14:47.895 [IPControllerApp] client::client b'82f5efed-52eb-46f2-8c92-e713aee8a363' requested 'registration_request' 2016-10-10 01:15:05.437 [IPControllerApp] client::client b'efe6919d-98ac-4544-a6b8-9d748f28697d' requested 'registration_request' 2016-10-10 01:15:17.899 [IPControllerApp] registration::purging stalled registration: 1 And the ipengine log: 2016-10-10 13:44:21.037 [IPEngineApp] Registering with controller at tcp://172.17.3.14:5900 2016-10-10 13:44:21.508 [IPEngineApp] Starting to monitor the heartbeat signal from the hub every 3010 ms. 2016-10-10 13:44:21.522 [IPEngineApp] Completed registration with id 1 2016-10-10 13:44:27.529 [IPEngineApp] WARNING | No heartbeat in the last 3010 ms (1 time(s) in a row). 2016-10-10 13:44:30.539 [IPEngineApp] WARNING | No heartbeat in the last 3010 ms (2 time(s) in a row). ... 2016-10-10 13:46:52.009 [IPEngineApp] WARNING | No heartbeat in the last 3010 ms (49 time(s) in a row). 2016-10-10 13:46:55.028 [IPEngineApp] WARNING | No heartbeat in the last 3010 ms (50 time(s) in a row). 2016-10-10 13:46:55.028 [IPEngineApp] CRITICAL | Maximum number of heartbeats misses reached (50 times 3010 ms), shutting down. (There is a 12.5 hour time difference between the local machine and the remote VM) Any idea why this may happen?
Error "virtualenv : command not found" but install location is in PYTHONPATH
39,977,369
11
33
102,543
0
python,python-2.7,pip
As mentioned in the comments, you've got the virtualenv module installed properly in the expected environment since python -m venv allows you to create virtualenv's. The fact that virtualenv is not a recognized command is a result of the virtualenv.py not being in your system PATH and/or not being executable. The root cause could be outdated distutils or setuptools. You should attempt to locate the virtualenv.py file, ensure it is executable (chmod +x) and that its location is in your system PATH. On my system, virtualenv.py is in the ../Pythonx.x/Scripts folder, but this may be different for you.
0
1
0
0
2016-10-10T18:35:00.000
12
1
false
39,964,635
1
0
0
5
This has been driving me crazy for the past 2 days. I installed virtualenv on my Macbook using pip install virtualenv. But when I try to create a new virtualenv using virtualenv venv, I get the error saying "virtualenv : command not found". I used pip show virtualenv and the location of the installation is "Location: /usr/local/lib/python2.7/site-packages" but I can't figure out where the executable is. I tried dozens other similar looking posts but those solutions do not work for me. Any ideas what might be going wrong here?
Error "virtualenv : command not found" but install location is in PYTHONPATH
39,972,160
85
33
102,543
0
python,python-2.7,pip
The only workable approach I could figure out (with help from @Gator_Python) was to do python -m virtualenv venv. This creates the virtual environment and works as expected. I have a custom Python installed, and maybe that's why the default approach doesn't work for me.
0
1
0
0
2016-10-10T18:35:00.000
12
1.2
true
39,964,635
1
0
0
5
This has been driving me crazy for the past 2 days. I installed virtualenv on my Macbook using pip install virtualenv. But when I try to create a new virtualenv using virtualenv venv, I get the error saying "virtualenv : command not found". I used pip show virtualenv and the location of the installation is "Location: /usr/local/lib/python2.7/site-packages" but I can't figure out where the executable is. I tried dozens other similar looking posts but those solutions do not work for me. Any ideas what might be going wrong here?
Error "virtualenv : command not found" but install location is in PYTHONPATH
54,281,271
20
33
102,543
0
python,python-2.7,pip
On macOS Mojave: first, check Python is on the path with python --version. Second, check pip is installed with pip --version; if it is not installed, brew install pip. Third, install virtualenv with sudo -H pip install virtualenv.
0
1
0
0
2016-10-10T18:35:00.000
12
1
false
39,964,635
1
0
0
5
This has been driving me crazy for the past 2 days. I installed virtualenv on my Macbook using pip install virtualenv. But when I try to create a new virtualenv using virtualenv venv, I get the error saying "virtualenv : command not found". I used pip show virtualenv and the location of the installation is "Location: /usr/local/lib/python2.7/site-packages" but I can't figure out where the executable is. I tried dozens other similar looking posts but those solutions do not work for me. Any ideas what might be going wrong here?
Error "virtualenv : command not found" but install location is in PYTHONPATH
64,741,790
1
33
102,543
0
python,python-2.7,pip
Had the same problem on Windows: the command was not found and I couldn't find the executable in the directory given by pip show. Fixed it by adding "C:\Users\{My User}\AppData\Roaming\Python\Python39\Scripts" to the PATH environment variable.
0
1
0
0
2016-10-10T18:35:00.000
12
0.016665
false
39,964,635
1
0
0
5
This has been driving me crazy for the past 2 days. I installed virtualenv on my Macbook using pip install virtualenv. But when I try to create a new virtualenv using virtualenv venv, I get the error saying "virtualenv : command not found". I used pip show virtualenv and the location of the installation is "Location: /usr/local/lib/python2.7/site-packages" but I can't figure out where the executable is. I tried dozens other similar looking posts but those solutions do not work for me. Any ideas what might be going wrong here?
Error "virtualenv : command not found" but install location is in PYTHONPATH
57,953,946
0
33
102,543
0
python,python-2.7,pip
I tried to have virtualenv at a random location and faced the same issue on an Ubuntu machine when I tried to run my 'venv'. What solved my issue was: $ virtualenv -p python3 venv. Also, instead of using $ activate, try $ source activate. If you look at the activate script (or $ cat activate), you will find the same in a comment.
0
1
0
0
2016-10-10T18:35:00.000
12
0
false
39,964,635
1
0
0
5
This has been driving me crazy for the past 2 days. I installed virtualenv on my Macbook using pip install virtualenv. But when I try to create a new virtualenv using virtualenv venv, I get the error saying "virtualenv : command not found". I used pip show virtualenv and the location of the installation is "Location: /usr/local/lib/python2.7/site-packages" but I can't figure out where the executable is. I tried dozens other similar looking posts but those solutions do not work for me. Any ideas what might be going wrong here?
"oauth2client.transport : Refreshing due to a 401" what exactly this log mean?
39,970,763
2
0
557
0
python,google-cloud-dataflow
As a general approach, you should try running the pipeline locally, using the DirectPipelineRunner on a small dataset to debug your custom transformations. Once that passes, you can use the Google Cloud Dataflow UI to investigate the pipeline state. You can particularly look at Elements Added field in the Step tab to see whether your transformations are producing output. In this particular job, there's a step that doesn't seem to be producing output, which normally indicates an issue in the user code.
0
1
0
0
2016-10-11T00:39:00.000
1
0.379949
false
39,968,806
0
0
0
1
I set up a job on Google Cloud Dataflow, and it needs more than 7 hours to complete. My Job ID is 2016-10-10_09_29_48-13166717443134662621. It didn't show any error in the pipeline; it just keeps logging "oauth2client.transport : Refreshing due to a 401". Is there any problem with my workers, or is something else wrong? If so, how can I solve it?
Using 7zip with python to create a password protected file in a given path
39,982,764
1
0
589
0
python,linux,python-2.7,permissions,permission-denied
drwxr-xr-x means that: 1] only the directory's owner can list its contents, create new files in it (elevated access) etc., 2] members of the directory's group and other users can also list its contents, and have simple access to it. So in fact you don't have to change the directory's permissions unless you know what you are doing, you could just run your script with sudo like sudo python my_script.py.
0
1
0
1
2016-10-11T16:47:00.000
2
1.2
true
39,982,491
0
0
0
1
I'm getting an error for what seems to be a permissions problem when trying to create a zip file in a specified folder. The testfolder folder has the following permissions: drwxr-xr-x 193 nobody nobody. When trying to launch the following command in Python I get the following: p = subprocess.Popen(['7z','a','-pinfected','-y','/home/John/testfolder/yada.zip'] + ['test.txt'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.7/subprocess.py", line 710, in __init__ errread, errwrite) File "/usr/local/lib/python2.7/subprocess.py", line 1327, in _execute_child raise child_exception OSError: [Errno 13] Permission denied. Any idea what's wrong with the permissions? I'm pretty new to this; my Python runs from the /usr/local/bin path.
Can Anaconda be packaged for a portable zero-configuration install?
51,868,203
7
28
32,449
0
python,anaconda,conda,portability,miniconda
Since you mentioned WinPython as an option, but said you dismissed it for being 'too large': WinPython now includes a 'Zero' version with each release that has nearly all of the bloat removed (equivalent to the relationship between Miniconda and Anaconda). I believe the folder containing the WinPython-64bit v3.6.3.0Zero release clocked in around 50-100MB.
0
1
0
0
2016-10-11T18:51:00.000
3
1
false
39,984,611
1
0
0
1
I'd like to deploy Python to non-programmers in my organization such that the install process consists entirely of syncing a directory from Perforce and maybe running one batch file that sets up environment variables. Is it possible to package up Miniconda in such a way it can be "installed" just by copying down a directory? What does its installer do? The reason for this is that I'd like to automate certain tasks for our artists by providing them with Python scripts they can run from the commandline. But I need to get the interpreter onto their machines without having to run any kind of installer or uninstaller, or any process that can fail in a non-idempotent way. A batch file that sets up env vars is fine, because it is idempotent. An installer that can fail partway through and put the workstation into a state requiring intervention to fix is not. In particular, adding a library to everyone's install should consist of my using conda on my desk, checking the ensuing directory into P4, and then letting artists pick it up automatically with their next p4 sync. I looked at WinPython, but at 1.4GB it is too large. Portable Python is defunct. We are exclusively a Windows shop, so do not need Linux- or Mac-portable solutions.
How to use Anaconda Python to execute a .py file?
39,995,712
0
29
131,482
0
python,windows,anaconda
Anaconda should add itself to the PATH variable so you can start any .py file with "python yourpythonfile.py" and it should work from any folder. Alternatively download pycharm community edition, open your python file there and run it. Make sure to have python.exe added as interpreter in the settings.
0
1
0
0
2016-10-12T09:40:00.000
8
0
false
39,995,380
1
0
0
4
I just downloaded and installed Anaconda on my Windows computer. However, I am having trouble executing .py files using the command prompt. How can I get my computer to understand that the python.exe application is in the Anaconda folder so it can execute my .py files?
How to use Anaconda Python to execute a .py file?
54,141,774
4
29
131,482
0
python,windows,anaconda
Launch JupyterLab from Anaconda (perform the following operations in JupyterLab): click on the folder icon in the side menu; start up a "Text File"; rename untitle.txt to untitle.py (the name of the opened file changes as well); start up the "terminal" (on Windows, PowerShell starts up); execute the command python untitle.py.
0
1
0
0
2016-10-12T09:40:00.000
8
0.099668
false
39,995,380
1
0
0
4
I just downloaded and installed Anaconda on my Windows computer. However, I am having trouble executing .py files using the command prompt. How can I get my computer to understand that the python.exe application is in the Anaconda folder so it can execute my .py files?
How to use Anaconda Python to execute a .py file?
68,916,916
0
29
131,482
0
python,windows,anaconda
If you get the following error: can't open file 'command.py': [Errno 2] No such file or directory Then follow this steps to fix it: Check that you are in the correct directory where the Python file is. If you are not in the correct directory, then change the current working directory with cd path. For instance: cd F:\COURSE\Files. Now that you are in the directory where your .py file is, run it with the command python app.py.
0
1
0
0
2016-10-12T09:40:00.000
8
0
false
39,995,380
1
0
0
4
I just downloaded and installed Anaconda on my Windows computer. However, I am having trouble executing .py files using the command prompt. How can I get my computer to understand that the python.exe application is in the Anaconda folder so it can execute my .py files?
How to use Anaconda Python to execute a .py file?
56,315,497
2
29
131,482
0
python,windows,anaconda
Right-click on a .py file and choose 'Open with'. Scroll down through the list of applications and click something like 'Use a different program'. Navigate to C:\Users\<username>\AppData\Local\Continuum\anaconda3, click on python.exe and then click 'OK' or 'Open'. Now when you double-click on any .py file it will be run through Anaconda's interpreter and therefore run the Python code. I presume the same would apply if you run it through the command line, but perhaps someone could correct me?
0
1
0
0
2016-10-12T09:40:00.000
8
0.049958
false
39,995,380
1
0
0
4
I just downloaded and installed Anaconda on my Windows computer. However, I am having trouble executing .py files using the command prompt. How can I get my computer to understand that the python.exe application is in the Anaconda folder so it can execute my .py files?
Apache2 server run script as specific user
40,065,573
0
0
206
0
php,python,apache
It looks like I could use suEXEC. It is an Apache module that is not installed by default because they really don't want you to use it. It can be installed via apt-get. That said, I found the real answer to my issue: heyu uses the serial ports to do its work. I needed to add www-data to the dialout group and then reboot. This removed the need to run my code as me (as I had already added myself to the dialout group a long time ago) in favor of properly changing the permissions. Thanks.
0
1
0
1
2016-10-13T01:00:00.000
2
0
false
40,010,657
0
0
1
1
I am using Ubuntu Server 12.04 to run the Apache2 web server. I am hosting several webpages, and most are working fine. One page is running a CGI script which mostly works (I have the Python code working outside Apache, building the HTML nicely). However, I am calling a home automation program (heyu) and it is returning different answers than when I run it under my user account. Is there a way I can... 1) call the heyu program from my Python script as a specific user (me) and leave the rest of the Python code and CGI code alone? 2) configure Apache2 to run the CGI code, as a whole, as me? I would like to leave all the other pages unchanged, maybe using the sites-available part. 3) at least determine which user is running the CGI code, so maybe I can get heyu to be OK with that user. Thanks, Mark.
PHP exec command is not returning full data from Python script
40,019,200
2
0
142
0
php,python,ssh,centos,exec
Perhaps it's caused by buffering of the output. Try adding the -u option to your Python command - this forces stdout, stdin and stderr to be unbuffered.
0
1
0
1
2016-10-13T10:55:00.000
1
1.2
true
40,019,042
0
0
0
1
I am connecting to a server through PHP SSH and then using exec to run a Python program on that server. If I connect to that server through PuTTY and execute the same command on the command line, I get a result like: Evaluating.... Connecting.... Retrieving data.... 1) Statement 1 2) Statement 2 . . . N) Statement N. The Python program is written by somebody else... When I connect through SSH in PHP, I can execute $ssh->exec("ls") and get the full results, just as on the server command line. But when I try $ssh->exec("python myscript.py -s statement 0 0 0"); I can't get the full results; I only get a random line as output. In general, if somebody has experienced the same issue and solved it, please let me know. Thanks.
Run a python application/script on startup using Linux
40,033,097
2
2
87
0
python,linux
You can set up the script to run via cron, configuring the time as @reboot. With Python scripts you will not need to compile anything. You might need to install the script, depending on what assumptions it makes about its environment.
0
1
0
0
2016-10-14T00:21:00.000
1
1.2
true
40,033,066
0
0
0
1
I've been learning Python for a project required for work. We are starting up a new server that will be running linux, and need a python script to run that will monitor a folder and handle files when they are placed in the folder. I have the python "app" working, but I'm having a hard time finding how to make this script run when the server is started. I know it's something simple, but my linux knowledge falls short here. Secondary question: As I understand it I don't need to compile or install this application, basically just call the start script to run it. Is that correct?
Updating Python version that's compiled from source
40,047,015
1
0
240
0
python,python-2.7,centos
Replacing 2.7.6 with 2.7.12 would be fine using the procedure you linked. There should be no real problems with libraries installed with pip or easy_install, as the version update is minor. If worst comes to worst and there is a library conflict, it would be because the Python library used for compiling may be different, and you can always reinstall the affected package, which would recompile it against the correct Python library if required. This is only problematic if the package being installed is actually compiled against the Python library; pure Python packages would not be affected. If you were doing a major version change this would be okay as well, since on CentOS you have to call Python with python2.7 and not python, so a new version would be called with python2.8.
0
1
0
0
2016-10-14T15:18:00.000
1
0.197375
false
40,046,656
1
0
0
1
I run a script on several CentOS machines that compiles Python 2.7.6 from source and installs it. I would now like to update the script so that it updates Python to 2.7.12, and I don't really know how to tackle this. Should I do this exactly the same way, just with the source code of the higher version, and it will overwrite the old Python version? Should I first uninstall the old Python version? If so, then how? Sorry if this is trivial - I tried Googling and searching through Stack, but did not find anyone with a similar problem.
Move python folder on linux
40,051,396
4
3
1,303
0
python,linux,python-3.5
Pip is a Python script. Open it and see: it begins with #!/usr/bin/python. You can either create a symbolic link at the old path pointing to the new one, or replace the shebang with the new path. You can also keep your distribution's interpreter safe by leaving it alone and putting the compiled one into a new virtualenv.
0
1
0
0
2016-10-14T20:06:00.000
1
1.2
true
40,051,205
1
0
0
1
I have compiled the Python sources with the --prefix option. After running make install, the binaries are copied to a folder in my account's home directory. I needed to rename this folder, but when I use pip after the renaming it says that it can't find the Python interpreter; it shows an absolute path to the previous location (before renaming). Using grep I found multiple references to absolute paths under the --prefix folder. I tried to override them by setting the PATH, PYTHONPATH and PYTHONHOME environment variables, but it's no better. Is there a way to compile the Python sources so that I can freely move the installation afterwards?
Installing Google Cloud SDK with Python 2.7
42,702,977
2
2
2,759
0
python-2.7,shell,google-cloud-sdk
An additional thing to add to @cherba's answer: On Windows I found CLOUDSDK_PYTHON had to be a user level variable not a system level variable. (That's the first box if you're looking at windows system environment variables.)
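For example, from a regular (non-admin) command prompt -- the interpreter path below is an assumption, point it at your own Python 2.7:

    rem setx without the /M switch writes a user-level environment variable
    setx CLOUDSDK_PYTHON "C:\Python27\python.exe"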
0
1
0
0
2016-10-17T15:32:00.000
2
0.197375
false
40,090,368
1
0
0
1
I am trying to install the Google Cloud SDK which requires Python 2.7. I have both Python 3.5 and 2.7 with Anaconda. I am given a shell script and I would like to tell the shell script to use Python 2.7. How would I do this?
How to Make uWSGI die when it encounters an error?
40,096,953
2
1
221
0
python,uwsgi,supervisord
After an hour of searching, I finally found a way to do this. Just pass the --need-app argument when starting uWSGI, or add need-app = true in your .ini file, if you run things that way. No idea why this is off by default (in what situation would you ever want uWSGI to keep running when your app has died?) but so it goes.
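For reference, a minimal ini fragment; the module and socket values are placeholders for your own settings:

    [uwsgi]
    module = myapp:app
    socket = /tmp/myapp.sock
    master = true
    # exit instead of answering "no python application found" when the app fails to load
    need-app = true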
0
1
0
0
2016-10-17T22:31:00.000
1
0.379949
false
40,096,695
0
0
1
1
I have my Python app running through uWSGI. Rarely, the app will encounter an error which makes it not be able to load. At that point, if I send requests to uWSGI, I get the error no python application found, check your startup logs for errors. What I would like to happen in this situation is for uWSGI to just die so that the program managing it (Supervisor, in my case) can restart it. Is there a setting or something I can use to force this? More info about my setup: Python 2.7 app being run through uWSGI in a docker container. The docker container is managed by Supervisor, and if it dies, Supervisor will restart it, which is what I want to happen.
How to retry before an exception with Eclipse/PyDev
40,130,262
1
1
39
0
python,eclipse,pydev
Unfortunately no, this is a Python restriction on setting the next line to be executed: it can't set the next statement after an exception is thrown (it can't even move to a different block -- i.e. if you're inside a try..except, you can't set the next statement to be outside that block). In theory you could take a look at Python itself, since it's open source, see how it handles this, and make it more general for your situation, but apart from that, what you want is not doable.
0
1
0
0
2016-10-18T13:09:00.000
1
0.197375
false
40,109,228
1
0
0
1
I am using Eclipse + PyDev, although I can break on exception using PyDev->Manage Exception Breakpoints, I am unable to continue the execution after the exception. What I would like to be able to do is to set the next statement before the exception so I can run a few commands in the console window and continue execution. If I use Eclipse -> Run -> Set Next Statement before the exception, the editor will show the next statement being where I set it but then when resuming the execution, the program will be terminated. Can this be done ?
Squish build for windows server 2008
46,297,701
0
0
55
0
python-3.x,squish
Squish binary packages generally work on any supported operating system (they have been compiled for).
0
1
0
0
2016-10-18T14:18:00.000
1
0
false
40,110,714
1
0
0
1
Does the squish build used in Windows 7 work for windows server 2008 as well? Or should I build squish separately for windows server?
Multiple instances of celerybeat for autoscaled django app on elasticbeanstalk
40,166,437
-3
13
1,251
0
python,django,celery,amazon-elastic-beanstalk,celerybeat
In case someone experiences similar issues: I ended up switching to a different queue/task framework for Django. It is called django-q and was set up and working in less than an hour. It has all the features that I needed and also better Django integration than Celery (since djcelery is no longer active). Django-q is super easy to use and also lighter than the huge Celery framework. I can only recommend it!
0
1
0
0
2016-10-19T00:48:00.000
2
1.2
true
40,120,312
0
0
1
2
I am trying to figure out the best way to structure a Django app that uses Celery to handle async and scheduled tasks in an autoscaling AWS ElasticBeanstalk environment. So far I have used only a single-instance Elastic Beanstalk environment with Celery + Celerybeat and this worked perfectly fine. However, I want to have multiple instances running in my environment, because every now and then an instance crashes and it takes a lot of time until the instance is back up, but I can't scale my current architecture to more than one instance because Celerybeat is supposed to be running only once across all instances, as otherwise every task scheduled by Celerybeat will be submitted multiple times (once for every EC2 instance in the environment). I have read about multiple solutions, but all of them seem to have issues that make them unworkable for me: Using django cache + locking: This approach is more like a quick fix than a real solution. It can't be the solution if you have a lot of scheduled tasks and you need to add code to check the cache for every task. Also, tasks are still submitted multiple times; this approach only makes sure that execution of the duplicates stops. Using the leader_only option with ebextensions: Works fine initially, but if an EC2 instance in the environment crashes or is replaced, this would lead to a situation where no Celerybeat is running at all, because the leader is only defined once at the creation of the environment. Creating a new Django app just for async tasks in the Elastic Beanstalk worker tier: Nice, because web servers and workers can be scaled independently and the web server performance is not affected by huge async workloads performed by the workers. However, this approach does not work with Celery because the worker tier SQS daemon removes messages and posts the message bodies to predefined URLs. Additionally, I don't like the idea of having a complete additional Django app that needs to import the models from the main app and needs to be separately updated and deployed whenever the tasks in the main app are modified. How do I use Celery with scheduled tasks in a distributed Elastic Beanstalk environment without task duplication? E.g. how can I make sure that exactly one Celerybeat instance is running across all instances all the time in the Elastic Beanstalk environment (even if the current instance with Celerybeat crashes)? Are there any other ways to achieve this? What's the best way to use Elastic Beanstalk's Worker Tier Environment with Django?
Multiple instances of celerybeat for autoscaled django app on elasticbeanstalk
54,745,929
1
13
1,251
0
python,django,celery,amazon-elastic-beanstalk,celerybeat
I guess you could split celery beat out into a different group. Your auto-scaling group runs multiple Django instances, but celery beat is not included in the EC2 config of that scaling group. You should have a separate set of instances (or just one) for celery beat.
0
1
0
0
2016-10-19T00:48:00.000
2
0.099668
false
40,120,312
0
0
1
2
I am trying to figure out the best way to structure a Django app that uses Celery to handle async and scheduled tasks in an autoscaling AWS ElasticBeanstalk environment. So far I have used only a single-instance Elastic Beanstalk environment with Celery + Celerybeat and this worked perfectly fine. However, I want to have multiple instances running in my environment, because every now and then an instance crashes and it takes a lot of time until the instance is back up, but I can't scale my current architecture to more than one instance because Celerybeat is supposed to be running only once across all instances, as otherwise every task scheduled by Celerybeat will be submitted multiple times (once for every EC2 instance in the environment). I have read about multiple solutions, but all of them seem to have issues that make them unworkable for me: Using django cache + locking: This approach is more like a quick fix than a real solution. It can't be the solution if you have a lot of scheduled tasks and you need to add code to check the cache for every task. Also, tasks are still submitted multiple times; this approach only makes sure that execution of the duplicates stops. Using the leader_only option with ebextensions: Works fine initially, but if an EC2 instance in the environment crashes or is replaced, this would lead to a situation where no Celerybeat is running at all, because the leader is only defined once at the creation of the environment. Creating a new Django app just for async tasks in the Elastic Beanstalk worker tier: Nice, because web servers and workers can be scaled independently and the web server performance is not affected by huge async workloads performed by the workers. However, this approach does not work with Celery because the worker tier SQS daemon removes messages and posts the message bodies to predefined URLs. Additionally, I don't like the idea of having a complete additional Django app that needs to import the models from the main app and needs to be separately updated and deployed whenever the tasks in the main app are modified. How do I use Celery with scheduled tasks in a distributed Elastic Beanstalk environment without task duplication? E.g. how can I make sure that exactly one Celerybeat instance is running across all instances all the time in the Elastic Beanstalk environment (even if the current instance with Celerybeat crashes)? Are there any other ways to achieve this? What's the best way to use Elastic Beanstalk's Worker Tier Environment with Django?
How to install openCV 2.4.13 for Python 2.7 on Ubuntu 16.04?
45,497,131
1
10
29,525
0
python-2.7,opencv,ubuntu
sudo apt-get install build-essential cmake git pkg-config
sudo apt-get install libjpeg8-dev libtiff4-dev libjasper-dev libpng12-dev
sudo apt-get install libgtk2.0-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libatlas-base-dev gfortran
sudo apt-get install python2.7-dev
sudo pip install numpy
sudo apt-get install python-opencv

Then you can have a try:

$ python
Python 2.7.6 (default, Oct 26 2016, 20:30:19)
[GCC 4.8.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv
>>> import cv2
0
1
0
0
2016-10-19T10:28:00.000
4
0.049958
false
40,128,751
0
1
0
1
I have tried a lot of online posts to install opencv but they are not working for Ubuntu 16.04. May anyone please give me the steps to install openCV 2.4.13 on it?
Celery message queue vs AWS Lambda task processing
48,643,501
24
12
7,335
0
python-2.7,amazon-web-services,nlp,celery,aws-lambda
I would like to share a personal experience. I moved my heavy-lifting tasks to AWS Lambda and I must admit that the ROI has been pretty good. For instance, one of my tasks was to generate monthly statements for the customers and then mail them to the customers as well. The data for each statement was fed into a Jinja template, which gave me the HTML of the statement. Using Weasyprint, I converted the HTML to a PDF file. Mailing those PDF statements was the last step. I researched various options for creating PDF files directly, but they didn't look feasible for me. That said, when the scale was low, i.e. when the number of customers was small, celery was wonderful. However, during this task I observed that CPU usage went high. I would add this task to the celery queue for each of the customers, and the celery workers would pick up the tasks and execute them. But when the scale went up, celery didn't turn out to be a robust option. CPU usage was pretty high (I don't blame celery for it, but that is what I observed). Celery is still good though. But do understand that with celery you can face scaling issues. Vertical scaling may not help you, so you need to scale horizontally as your backend grows to get good performance from celery. When there are a lot of tasks waiting in the queue and the number of workers is limited, naturally a lot of tasks have to wait. So in my case, I moved this CPU-intensive task to AWS Lambda: I deployed a function that would generate the statement PDF from the customer's statement data and mail it afterwards. Immediately, AWS Lambda solved our scaling issues. Secondly, since this was more of a periodic task rather than a daily one, we didn't need to run celery every day. The Lambda would launch whenever needed, but wouldn't run when not in use. Besides, this function was written in NodeJS, since the npm package I found turned out to be more efficient than the solution I had in Python. So Lambda is also advantageous because you can take advantage of various programming languages, yet your core may stay unchanged. Also, I personally think that Lambda is quite cheap, since the free tier offers a lot of compute time per month (GB-seconds). Additionally, the underlying servers on which your Lambdas run are kept updated with the latest security patches as and when they become available. As you can see, my maintenance cost has dropped drastically. AWS Lambdas scale as per need. Plus, they serve as a good fit for tasks like real-time stream processing, heavy data-processing tasks, or tasks that are very CPU intensive.
0
1
0
1
2016-10-21T09:50:00.000
1
1.2
true
40,173,481
0
0
1
1
Currently I'm developing a system to analyse and visualise textual data based on NLP. The backend (Python + Flask + AWS EC2) handles the analysis and uses an API to feed the result back to a frontend (Flask + D3 + Heroku) app that solely handles interactive visualisations. Right now the analysis in the prototype is a basic Python function, which means that on large files the analysis takes longer and thus results in a request timeout during the API data bridging to the frontend. Also, the analysis of many files is done in a linear, blocking queue. So to scale this prototype, I need to modify the Analysis(text) function to be a background task so it does not block further execution and can do a callback once the function is done. The input text is fetched from AWS S3 and the output is a relatively large JSON document, which is also meant to be stored in AWS S3, so the API bridge will simply fetch this JSON that contains the data for all the graphs in the frontend app. (I find S3 slightly easier to handle than creating a large relational database structure to store persistent data.) I'm doing simple examples with Celery and find it a fitting solution; however, I just did some reading on AWS Lambda, which on paper seems like a better solution in terms of scaling... The Analysis(text) function uses a pre-built model and functions from relatively common NLP Python packages. Given my lack of experience in scaling a prototype, I'd like to ask for your experiences and judgement of which solution would be most fitting for this scenario. Thank you :)
Can i use Docker for creating exe using pyinstaller
40,177,468
-7
5
6,314
0
python,windows,macos,docker,pyinstaller
I don't think so. Your Docker container will be a Linux system. Whenever you run it, whether you are on Windows, Mac or Linux, it is still running in a Linux environment, so the result will not be a Windows- or Mac-compatible binary. I don't know Python well, but if you can't make a Windows binary from Linux directly, you will not be able to do so in a container either.
0
1
0
0
2016-10-21T13:01:00.000
2
1.2
true
40,177,368
1
0
0
1
I am supposed to create an executable for Windows, Mac and Linux. However, I don't have a Windows machine for the time being, and I don't have a Mac at all. I do have a Linux machine, but I don't want to change the partitioning or even create a dual boot with Windows. I have created an application using Python and am making my executable using pyinstaller. If I make use of Docker (install images of Windows and Mac on Linux), will I be able to create executables for Windows and Mac with all dependencies (like all the .dll files for Windows, and whatever the equivalent is for Mac)?
use python 3.4 instead of python 3.5
49,835,752
2
0
585
0
python-3.x,python-3.4,python-3.5
Variant A: Run the script as python3.4 /path/to/script. Variant B: Change the shebang to #!/usr/bin/python3.4
0
1
1
0
2016-10-23T15:03:00.000
1
1.2
true
40,204,380
0
0
0
1
I have a script that I found on the internet that worked in Python 3.4, but not Python 3.5. I'm not too familiar with Python, but it has the #!/usr/bin/env python3 shebang at the top of the file. It also throws this exception when I try to run it:

Traceback (most recent call last):
  File "/home/username/folder/script.py", line 18, in <module>
    doc = opener.open(url)
  File "/usr/lib/python3.5/urllib/request.py", line 472, in open
    response = meth(req, response)
  File "/usr/lib/python3.5/urllib/request.py", line 582, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python3.5/urllib/request.py", line 504, in error
    result = self._call_chain(*args)
  File "/usr/lib/python3.5/urllib/request.py", line 444, in _call_chain
    result = func(*args)
  File "/usr/lib/python3.5/urllib/request.py", line 968, in http_error_401
    url, req, headers)
  File "/usr/lib/python3.5/urllib/request.py", line 921, in http_error_auth_reqed
    return self.retry_http_basic_auth(host, req, realm)
  File "/usr/lib/python3.5/urllib/request.py", line 931, in retry_http_basic_auth
    return self.parent.open(req, timeout=req.timeout)
  File "/usr/lib/python3.5/urllib/request.py", line 472, in open
    response = meth(req, response)
  File "/usr/lib/python3.5/urllib/request.py", line 582, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python3.5/urllib/request.py", line 510, in error
    return self._call_chain(*args)
  File "/usr/lib/python3.5/urllib/request.py", line 444, in _call_chain
    result = func(*args)
  File "/usr/lib/python3.5/urllib/request.py", line 590, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)

Python isn't really my preferred language, so I don't know what to do. This is a script that's supposed to access my Gmail account and pull new mails from it. Do you guys have any suggestions? I'm using Arch Linux, if that helps.
sublime text 3: How to open command console with correct python version
40,207,826
0
0
230
0
python,cmd,console,sublimetext3
Tools > Build System > New Build System. You can set it up there and choose your Python binary.
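If you go the build-system route, a minimal .sublime-build file could look like this (the interpreter path is an assumption -- point it at your own python2):

    {
        // run the current file with the Python 2 interpreter instead of the default python
        "cmd": ["/usr/bin/python2", "-u", "$file"],
        "file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
        "selector": "source.python"
    }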
0
1
0
1
2016-10-23T21:04:00.000
1
0
false
40,207,785
0
0
0
1
I have multiple Python versions installed on Ubuntu. I've set up Sublime Text 3 (ST3) to run Python 2 code successfully, but I would also like the option to run Python code using the command console during debugging. However, I found that when I open the cmd console, the Python version it uses is not the same as the Python I'm using to build my code. To be exact, the cmd console calls python3, while I would like it to use python2. Is there any way to set which default Python the cmd console calls? Thanks.
How to install pydotplus for Python 3.5 on Windows64
56,009,752
1
2
6,170
0
python,windows,python-3.x,conda,pydot
Try running anaconda prompt as 'administrator', then use: conda install -c conda-forge pydotplus
0
1
0
0
2016-10-23T23:04:00.000
4
0.049958
false
40,208,695
1
0
0
1
What is a proven method for installing pydotplus for Python 3.5 on a 64-bit Windows(10) system? So far I haven't had any luck using conda or a number of other approaches. It appears there are several viable options for both Linux Ubuntu and Windows for Python 2.7. Unfortunately it's necessary for me to use this particular configuration, so any suggestions would be greatly appreciated!
Why can't I install cPickle | Pip needs upgrading?
40,224,304
2
0
4,792
0
python,pycharm,pickle
As suggested in the comments, this is most likely because Python is not added to your environment variables. If you do not want to touch your environment variables, and assuming your Python is installed in C:\Python35\: Navigate to C:\Python35\ in Windows Explorer. Go to the address bar and type cmd to open a command prompt in that directory. Alternatively to those two steps, open a command prompt directly and cd to your Python installation directory (default: C:\Python35). Type python -m pip install pip --upgrade there.
0
1
0
0
2016-10-24T17:07:00.000
1
1.2
true
40,223,807
1
0
0
1
When trying to install cPickle using PyCharm I get this: Command "python setup.py egg_info" failed with error code 1 in C:\Users\Edwin\AppData\Local\Temp\pycharm-packaging\cpickle You are using pip version 7.1.2, however version 8.1.2 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. So then when I go to the command prompt and type: python -m pip install --upgrade pip I get this: 'python' is not recognized as an internal or external command, operable program or batch file. So how do I install cPickle? BTW: I am using Windows and Python 3.5.1
Using wingIDE with a new module (not recognized)
40,230,900
0
1
313
0
python-3.x,python-imaging-library,wing-ide
Most likely Wing is using a different Python than the one you installed Pillow into. Try this in the Python interpreter you use in the terminal: import sys; sys.executable Then set the Python Executable in Wing's Project Properties to the full path of that executable (or in Wing 101 this is set in the Configure Python dialog from the Edit menu). You'll need to restart any debug process and/or the Python Shell from its Options menus.
0
1
0
0
2016-10-25T02:47:00.000
1
0
false
40,230,578
1
0
0
1
Using my terminal, the code "from PIL import Image" works perfectly and is recognized by my computer. This allows me to get images using the path address. Here is my issue, when I open wingIDE and try the same code...this module isn't recognized. Is anyone familiar with wingIDE that can help me? I would assume PyCharm people might have the same issue with possibly a similar fix, any advice?? Thanks, Adam
PyInstaller Not Work Python 3.5
40,260,017
0
3
571
0
python,pyinstaller
Try completely uninstalling Python and then re-installing it. Programming can be crazy complicated, but sometimes it's as simple as that.
0
1
0
0
2016-10-26T08:15:00.000
1
0
false
40,257,045
1
0
0
1
I want to turn myProgram.py into an executable program. When I run: pyinstaller --onefile --windowed myProgram.py I get this error: OSError: Python library not found: .Python, libpython3.5.dylib, Python This would mean your Python installation doesn't come with proper library files. This usually happens by missing development package, or unsuitable build parameters of Python installation. How can I fix the problem?
pip.exe has stopped working
40,283,551
0
0
2,538
0
python,pip
I had the same problem before and the solution is quite simple. First try updating pip via the command: pip install --upgrade pip If that doesn't work, try uninstalling the current version of Python and reinstalling the newest version. Note 1: Do not just delete the install files and files on your C drive - uninstall everything properly: packages and anything else that might cause problems. In particular, remove old Python packages and extensions; they might not work with the newest Python version, and that might be the problem. You can check on the Python website which packages and extensions are supported. Note 2: Do not, and I repeat, DO NOT install .msi or .exe extensions - they don't work anymore; always use .whl (wheel) files. If you have a .msi or .exe install, remove it from your system completely; that also means uninstalling it from the command prompt. Note 3: Always check that the .whl is compatible with your Python version. Note 4: Also, don't forget to save your projects before doing anything. Hope that works :D Happy Coding.
0
1
0
0
2016-10-27T10:53:00.000
1
1.2
true
40,282,779
1
0
0
1
I know this question has been asked a few times before, but none of the answers I've read have managed to solve my problem. When I try to run any of the following - easy_install, pip, pip3 - I get an error saying "pip.exe has stopped working". It was working for me previously (the last time I used it was probably a month ago), but not anymore. I'm using Python 3.4.4, and I checked the PATH and it's configured correctly. Does anyone know what else might be causing the issue?
How to efficiently fan out large chunks of data into multiple concurrent sub-processes in Python?
40,400,385
1
0
506
0
python,multiprocessing,shared-memory,python-multiprocessing
In Unix this might be tractable because fork() is used for multiprocessing, but in Windows the fact that spawn() is the only way it works really limits the options. However, this is meant to be a multi-platform solution (which I'll use mainly in Windows), so I am working within that constraint. I could open the data source in each subprocess, but depending on the data source that can be expensive in terms of bandwidth, or prohibitive if it's a stream. That's why I've gone with the read-once approach. Shared memory via mmap and an anonymous memory allocation seemed ideal, but passing the object to the subprocesses would require pickling it - and you can't pickle mmap objects. So much for that. Shared memory via a cython module might or might not be possible, but it's almost certainly prohibitive - and it raises the question of using a language more appropriate to the task. Shared memory via the shared Array and RawArray functionality was costly in terms of performance. Queues worked the best - but the internal I/O, due to what I think is pickling in the background, is prodigious. However, the performance hit for a small number of parallel processes wasn't too noticeable (this may be a limiting factor on faster systems though). I will probably re-factor this in another language, a) for the experience, and b) to see if I can avoid the I/O demands the Python Queues are causing. Fast memory caching between processes (which I hoped to implement here) would avoid a lot of redundant I/O. While Python is widely applicable, no tool is ideal for every job and this is just one of those cases. I learned a lot about Python's multiprocessing module in the course of this! At this point it looks like I've gone as far as I can go with standard CPython, but suggestions are still welcome!
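For reference, a stripped-down sketch of the queue approach described above -- one queue per worker so every worker sees every block (the file name, block size and worker count are arbitrary placeholders, and process() stands in for the real per-worker computation):

    import multiprocessing as mp

    def process(block, worker_id):
        # placeholder for the real, worker-specific computation on the block
        pass

    def worker(worker_id, q):
        while True:
            block = q.get()
            if block is None:              # sentinel: no more data
                q.task_done()
                break
            process(block, worker_id)
            q.task_done()

    if __name__ == '__main__':
        n_workers = 3                                       # arbitrary
        queues = [mp.JoinableQueue(maxsize=2) for _ in range(n_workers)]
        procs = [mp.Process(target=worker, args=(i, q))
                 for i, q in enumerate(queues)]
        for p in procs:
            p.start()
        with open('data.bin', 'rb') as f:                   # placeholder data source
            while True:
                block = f.read(8 * 1024 * 1024)             # ~8 MB per block
                if not block:
                    break
                for q in queues:                            # fan the same block out to every worker
                    q.put(block)
                for q in queues:                            # block counts as processed once all are done
                    q.join()
        for q in queues:
            q.put(None)
        for p in procs:
            p.join()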
0
1
0
0
2016-10-30T02:04:00.000
1
1.2
true
40,325,437
1
0
0
1
[I'm using Python 3.5.2 (x64) in Windows.] I'm reading binary data in large blocks (on the order of megabytes) and would like to efficiently share that data into 'n' concurrent Python sub-processes (each process will deal with the data in a unique and computationally expensive way). The data is read-only, and each sequential block will not be considered to be "processed" until all the sub-processes are done. I've focused on shared memory (Array (locked / unlocked) and RawArray): Reading the data block from the file into a buffer was quite quick, but copying that block to the shared memory was noticeably slower. With queues, there will be a lot of redundant data copying going on there relative to shared memory. I chose shared memory because it involved one copy versus 'n' copies of the data). Architecturally, how would one handle this problem efficiently in Python 3.5? Edit: I've gathered two things so far: memory mapping in Windows is cumbersome because of the pickling involved to make it happen, and multiprocessing.Queue (more specifically, JoinableQueue) is faster though not (yet) optimal. Edit 2: One other thing I've gathered is, if you have lots of jobs to do (particularly in Windows, where spawn() is the only option and is costly too), creating long-running parallel processes is better than creating them over and over again. Suggestions - preferably ones that use multiprocessing components - are still very welcome!
Windows 8 Python doesn't work
67,953,206
0
0
167
0
python
I faced the same problem, and I found the missing files in a directory under C:\Windows\WinSxS; just search for the required file and then paste all the files from that directory into C:\Windows\System32. That solved the problem for me.
0
1
0
0
2016-10-30T07:56:00.000
3
0
false
40,327,068
1
0
0
1
I installed Python 3.5.2 on Windows 8.1 by executing the python-3.5.2-amd64.exe installer. Nothing bad happened. I was looking for the Python35 folder in C:\, but it is actually in C:\Users\USER\AppData\Local\Programs\Python\Python35 I opened python.exe and I got an error: api-ms-win-crt-runtime-l1-1-0.dll is missing. How can I make it work? I already have Microsoft Visual C++ 2012 Redistributable (x64) - 11.0.50727 installed, and so on. Thank you in advance.
Python Pyro running on Linux to open a COM object on a remote windows machine, is it possible?
40,347,576
0
1
433
0
python,linux,windows,com,pyro
Yes, this is a perfect use case for Pyro: create a platform-independent wrapper around your COM access code. I assume you have some existing Python code (using ctypes or pywin32?) that is able to invoke the COM object locally? You wrap that in a Pyro interface class and expose it to your Linux box. I think the only gotcha is that you have to make sure you call pythoncom.CoInitialize() properly in your Pyro server class to deal with the multithreading in the server, or use the non-threaded multiplex server.
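A rough sketch of such a wrapper on the Windows side, using Pyro4 and pywin32; the COM ProgID ("Excel.Application") and the exposed method are purely illustrative placeholders, not your actual COM object:

    import pythoncom
    import win32com.client
    import Pyro4

    @Pyro4.expose
    class ComProxy(object):
        def get_value(self, cell):
            # COM must be initialised in the thread Pyro calls us from
            pythoncom.CoInitialize()
            try:
                app = win32com.client.Dispatch("Excel.Application")
                return app.ActiveSheet.Range(cell).Value
            finally:
                pythoncom.CoUninitialize()

    daemon = Pyro4.Daemon(host="0.0.0.0", port=9090)   # listen for the Linux client
    uri = daemon.register(ComProxy, objectId="com.proxy")
    print(uri)
    daemon.requestLoop()

On the Linux side, a Pyro4.Proxy(uri) call from your Django code then looks like a plain Python method call.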
0
1
0
1
2016-10-31T02:34:00.000
1
0
false
40,335,862
0
0
1
1
I have a project that requires the usage of COM objects running on a windows machine. The machine running the Python Django project is on a Linux box. I want to use Pyro and the django App to call COM objects on the remote windows machine. Is it possible? Any suggestion is appreciated?
Anaconda not found in ZSh?
56,141,998
0
54
80,553
0
python,macos,ipython,anaconda,zsh
I had a similar issue after I installed anaconda3 on Ubuntu. This is how I solved it: 1) I changed to bash, and Anaconda worked. 2) I changed back to zsh, and Anaconda still worked. I don't know why, but I think you can give it a try.
0
1
0
0
2016-11-02T00:01:00.000
13
0
false
40,370,467
1
0
0
2
I installed Anaconda via the command line, using the bash installer file. If I'm in bash, I can open and use Anaconda tools like notebooks, ipython, etc. If I change my shell to zsh, all the Anaconda commands show up as "not found". How can I make it work in zsh? I use a Mac with OS X Sierra. Thanks in advance,
Anaconda not found in ZSh?
62,925,457
3
54
80,553
0
python,macos,ipython,anaconda,zsh
From their docs (This worked for me): If you are on macOS Catalina, the new default shell is zsh. You will instead need to run source <path to conda>/bin/activate followed by conda init zsh. For my specific installation (Done by double clicking the installer), this ended up being source /opt/anaconda3/bin/activate
0
1
0
0
2016-11-02T00:01:00.000
13
0.046121
false
40,370,467
1
0
0
2
I installed Anaconda via the command line, using the bash installer file. If I'm in bash, I can open and use Anaconda tools like notebooks, ipython, etc. If I change my shell to zsh, all the Anaconda commands show up as "not found". How can I make it work in zsh? I use a Mac with OS X Sierra. Thanks in advance,
Screen command and python
40,372,327
2
1
5,021
0
python,linux
Solved! Here's how: 1) In terminal, after SSHing into the remote machine, type 'which python' (thanks @furas!). This gives path/to/Canopy/python 2) In terminal, type 'screen path/to/Canopy/python program.py' to run the desired program (called program.py) in the Canopy version of python.
0
1
0
0
2016-11-02T03:19:00.000
2
1.2
true
40,371,956
1
0
0
1
I want to run a process (a python program) on a remote machine. I have both Canopy and Anaconda installed. After I SSH into the remote machine, if I type 'python', I get the python prompt - the Canopy version. If I type 'screen', hit 'enter', then type 'python', I get the python prompt - the Anaconda version. I want to use the Canopy version when I'm in 'screen'. How can I do so?
pip3 stopped installing executables into /usr/local/bin
40,386,891
1
1
1,216
0
python,macos,pip
Resolved the problem. It turns out that this is Homebrew's behavior. I must have recently run brew upgrade and it installed a newer version of python3. It seems that something went wrong with re-linking the new python3, so the binaries for all new installs ended up somewhere deep in /usr/local/Cellar/python3. I expect that re-linking python3 would solve this, but I ended up removing all versions of python3 and reinstalling. After that, all I had to do was re-install any and all packages that had binary files in them. Not sure if this is the intended behavior or a bug in the python3 package.
0
1
0
0
2016-11-02T16:14:00.000
1
1.2
true
40,384,700
1
0
0
1
Suddenly, my pip install commands stopped installing binaries into /usr/local/bin. I tried to upgrade pip to see if that might be the problem, it was up to date and a forced re-install deleted my /usr/local/pip3 and didn't install it back, so now I have to use python3 -m pip to do any pip operations. I am running OS X Sierra with the latest update (that is the main thing that changed, so I think the OS X upgrade might have caused this) with python3 installed by homebrew. How do I fix this? Edit: I am still trying to work this out. python3 -m pip show -f uwsgi actually shows the uwsgi binary as being installed to what amounts to /usr/local/bin (it uses relative paths). Yet the binary is not there and reinstalling doesn't put it there and doesn't produce any errors. So either pip records the file in its manifest, but doesn't actually put it there or the OS X transparently fakes the file creation (did Apple introduce some new weird security measures?)
Communicating between multiple pox controllers
43,265,462
0
0
1,009
0
python,sdn,pox
POX is not a distributed controller. I would really recommend you migrate immediately to ONOS or OpenDaylight. You would then implement your solution on top of ONOS.
0
1
1
0
2016-11-02T16:17:00.000
2
0
false
40,384,775
0
0
0
1
I am developing load balancing between multiple controllers in SDN. Once the load is calculated on controller-1, I need to migrate some part of it to controller-2. I have created the topology using Mininet and am running 2 remote POX controllers, one on 127.0.0.1:6633 and the other on 127.0.0.1:6634. How do I communicate between these controllers? How can I send the load information of controller-1 to controller-2 and migrate some flows there?
Running R Models using rpy2 interface on Docker. I am facing issue related to opening the port
41,900,410
0
0
59
0
python,r,docker,rpy2
I finally resolved this problem myself. It is a problem very specific to my Python script. In the R commands called from Python, I just needed to change the TBATS and BATS function calls. (This is a very specific problem, only relevant if someone works with the R time series library.)
0
1
0
0
2016-11-03T16:59:00.000
1
0
false
40,407,263
0
0
1
1
First, I ran the R models on a Windows system using the rpy2 Python interface, and it ran fine. Then I migrated it to a Linux environment using Docker. Now I'm executing the same code with the docker run command and I'm facing "rpy2.rinterface.RRuntimeWarning: port 11752 cannot be opened". Note: my application runs four R models using rpy2, which means it creates four R objects. So I think they are using the same port at the same time; however, I'm not sure. Help with this issue would be really appreciated. Thanks in advance.
Transfer Data from Click Event Between Bokeh Apps
40,429,660
1
0
157
0
javascript,python,cookies,bokeh
The cookies idea might work fine. There are a few other possibilities for sharing data: a database (e.g. redis or something else that can trigger async events the app can respond to); direct communication between the apps (e.g. with zeromq or similar) - the Dask dashboard uses this kind of communication between remote workers and a bokeh server; files and timestamp monitoring, if there is a shared filesystem (not great, but sometimes workable in very simple cases). Alternatively, if you can run both apps on the same single server (even though they are separate apps) then you could probably communicate by updating some mutable object in a module that both apps import. But this would not work in a scale-out scenario with more than one Bokeh server running. For any/all of these somewhat advanced usages, a working example would make a great contribution to the docs so that others can learn from it.
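As a sketch of the database option using redis (the key name and payload layout are made up for illustration):

    import json
    import redis

    r = redis.StrictRedis(host='localhost', port=6379, db=0)

    # dashboard app: store the clicked customer when the tap callback fires
    def on_point_clicked(account_id, lat, lon):
        r.set('last_clicked_account',
              json.dumps({'account_id': account_id, 'lat': lat, 'lon': lon}))

    # search app: read the latest click, e.g. from a periodic callback
    def read_last_click():
        raw = r.get('last_clicked_account')
        return json.loads(raw) if raw else None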
0
1
0
0
2016-11-04T15:03:00.000
1
1.2
true
40,425,856
0
0
1
1
I have two Bokeh apps (on Ubuntu \ Supervisor \ Nginx), one that's a dashboard containing a Google map and another that's an account search tool. I'd like to be able to click a point in the Google map (representing a customer) and have the account search tool open with info from that point. My problem is that I don't know how to get the data from A to B in the current framework. My ideas at the moment: Have an event handler for the click and have it both save a cookie and open the account web page. Then, have some sort of js that can read the cookie and load the account. Or, throw my hands up, try to put both apps together and just find a way to pass it in the back end.
Extension for Python
40,432,570
-1
1
38
0
python
Essentially, system calls interact with the underlying system services (that is, the kernel on Linux). C functions, on the other hand, run exclusively in user space. In that sense a system call is more "special".
0
1
0
0
2016-11-04T22:20:00.000
1
-0.197375
false
40,432,499
1
0
0
1
It is written in the documentation: Such extension modules can do two things that can't be done directly in Python: they can implement new built-in object types, and they can call C library functions and system calls. Syscalls: I cannot see why "system calls" are special here. I know what a syscall is; I just don't see why it is special and why it cannot be done directly in Python. In particular, we can use open in Python to open a file. There must be an underlying syscall to get a descriptor for the file (on Unix systems). It was just open. Besides that we can use: call(["ls", "-l"]) and it must also use a syscall like execve or something like that. Functions: Why is calling a C library function special? After all: ctypes is a foreign function library for Python. It provides C compatible data types, and allows calling functions in DLLs or shared libraries. It can be used to wrap these libraries in pure Python.
Running flask app on DigitalOcean: should I keep the ssh console open all the time?
40,433,397
0
1
282
0
python,ssh,flask,digital-ocean
You needn't keep the console open; the app will still be running after you close the console on your computer. But you may want to set up a log to monitor it.
0
1
0
1
2016-11-04T23:45:00.000
3
0
false
40,433,243
0
0
1
1
I created a droplet that runs a flask application. My question is when I ssh into the droplet and restart the apache2 server, do I have to keep the console open all the time (that is I should not shut down my computer) for the application to be live? What if I have a dynamic application that runs scripts in the background, do I have to keep the console open all the time for the dynamic parts to work? P.S: there's a similar question in SO about a NodeJs app but some parts of the answer they provided are irrelevant to my Flask app.
Using the inline Debugger of PyCharm when running a bash-Script (.sh) within the PyCharm Terminal
40,469,976
0
2
1,370
0
python,bash,shell,debugging,pycharm
Does the script need to be run from bash? If not, you could add a new Python run configuration (Run -> Edit configurations...). This can be run in PyCharm's debug mode and will stop at breakpoints defined in the GUI. Rather than having to use set_trace, you can toggle the 'Show Python Prompt' button in the console view to get a prompt so you can interact with the program at the breakpoint.
0
1
0
0
2016-11-07T15:28:00.000
1
1.2
true
40,468,809
1
0
0
1
I wonder if there is any possibility to use the PyCharm inline Debugger when I run a script out of the Terminal. So I hope to do the following: Set breakpoint in PyCharm Editor Run ./script.sh -options from Terminal When script.sh calls a pyfile.py Python script with the breakpoint in it, it should stop there Giving me the possibility to use the visual debugging features of PyCharm The above does not work right now. My only chance is to do: import pdb pdb.set_trace() Then I could work with the pdb - but clearly I don't want to miss the great visual capabilities of the PyCharm Debugger. I saw that PyCharm uses pydevd instead of pdb. Is there maybe a similar possibility to invoke pydevd and work with the visual debugging then? Thank you for your help in advance. Best regards, Manuel
Google App Engine ndb memcache when to use memcache
40,473,578
0
0
148
0
python,google-app-engine
One case would be inside a transaction in which you want to read some related entity values but don't care whether you access those particular entities consistently (in the context of that transaction). In such a case, reading from the datastore would unnecessarily include those related entities in the transaction, which contributes to datastore contention and could potentially make you exceed various per-transaction limits. Reading memcached values for those related entities instead does not include the entities in the transaction itself. Now, I'm not 100% certain whether this is applicable to ndb's memcache copy of an entity (I don't even know how to access that); I used my own memcache copies of such entities, updated whenever I modify those entities.
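A rough sketch of what I mean with my own memcache copies; the Order/Customer model names and key scheme are made up, and the cached copy is assumed to be refreshed elsewhere whenever the Customer entity changes:

    from google.appengine.api import memcache
    from google.appengine.ext import ndb

    @ndb.transactional
    def update_order(order_key):
        order = order_key.get()                      # the entity this transaction is really about
        # read a cached copy of the related entity; this does not pull it into the transaction
        customer = memcache.get('customer:%s' % order.customer_id)
        if customer is not None:
            order.customer_name = customer.name      # use the (possibly slightly stale) cached values
        order.put()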
0
1
0
0
2016-11-07T17:27:00.000
1
1.2
true
40,471,023
0
0
1
1
If reads/writes to the ndb datastore are automatically cached, both in-context and via memcache, in what cases would you want to call the memcache API directly (in the context of the datastore)? To elaborate, would I ever need to set the memcache for a particular datastore read/write and get reads from memcache instead of the datastore directly?
ways to avoid previous reload tornado
40,517,136
0
0
47
0
python,angularjs,templates,tornado
This is not really a Tornado question, as this is simply how the Web works. One possible solution is to have only one form, but display its fields so that they look like two forms; in addition, have two separate submit buttons, each with its own name and value. Now, when you click either button the whole form will be submitted, but in the handler you can process only the fields associated with the clicked button, while still displaying values in all the fields.
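A minimal sketch of the handler side of that idea; the field and button names are invented, and the HTML would have two submit buttons sharing name="action" with values "form1" and "form2":

    import tornado.web

    class CombinedFormHandler(tornado.web.RequestHandler):
        def post(self):
            # both visual "forms" post here; the clicked button says which one to process
            action = self.get_argument("action")
            field1 = self.get_argument("field1", "")
            field2 = self.get_argument("field2", "")
            if action == "form1":
                # ... process only the form 1 fields here ...
                pass
            else:
                # ... process only the form 2 fields here ...
                pass
            # re-render the page with every submitted value, so neither form is wiped
            self.render("index.html", field1=field1, field2=field2)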
0
1
0
0
2016-11-08T08:31:00.000
1
0
false
40,482,242
0
0
1
1
I have two forms: when I submit form#1 I get some corresponding file, but when I then submit form#2, its corresponding file gets shown and form#1 goes empty. So basically I want something like a SPA (e.g. Angular), but I am handling form#1 and form#2 as separate request routes and each renders my index.html every time, so form#2 is wiped out when I submit form#1 and vice versa. I don't want working code, just ideas on how to do that with Tornado (not Angular, or say Tornado + Angular?). I think one way, for example, is to handle these requests via a controller and do an AJAX post to the corresponding Tornado handler, which, after the file is rendered, serves that very file back again. But this uses AngularJS as a SPA. Is any other solution possible? Thanks in advance
can`t upgrade pip to the newest version 9.0.1 (OS:ubuntu 16.04LTS)
47,112,240
1
25
84,686
0
python,ubuntu,pip
I could not install pip 9 for python3 on Ubuntu 16 with pip or pip3. Solved by:

sudo apt-get upgrade python3-pip
(you may need to run apt update first)

pip3 -V
pip 9.0.1 from /home/roofe/.local/lib/python3.5/site-packages (python 3.5)

Note: the following command only successfully upgrades pip for python2:

roofe@utnubu:~$ pip install --upgrade pip
Collecting pip
  Downloading pip-9.0.1-py2.py3-none-any.whl (1.3MB)
    100% |################################| 1.3MB 14kB/s
Installing collected packages: pip
Successfully installed pip-9.0.1

Trying to upgrade pip3 as a package named pip3 does not work:

roofe@utnubu:~$ pip3 install --upgrade pip3
Collecting pip3
  Could not find a version that satisfies the requirement pip3 (from versions: )
No matching distribution found for pip3
You are using pip version 8.1.1, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.

roofe@utnubu:~$ pip install --upgrade pip3
Collecting pip3
  Could not find a version that satisfies the requirement pip3 (from versions: )
No matching distribution found for pip3
You are using pip version 8.1.1, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
0
1
0
0
2016-11-08T11:10:00.000
12
0.016665
false
40,485,380
1
0
0
3
OS: Ubuntu 16.04 LTS Python: 2.7.12 + Anaconda2-4.2.0 (64 bit) I typed pip install --upgrade $TF_BINARY_URL to install tensorflow, but the terminal showed that my pip version was 8.1.1, while version 9.0.1 is available. Then I typed pip install --upgrade pip to upgrade, but it showed Requirement already up-to-date: pip in ./anaconda2/lib/python2.7/site-packages, and I still can't use pip version 9.0.1 to install tensorflow. Does anyone know what's going on?
can`t upgrade pip to the newest version 9.0.1 (OS:ubuntu 16.04LTS)
49,766,580
3
25
84,686
0
python,ubuntu,pip
If you're only installing things for one user account, it is also possible to use pip install --user --upgrade pip, avoiding the question of whether or not to sudo... just be careful not to mix that account's user-level installs with system-wide installations of pip goodies.
0
1
0
0
2016-11-08T11:10:00.000
12
0.049958
false
40,485,380
1
0
0
3
OS: Ubuntu 16.04 LTS Python: 2.7.12 + Anaconda2-4.2.0 (64 bit) I typed pip install --upgrade $TF_BINARY_URL to install tensorflow, but the terminal showed that my pip version was 8.1.1, while version 9.0.1 is available. Then I typed pip install --upgrade pip to upgrade, but it showed Requirement already up-to-date: pip in ./anaconda2/lib/python2.7/site-packages, and I still can't use pip version 9.0.1 to install tensorflow. Does anyone know what's going on?
can`t upgrade pip to the newest version 9.0.1 (OS:ubuntu 16.04LTS)
41,006,251
44
25
84,686
0
python,ubuntu,pip
sudo -H pip install --upgrade pip sudo is "super user do"; it allows you to execute commands as a superuser. The -H flag tells sudo to set the HOME environment variable to the target user's (root's) home directory. This way, when pip installs things, like pip itself, it uses the appropriate directory.
0
1
0
0
2016-11-08T11:10:00.000
12
1
false
40,485,380
1
0
0
3
OS: Ubuntu 16.04 LTS Python: 2.7.12 + Anaconda2-4.2.0 (64 bit) I typed pip install --upgrade $TF_BINARY_URL to install tensorflow, but the terminal showed that my pip version was 8.1.1, while version 9.0.1 is available. Then I typed pip install --upgrade pip to upgrade, but it showed Requirement already up-to-date: pip in ./anaconda2/lib/python2.7/site-packages, and I still can't use pip version 9.0.1 to install tensorflow. Does anyone know what's going on?
Determining object class in ndb PolyModel Google App Engine
40,512,917
2
1
42
0
python,google-app-engine,data-modeling
Per comments above it looks like what you need is isinstance(user, Employee) / isinstance(user, Manager).
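For example, sketching it with the model names from the question (how the user entity was obtained, and the handle_* functions, are placeholders for your own logic):

    user = user_key.get()              # any entity stored via the PolyModel hierarchy
    if isinstance(user, Manager):
        # Manager-specific handling
        handle_manager(user)
    elif isinstance(user, Employee):
        # Employee-specific handling
        handle_employee(user)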
0
1
0
0
2016-11-08T15:41:00.000
1
1.2
true
40,490,923
0
0
0
1
It looks like the ndb.polymodel.PolyModel class used to have a class_name() method but as far as I can tell it has been deprecated. I have a data structure using polymodel that is in the form of a parent User class with two child classes - Employee and Manager, and I want to do some basic checks throughout to determine if the User object is of the class Employee or class Manager. At the moment, I am just calling the object's .__class__.__name__ attribute directly, but I am wondering why the PolyModel.class_name() method was deprecated. Is there a better way to determine class inheritance?
Change default system file copier in Linux
40,493,312
2
0
205
0
python,linux,copy-paste
Short answer: no, you can't. Long answer: the component that does "copy & paste" is not defined by the distribution alone. It is a function of the desktop system / window manager. In other words, there is no such thing as the "default system file copier" for Linux. There are file managers like Dolphin on KDE, or Nautilus on GNOME, that all come with their own implementation of file copying. Some good, some not so much (try copying a whole directory with thousands of files with Nautilus). But the real question here: why do you want to do that? What makes you think that your file-copy implementation, which requires an interpreter to run, is suited to replace the defaults that come with Linux? Why do you think that your re-invention of an existing wheel will be better at doing anything?! Edit: if your reason to "manage" system copying is some attempt to prevent the user from doing certain things, you should rather look into file permissions and similar ideas. Within a Linux environment, you simply can't control what the user is doing by manipulating some tools. Instead: understand the management capabilities that the OS offers you, and use those!
0
1
0
1
2016-11-08T17:05:00.000
1
0.379949
false
40,492,606
0
0
0
1
I have coded a Python app to manage file copies on Linux. I want to know how I can get it to process copy/paste calls - like those launched by pressing Ctrl+C / Ctrl+V, right click / Copy..., or drag and drop - instead of the system copier being used. Can I do this for all deb-based Linux distributions, or does it work differently for Ubuntu, Mint, Debian, and so on? Forgive my English and thanks in advance!
How to create non persisted tasks in Luigi?
40,868,113
0
2
232
0
python,luigi
I would go ahead and create a unique output for the task, even if the output is not used in your further processing. It would just be a marker that the task with that particular set of inputs has completed successfully. You could do a simple file target (e.g. LocalTarget), a PostgresTarget, etc.
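A sketch of such a marker task, with a made-up local marker directory and service URL (both placeholders); the upstream data-producing task would go into requires() as usual:

    import luigi
    import requests

    class NotifyService(luigi.Task):
        date = luigi.DateParameter()

        # requires() would return the task(s) that produce the data being announced

        def output(self):
            # the marker file's only purpose is to record that the POST already happened
            return luigi.LocalTarget('markers/notified_%s.txt' % self.date)

        def run(self):
            resp = requests.post('http://service.example/refresh',
                                 json={'date': str(self.date)})
            resp.raise_for_status()              # if the service fails, no marker is written,
            with self.output().open('w') as f:   # so the task will be retried on the next run
                f.write('notified')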
0
1
0
0
2016-11-09T12:40:00.000
1
1.2
true
40,507,261
0
0
0
1
As part of a Luigi pipeline we want to notify microservices waiting for the data being computed using a POST request. Until now we were using the RunAnywayTarget but it is a problem if we fire up Luigi faster than the rate of data change. So my question is, what is the best pattern to create a task that does something in the pipeline but that doesn't create any piece of data, like doing a POST request to a REST service, send a message to Kafka, etc...? I know that I could create a task with no output that does the request in the run method, but then how should this NotificationTask be re-run again if for some reason the end service failed during the first run? The dependencies will be there and it won't be run again.
To pause and resume the program execution in the command prompt
40,510,507
1
0
1,809
0
python,python-2.7,logic,command-prompt
If you are on Linux, you can pause the program with Ctrl-Z (and either resume it with fg, or send it to continue its work in the background with bg). Since you mention the command prompt, I assume you are on Windows; there I know of no equivalent. You might try using a new 'cmd' window and minimizing it (and maybe changing its priority from Task Manager).
0
1
0
0
2016-11-09T15:02:00.000
1
1.2
true
40,510,025
1
0
0
1
Is there any combination of keys to pause and resume program execution in the command prompt? I have a big program to run that takes 30 minutes to complete, and it would be helpful if I could pause it in the middle of execution and resume it when needed.
How do I change default Python version from 3.5 to 2.7 in Anaconda
48,596,296
0
3
6,310
0
python,python-2.7,python-3.x,centos,anaconda
If you are looking to change the python interpreter in anaconda from 3.5 to 2.7 for the user, try the command conda install python=2.7
0
1
0
0
2016-11-11T20:22:00.000
2
0
false
40,555,625
1
0
0
1
Without root access, how do I change the default Python from 3.5 to 2.7 for my specific user? Would like to know how to run Python scripts with Python 2 as well. If I start up Python by running simply python then it runs 3.5.2. I have to specifically run python2 at the terminal prompt to get a version of python2 up. If I run which python, then /data/apps/anaconda3/bin/python gets returned and I believe Python 2.7 is under /usr/bin/python. This is on CentOS if that helps clarify anything
Python communicate into VM windows app
41,398,029
1
0
688
0
python,virtual-machine,ipc,virtualization
Host<->VM communication on a Windows host can be implemented in several ways, independently of the hypervisor you are using: Host-only network - just assign a static IP to the host and to the machine, and use the sockets API to transfer your data over the virtual network. Very good for large amounts of data, but requires a little time for configuration. Virtual COM ports - if you don't want to use the sockets API and prefer to write data to files (on the Linux VM) / named pipes (on the Windows host). This can be simpler because it requires almost zero configuration, but it will not work very well with large amounts of data. Choose whatever fits your needs.
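A bare-bones sketch of the host-only network option in Python; the IP address and port are whatever you assign to the host-only adapter (the values below are made up):

    import socket

    # --- inside the VM: listen on its static host-only IP ---
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(('192.168.56.101', 5000))
    srv.listen(1)
    conn, addr = srv.accept()
    print(conn.recv(1024))         # message from the host
    conn.close()

    # --- on the Windows host (a separate script): connect and send ---
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(('192.168.56.101', 5000))
    cli.sendall(b'hello from the host')
    cli.close()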
0
1
0
0
2016-11-14T15:19:00.000
1
1.2
true
40,592,116
1
0
0
1
How can I set up a virtualized Ubuntu on a real Windows machine so I can have two apps communicating simple messages between them? The VM can be offline, with no internet access. The real system is probably offline too.
Access to log files created by snakemake rules
40,606,186
3
0
1,476
0
python,glob,snakemake
With the upcoming release 3.9.0, you can see the corresponding log files for all output files when invoking snakemake --summary.
0
1
0
0
2016-11-15T05:49:00.000
2
0.291313
false
40,602,894
0
0
0
1
Is there a way to programmatically list the log files created per rule from within the Snakefile? Will I have to tap into the DAG, and if yes, how? Background: I'd like to bundle up and remove all created log files (only cluster logs are in a separate folder; some output files have correspondingly named log files). For this I want to be specific and exclude log files that might have been created by the programs being run and that coincidentally match a log glob. Are there alternatives, e.g. would parsing shellcmd_tracking files be easier? Thanks, Andreas
aiflow max active dag runs reached
52,749,395
0
2
4,999
0
python,airflow
This means that dag_concurrency is set to a smaller number of concurrent tasks than you are trying to use. There is a file called airflow.cfg where you can change some execution settings; one of them is dag_concurrency, which is probably set to '6'. Increase it to what you need, just making sure it doesn't exceed the parallelism setting, as that may cause problems.
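The relevant block of airflow.cfg looks roughly like this (the numbers below are illustrative, not recommendations):

    [core]
    # maximum number of task instances allowed to run concurrently within a single DAG
    dag_concurrency = 16
    # total number of task instances that can run at once across the whole installation
    parallelism = 32
    # how many runs of the same DAG may be active at the same time
    max_active_runs_per_dag = 16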
0
1
0
0
2016-11-15T12:26:00.000
2
0
false
40,609,839
0
0
0
1
As soon as Airflow starts, none of the dag runs for a particular dag are executed and the maximum number of active dag runs is reached. Even after setting the dag run state to success, the scheduler doesn't seem to move the newly scheduled task into the execution state. All the active dag runs remain in the 'running' state. This happens only for some dags. Can someone please help with this?
Build Python 2.7.12 on a Mac with Intel compiler
41,964,143
1
1
109
0
macos,python-2.7,build,intel
You could edit the line in getcompiler.c that it is complaining about: e.g. to return "[Intel compiler]"; If you wanted to get fancier you could add in the compiler version, using e.g. the __INTEL_COMPILER macro.
0
1
0
0
2016-11-16T09:54:00.000
1
0.197375
false
40,628,945
1
0
0
1
I've been trying to build Python from source on my mac with the Intel compiler suite (Intel Parallel Studio) and link it against Intel's MKL. The reason for that is that I want to use exactly the same environment on my mac for developing Python code as on our linux cluster. As long as I am not telling the configure script to use Intel's Parallel Studio, Python builds fine (configure and make: ./configure --with(out)-gcc). But as soon as I include --with-icc, or if I set the appropriate environment variables, mentioned in ./configure --help, to the Intel compilers and linkers, make fails with:

icc -c -fno-strict-aliasing -fp-model strict -g -O2 -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -IInclude -I./Include -DPy_BUILD_CORE -o Python/getcompiler.o Python/getcompiler.c
Python/getcompiler.c(27): error: expected a ";"
    return COMPILER;
    ^
compilation aborted for Python/getcompiler.c (code 2)
make: *** [Python/getcompiler.o] Error 2

I've searched everywhere, but nobody seems to be interested in building Python on a mac with Intel compilers, or I am the only one who has problems with it. I've also configured my environment according to Intel's instructions: source /opt/intel/bin/compilervars.sh intel64, in ~/.bash_profile. In any case, my environment is: OS X 10.11.6, Xcode 8.1 / Build version 8B62, Intel Parallel Studio XE 2017.0.036 (C/C++, Fortran). Thanks, François
How can I execute a command with "sudo" in a python/bash/ruby script via SSH?
40,630,667
1
1
210
0
python,linux,bash,ssh
The proper way of doing this is to use a configuration management solution like Ansible. You can use the become directive to run a script/command with sudo privileges from a remote client. If there is only one box, or you need to schedule it, you can use /etc/crontab and run it as the root user at the desired interval.
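A minimal playbook sketch; the host group, command and key path are placeholders:

    ---
    - hosts: remote_servers
      become: yes                      # run the tasks below with sudo on the target
      tasks:
        - name: run the privileged command
          command: /usr/local/bin/some-admin-command

You would then run it with your own key, e.g. ansible-playbook -i inventory play.yml --private-key ~/.ssh/id_rsa.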
0
1
0
1
2016-11-16T11:08:00.000
1
0.197375
false
40,630,602
0
0
0
1
On a remote server I can execute a certain command only via sudo. The authentication is done via a certificate. So when I log in via SSH from a bash script, Python, Ruby or something else, how can I execute a certain command with sudo?