Title | A_Id | Users Score | Q_Score | ViewCount | Database and SQL | Tags | Answer | GUI and Desktop Applications | System Administration and DevOps | Networking and APIs | Other | CreationDate | AnswerCount | Score | is_accepted | Q_Id | Python Basics and Environment | Data Science and Machine Learning | Web Development | Available Count | Question |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Python celery - tuples in arguments converted to lists | 35,532,031 | 1 | 4 | 951 | 0 | python,redis,celery | If you need to preserve the python native data structure I'd recommend using one of the serialization modules such as cPickle, which will preserve the data structure but won't be readable outside of Python. | 0 | 1 | 0 | 0 | 2016-02-19T20:16:00.000 | 2 | 1.2 | true | 35,514,183 | 1 | 0 | 0 | 1 | I noticed this when using the delay() function to asynchronously send tasks. If I queue a task such as task.delay(("tuple",)), celery will store the argument as ["tuple"] and later the function will get the list back and not the tuple. I'm guessing this is because the data is being serialized as JSON.
This is fine for tuples, however I'm using namedtuples which can no longer be referenced properly once converted to a list. I see the obvious solution of switching the namedtuples out with dicts. Is there any other method? I couldn't seem to find anything in the configuration for celery.
I'm using redis as the broker. |
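The conversion described above is inherent to JSON (Celery's default serializer), which has no tuple type; pickle round-trips the native structure, which is what the answer recommends. A minimal demonstration:

```python
import json
import pickle

payload = ("tuple",)

# JSON encodes tuples as arrays, so they come back as lists
decoded = json.loads(json.dumps(payload))
print(decoded, type(decoded))    # ['tuple'] <class 'list'>

# pickle preserves the exact Python structure (namedtuples included),
# at the cost of the payload being unreadable outside Python
restored = pickle.loads(pickle.dumps(payload))
print(restored, type(restored))  # ('tuple',) <class 'tuple'>
```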
Multiple Python Consumer Threads on a Single Partition with Kafka 0.9.0 | 35,533,318 | 1 | 0 | 793 | 0 | apache-kafka,kafka-python | I'm actually not sure for Kafka 0.9, as I haven't yet had the need to go over the new design thoroughly, but AFAIK this wasn't possible in v0.8.
It certainly wasn't possible with the low-level consumer, but I also think that, if you assign more threads than you have partitions in the high-level consumer, only one thread per partition would be active at any time. This is why we say that parallelism in Kafka is determined by the number of partitions (which can be dynamically increased for a topic).
If you think about it, that would require coordination on the message level between the consuming threads, which would be detrimental to performance. Consumer groups in v0.8 were used to make the thread -> partition assignment a responsibility of Kafka, not to coordinate multiple threads over a single partition.
Now, it could be that this changed in 0.9, but I doubt that very much.
[EDIT] Now that I'm reading your question once again, I hope I understood your question correctly. I mean, having multiple consumers (not consumer threads) per partition is a regular thing (each has its own offset), so I assumed you were asking about threads/partitions relationship. | 0 | 1 | 0 | 0 | 2016-02-19T23:18:00.000 | 1 | 1.2 | true | 35,516,849 | 0 | 0 | 0 | 1 | For context, I am trying to transfer our python worker processes over to a kafka (0.9.0) based architecture, but I am confused about the limitations of partitions with respect to the consumer threads. Will having multiple consumers on a partition cause the other threads on the same partition to wait for the current thread to finish? |
Django deployed project not running subprocess shell command | 35,524,148 | 4 | 1 | 309 | 0 | python,django,shell,subprocess,gunicorn | 1) User who runs gunicorn has no permissions to run .sh files
2) Your .sh file is not executable (missing the execute permission)
3) Try using the full path to the file
Also, which error do you get when you try to run it in production? | 0 | 1 | 0 | 0 | 2016-02-20T13:37:00.000 | 1 | 1.2 | true | 35,524,022 | 0 | 0 | 1 | 1 | I have a django 1.9 project deployed using gunicorn with a view that contains the line
subprocess.call(["xvfb-run ./stored/all_crawlers.sh "+outputfile+" " + url], shell=True, cwd= path_to_sh_file)
which runs fine with ./manage.py runserver
but fails on deployment and (deployed with gunicorn and wsgi).
Any Suggestion how to fix it? |
Is there any way for lsof to show the entire argv array instead of just argv[0] | 35,546,294 | 1 | 3 | 150 | 0 | python,unix,lsof | If you know the PID (eg. 12345) of the process, you can determine the entire argv array by reading the special file /proc/12345/cmdline. It contains the argv array separated by NUL (\0) characters. | 0 | 1 | 0 | 0 | 2016-02-22T02:39:00.000 | 1 | 0.197375 | false | 35,544,961 | 0 | 0 | 0 | 1 | I currently have a python script that accomplishes a very useful task in a large network. What I do is use lsof -iTCP -F and a few other options to dump all listening TCP sockets. I am able to get argv[0], this is not a problem. But to get the full argv value, I need to then run a ps, and map the PIDs together, and then merge the full argv value from ps into the record created by lsof. This feels needlessly complex, and for 10000+ hosts, in Python, it is very slow to merge this data.
Is there any way to show the full argv value w/lsof? I have read the manual and I couldn't find anything, so I am not too hopeful there is any way to do this. Sure, I could write a patch for lsof, but then I'd have to deploy it to 10000+ systems, and that's a non-starter at this point.
Also, if anyone has any clever ways to deal with the processing in Python such that it doesn't take 10 minutes to merge the data, I'd love to know. Currently, I load all the lsof and ps data into a dict where the key is (ip,pid) and then I merge them. I then create a new dict using the data in the merged dict where the key is (ip,port). This is really slow because the first two steps require iterating over all the lsof data. This is probably not a question, but I figured I'd throw it in here. My only idea at this point is to count processors and spawn N subprocesses, each with a chunk of the data to parse, then return them all back to the parent. |
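For the slow merge, a single pass with dict lookups and re-keying in the same loop avoids iterating the lsof data twice; the records below are hypothetical stand-ins for the parsed lsof/ps rows:

```python
# hypothetical parsed rows, keyed by (ip, pid)
lsof_rows = {("10.0.0.1", 1234): {"port": 80, "argv0": "nginx"}}
ps_rows = {("10.0.0.1", 1234): {"argv": "nginx -c /etc/nginx.conf"}}

merged = {}
for (ip, pid), rec in lsof_rows.items():
    rec = dict(rec)                         # copy so the input stays untouched
    rec.update(ps_rows.get((ip, pid), {}))  # O(1) lookup, no scanning
    merged[(ip, rec["port"])] = rec         # re-key by (ip, port) in the same pass

print(merged[("10.0.0.1", 80)]["argv"])     # nginx -c /etc/nginx.conf
```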
Celery worker not consuming task and not retrieving results | 47,490,225 | 0 | 0 | 1,693 | 0 | python,celery | Regarding the AttributeError message, adding a backend config setting similar to below should help resolve it:
app = Celery('tasks', broker='pyamqp://guest@localhost//', backend='amqp://') | 0 | 1 | 0 | 0 | 2016-02-22T18:32:00.000 | 2 | 0 | false | 35,561,176 | 0 | 0 | 0 | 1 | I'm using Celery with RabbitMQ as the broker and redis as the result backend. I'm now manually dispatching tasks to the worker. I can get the task IDs as soon as I sent the tasks out. But actually Celery worker did not work on them. I cannot see the resulted files on my disk. And later when I want to use AsyncResult to check the results, of course I got AttributeError: 'DisabledBackend' object has no attribute '_get_task_meta_for'
I checked RabbitMQ and redis, they're both working (redis-cli ping). The log also says Connected to amqp://myuser:**@127.0.0.1:5672/myvhost.
Another interesting thing is that I actually have another remote server consuming the tasks connected to the broker. It also logs "Connected to amqp", but the two nodes cannot see each other: mingle: searching for neighbors, mingle: all alone. The system worked before. I wonder where I should start looking for clues. Thanks. |
Websocket client on linux cuts off response after 8192 bytes | 37,220,484 | 0 | 1 | 91 | 0 | python,linux,sockets | So it turns out the problem came from the provided websocket module from google cloud sdk. It has a bug where after 8192 bytes it will not continue to read from the socket. This can be fixed by supplying the websocket library maintained by Hiroki Ohtani earlier on your PYTHONPATH than the google cloud sdk. | 0 | 1 | 1 | 0 | 2016-02-23T01:14:00.000 | 1 | 1.2 | true | 35,567,020 | 0 | 0 | 0 | 1 | I've created a docker image based on Ubuntu 14.04 which runs a python websocket client to read from a 3rd party service that sends variable length JSON encoded strings down. I find that the service works well until the encoded string is longer than 8192 bytes and then the JSON is malformed, as everything past 8192 bytes has been cut off.
If I use the exact same code on my mac, I see the data come back exactly as expected.
I am 100% confident that this is an issue with my linux configuration but I am not sure how to debug this or move forward. Is this perhaps a buffer issue or something even more insidious? Can you recommend any debugging steps? |
Is Python support for FreeBSD as good as for say CentOS/Ubuntu/other linux flavors? | 35,946,582 | 1 | 5 | 982 | 0 | python,pip,freebsd | The assumption that powerful and high-profile existing python tools use a lot of different python packages almost always holds true. We have used FreeBSD in our company for quite some time together with a lot of python-based tools (web frameworks, py-supervisor, etc.) and we never ran into the issue that a certain tool would not run on FreeBSD or not be available for it.
So to answer your question:
Yes, all/most python packages are available on FreeBSD
One caveat:
The freeBSD ports system is really great and will manage all compatibility and dependency issues for you. If you are using it (you probably should), then you might want to avoid pip. We had a problem in the past where the package manager for ruby did not really play well with the ports database and installed a lot of incompatible gems. This was a temporary issue with rubygems but gave us a real headache. We tend to install everything from ports since then and try to avoid 3rd party package managers like composer, pip, gems, etc. Often the ports invoke the package managers but with some additional arguments so they ensure not to break dependencies. | 0 | 1 | 0 | 1 | 2016-02-23T07:59:00.000 | 2 | 0.197375 | false | 35,571,862 | 0 | 0 | 0 | 1 | The development environment, we use, is FreeBSD. We are evaluating Python for developing some tools/utilities. I am trying to figure out if all/most python packages are available for FreeBSD.
I tried using a CentOS/Ubuntu and it was fairly easy to install python as well as packages (using pip). On FreeBSD, it was not as easy but may be I'm not using the correct steps or am missing something.
We've some tools/utilities on FreeBSD that run locally and I want Python to interact with them - hence, FreeBSD.
Any inputs/pointers would be really appreciated.
Regards
Sharad |
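To illustrate the ports-first workflow the answer recommends — package and port names here are illustrative and vary by FreeBSD version:

```shell
# install a python package from the ports tree, which resolves
# compatibility and dependency issues against the ports database
cd /usr/ports/devel/py-pip && make install clean

# or install the prebuilt binary package instead
pkg install py27-pip
```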
GDB Error Installation error: gdb.execute_unwinders function is missing | 47,475,156 | 12 | 14 | 10,678 | 0 | python-2.7,debugging,gdb | I have the same issue, with gdb 8.0.1 compiled on Ubuntu 14.04 LTS.
Turns out the installation misses the necessary Python files. One indication was that "make install" stopped complaining about makeinfo being missing - although I did not change any of the .texi sources.
My fix was to go into the build area, into gdb/data-directory, and do "make install" once more, which installed the missing python scripts.
Must be some weird tool-bug somewhere. | 0 | 1 | 0 | 0 | 2016-02-23T10:49:00.000 | 2 | 1.2 | true | 35,575,425 | 1 | 0 | 0 | 1 | I have suddenly started seeing this message on nearly every GDB output line whilst debugging:
Python Exception Installation error: gdb.execute_unwinders function is missing
What is this? How do I rectify it? |
cannot import yaml on mac | 35,584,960 | 1 | 1 | 2,463 | 0 | python,import,module,pyyaml | You should be able to run import yaml if you installed pyyaml.
Did you try pip install pyyaml? | 0 | 1 | 0 | 0 | 2016-02-23T17:56:00.000 | 2 | 0.099668 | false | 35,584,780 | 1 | 0 | 0 | 2 | The software I have requires yaml, based on the import yaml at the top. I installed pyyaml on the mac I am using and it still threw the import error. I tried to change the code in the program to import pyyaml but that still didn't help. Any idea what the module is called to import it? If you need more information just ask. |
cannot import yaml on mac | 56,364,791 | -1 | 1 | 2,463 | 0 | python,import,module,pyyaml | For python2:
sudo yum install python-yaml
For python3:
sudo yum install python3-yaml | 0 | 1 | 0 | 0 | 2016-02-23T17:56:00.000 | 2 | 1.2 | true | 35,584,780 | 1 | 0 | 0 | 2 | The software I have requires yaml, based on the import yaml at the top. I installed pyyaml on the mac I am using and it still threw the import error. I tried to change the code in the program to import pyyaml but that still didn't help. Any idea what the module is called to import it? If you need more information just ask. |
Python multiprocessing pool number of jobs not correct | 35,588,384 | 0 | 1 | 1,011 | 0 | python,parallel-processing,multiprocessing,pool | Before starting execution of the work you submit via apply_async/map_async on a Pool, Python assigns each worker a piece of the work.
For example, let's say that you have 8 files to process and you start a Pool with 4 workers.
Before starting the file processing, two specific files will be assigned to each worker. This means that if some worker ends its job earlier than the others, it will simply "have a break" and will not start helping the others. | 0 | 1 | 0 | 0 | 2016-02-23T21:06:00.000 | 2 | 0 | false | 35,588,159 | 1 | 0 | 0 | 1 | I wrote a python program to launch parallel processes (16) using pool, to process some files. At the beginning of the run, the number of processes is maintained at 16 until almost all files get processed. Then, for some reason I don't understand, when there are only a few files left, only one process runs at a time, which makes processing time much longer than necessary. Could you help with this?
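The tail-off described in the question is often this chunking: Pool.map's default chunksize hands each worker a batch up front, so the last batches serialize on a few workers. Passing chunksize=1 (or using imap_unordered) makes workers pull one item at a time; a sketch:

```python
from multiprocessing import Pool

def process(item):
    return item * item  # stand-in for per-file work

def main():
    with Pool(4) as pool:
        # chunksize=1: each worker fetches the next task only when free,
        # so fast workers keep helping until the queue is empty
        return pool.map(process, range(8), chunksize=1)

if __name__ == "__main__":
    results = main()
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```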
GVIM crashes when running python | 35,620,795 | 10 | 7 | 1,440 | 0 | python,vim,crash | Finally solved the problem.
It turned out that Python uses PYTHONPATH variable to resolve the PYTHON folder (used to load python libraries and so on). Here is the default value for Python 2.7:
C:\Python27\Lib;C:\Python27\DLLs;C:\Python27\Lib\lib-tk
The variable can be set using one of the following:
1. Windows registry
Set the default value of HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Python\PythonCore\2.7\PythonPath key
2. Environment variable
Create environment variable PYTHONPATH and set the value (same as you edit global PATH)
3. _vimrc file
This is the most portable way. Edit your _vimrc (i.e. open vim and enter :e $MYVIMRC command) and set the variable:
let $PYTHONPATH = "C:\\Python27\\Lib;C:\\Python27\\DLLs;C:\\Python27\\Lib\\lib-tk" | 0 | 1 | 0 | 1 | 2016-02-24T08:41:00.000 | 2 | 1.2 | true | 35,597,157 | 0 | 0 | 0 | 1 | I cannot use python in GVIM. When I type:
:python print 1, it just closes GVIM without any message. I tried to run it with -V90logfile but I couldn't find any information about the crash.
GVIM is compiled with python (:version shows +python/dyn +python3/dyn).
GVIM version: 7.3.46 (32 bit with OLE).
Python version: 2.7.3
Initially GVIM couldn't find python27.dll so I edited $MYVIMRC and added:
let $Path = "C:\\Program Files (x86)\\Python27;".$Path
Both GVIM and Python have been installed using corporate standards - not manually via installers. Asking here as IT were not able to help me and redirected to external support.
I could reproduce the error on my personal computer, where I copied both GVIM & PYTHON without installing them. Any further suggestions? |
Django virtual environment disaster | 35,637,631 | 2 | 1 | 108 | 0 | python,django,bash | I assume you're using virtualenv. If so, do you know where it put the bin directory? If you do, run source bin/activate. After that, when you try runserver, it should use the correct Python instance.
More complete:
source /path/to/bin/activate
But I typically run source bin/activate from the directory that contains the related bin. | 0 | 1 | 0 | 0 | 2016-02-25T20:20:00.000 | 2 | 0.197375 | false | 35,637,516 | 0 | 0 | 1 | 1 | I am working on a django app on my macbook with Yosemite.
My app was in a virtual environment.
I restarted my terminal and when I cd'd to my app it was no longer in the virtual environment and now doesn't run. And all my virtual environment commands give me -bash: command not found.
I fully recognize this is a very noobie question but I really want to work on my app and I have tried everything I could find on google and stackoverflow.
Please help.
Preferably with the commands I need to type from my command line - thank you! |
Python Code Line Endings | 35,639,400 | 1 | 1 | 885 | 0 | python,line-endings | In general newlines (as typically used in Linux) are more portable than carriage return and then newline (as used in Windows). Note also that if you store your code on GitHub or another Git repository, it will convert it properly without you having to do anything. | 0 | 1 | 0 | 0 | 2016-02-25T22:06:00.000 | 1 | 0.197375 | false | 35,639,333 | 1 | 0 | 0 | 1 | Which line endings should be used for platform independent code (not file access)?
My next project must run on both Windows and Linux.
I wrote code on Linux and used Hg to clone to Windows and it ran fine with Linux endings. The downside is that if you open the file in something other than a smart editor the line endings are not correct. |
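In Python this is largely handled for you: write '\n' everywhere in source code, let text mode translate on output, and universal newlines translate either form back to '\n' on input. A quick check, using an arbitrary scratch file:

```python
import os

path = "newline_demo.txt"  # arbitrary scratch file
with open(path, "w") as f:
    f.write("line1\nline2\n")   # always write \n in source code

with open(path, "rb") as f:
    raw = f.read()               # b"line1\r\nline2\r\n" on Windows,
                                 # b"line1\nline2\n" on Linux/macOS

with open(path, "r") as f:       # universal newlines on read:
    text = f.read()              # both forms come back as \n

print(text == "line1\nline2\n")  # True on every platform
os.remove(path)
```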
Using homebrew to install pyinstaller | 36,139,384 | 9 | 1 | 3,559 | 0 | python,python-2.7,homebrew,pyinstaller | The pyinstaller docs are poorly worded and you may be misunderstanding their meaning.
PyInstaller works with the default Python 2.7 provided with current
Mac OS X installations. However, if you plan to use a later version of
Python, or if you use any of the major packages such as PyQt, Numpy,
Matplotlib, Scipy, and the like, we strongly recommend that you
install THESE using either MacPorts or Homebrew.
It means to say "install later versions of Python as well as python packages with Homebrew", and not to say "install pyinstaller itself with homebrew". In that respect you are correct, there is no formula for pyinstaller on homebrew.
You can install pyinstaller with pip though: pip install pyinstaller or pip3 install pyinstaller. Then confirm the install with pyinstaller --version. | 0 | 1 | 0 | 0 | 2016-02-26T17:48:00.000 | 1 | 1.2 | true | 35,658,436 | 1 | 0 | 0 | 1 | I am using python 2.7.0 and pygame 1.9.1, on OS X 10.10.5. The user guide for PyInstaller dictates that Mac users should use Homebrew, and I have it installed. I used it to install both Python and Pygame. But 'brew install PyInstaller' produces no formulae at all when typed into Terminal! So how can I use homebrew to install PyInstaller? This seems like it should be simple, and I'm sorry to bother you, but I have searched high and low with no result. |
Python script with -m completes but errors out at very end | 35,659,549 | 4 | 1 | 25 | 0 | python,virtualenv | cron.nightly.py is not what you want. Module names passed to -m do not include the .py suffix. Just as you wouldn't import math.py, you don't run python -m something.py. Change it to python -m cron.nightly
python -m cron.nightly.py
Everything runs fine, but after the last line completes, I get an error:
/Users/user/.virtualenvs/vrn/bin/python: No module named cron.nightly.py
Which is fine, except that because the script doesn't exit with 0 (I think), Jenkins marks the job as failed every time it runs, so I can't tell whether the code actually failed without looking at each individual console output, which is not ideal to say the least.
If someone could help me explain why I get this error (there's no other traceback) and how to fix it I would really appreciate it. |
Uninstall different versions of python | 35,667,217 | -2 | 4 | 2,491 | 0 | python,python-2.7,python-3.x,anaconda | You can try going to:
/Library/Python
and manually delete the versions you do not want. This isn't recommended though. | 0 | 1 | 0 | 0 | 2016-02-27T07:50:00.000 | 2 | -0.197375 | false | 35,667,182 | 1 | 0 | 0 | 1 | I made a mistake and installed many different versions of python on my linux machine. I installed all the versions of python with the help of anaconda. My default python version shows 2.7.11.
Now I want to remove all the versions of python and its dependencies from my linux system. What should I do? |
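Since everything was installed through Anaconda, conda can list and remove the extra versions itself; the environment name and install path below are examples:

```shell
# see every environment (extra Python versions usually live in one each)
conda env list

# delete an unwanted environment and all packages inside it
conda remove --name py35 --all

# removing Anaconda entirely is just deleting its install directory
rm -rf ~/anaconda
```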
Cannot upgrade pip in cygwin running on Windows 10: /usr/bin/python: No module named pip | 39,111,999 | 0 | 1 | 136 | 0 | python,cygwin,pip | Likely you don't need the python -m part. If pip is in your path, then just typing pip install --upgrade pip should work. Where is pip installed? which pip will tell you where it's located, if it's in your path
$ python -m pip install --upgrade pip
/usr/bin/python: No module named pip |
.sh started by cron does not create file via python | 35,688,902 | 0 | 1 | 43 | 0 | python,shell,cron | Difficult to answer without more colour on your environment.
Here's how to solve this though: do not redirect your output to /dev/null. Then read in your cron log what happened. It seems very likely that your script fails, and therefore does not return anything to standard out, so does not create a file.
I highly suspect it is because you are using a python module, python version, or python path that is loaded in your bashrc. Cron does not execute your bashrc; it is an independent environment, so you cannot assume that a script that runs correctly when you launch it manually will also work under cron.
Try sourcing your bashrc in your cron task, and it's very likely to solve your problem. | 0 | 1 | 0 | 1 | 2016-02-28T21:37:00.000 | 1 | 0 | false | 35,688,599 | 0 | 0 | 0 | 1 | I have this .sh which starts a python file. This python file generates a .txt when started via the commandline with sudo but doesn't when started via the .sh
Why doesn't the python file give me a .txt when started via cron and the .sh?
When I use su -c "python /var/www/html/readdht.py > /var/www/html/dhtdata.txt" 2>&1 >/dev/null, .sh gives me output, but omits the newlines, so I get one big string.
The python file creates a .txt correctly when started from the commandline with sudo python readdht.py.
If the .sh the python file is started with su -c "python /var/www/html/readdht.py no .txt is created
What's going on? |
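Building on the answer above: cron runs with a minimal environment and never reads your bashrc, so a robust crontab entry spells out PATH, uses absolute paths, and logs errors somewhere visible. A hypothetical entry:

```shell
# crontab -e
PATH=/usr/local/bin:/usr/bin:/bin

# every 5 minutes; log stderr instead of discarding it in /dev/null
*/5 * * * * cd /var/www/html && /usr/bin/python readdht.py > /var/www/html/dhtdata.txt 2>> /var/log/readdht.err
```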
How to check if it is a file or folder for an archive in python? | 35,690,298 | 0 | 7 | 20,515 | 0 | python,zip,archive | I got the answer. It is that we can use two commands: archive.getall_members() and archive.getfile_members().
We iterate over each of them and store the member names in two arrays, a1 (all member names) and a2 (file names only). If both arrays contain an element, it is a file; otherwise it is a folder. | 0 | 1 | 0 | 0 | 2016-02-29T00:19:00.000 | 4 | 0 | false | 35,690,072 | 1 | 0 | 0 | 0 | I have an archive which I do not want to extract but check for each of its contents whether it is a file or a directory.
os.path.isdir and os.path.isfile do not work because I am working on an archive. The archive can be any one of tar, bz2, zip or tar.gz (so I cannot use their format-specific libraries). Plus, the code should work on any platform like linux or windows. Can anybody help me with how to do it? |
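For zip archives specifically, directory members are conventionally stored with a trailing slash, which gives a simple check without extracting anything (tarfile members carry isdir()/isfile() methods directly). A sketch:

```python
import io
import zipfile

# build a small archive in memory with one folder entry and one file
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("folder/", "")            # explicit directory entry
    zf.writestr("folder/data.txt", "hi")  # regular file

with zipfile.ZipFile(buf) as zf:
    entries = [(i.filename, "dir" if i.filename.endswith("/") else "file")
               for i in zf.infolist()]

print(entries)  # [('folder/', 'dir'), ('folder/data.txt', 'file')]
```

Note that not every archiver writes explicit directory entries, so directories sometimes have to be inferred from the path prefixes of file names instead.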
Celery task history | 38,764,411 | 5 | 5 | 4,109 | 0 | python,celery,flower | You can use the persistent option, e.g.: flower -A ctq.celery --persistent=True | 0 | 1 | 0 | 0 | 2016-03-01T15:32:00.000 | 2 | 0.462117 | false | 35,726,948 | 0 | 0 | 1 | 1 | I am building a framework for executing tasks on top of the Celery framework.
I would like to see the list of recently executed tasks (for the past 2-7 days).
Looking at the API I can find the app.backend object, but cannot figure out how to make a query to fetch tasks.
For example I can use a backend like Redis or a database. I do not want to explicitly write SQL queries against the database.
Is there a way to work with task history/results via the API?
I tried to use Flower, but it can only handle events and cannot get history before its start. |
Trying to install PyAudio on OS X (10.11.3) | 35,867,245 | 2 | 2 | 345 | 0 | python,macos,pip,homebrew,pyaudio | You can try export MACOSX_DEPLOYMENT_TARGET='desired value' in Terminal just before you run the installation process. | 0 | 1 | 0 | 0 | 2016-03-02T14:39:00.000 | 2 | 1.2 | true | 35,750,276 | 0 | 0 | 0 | 0 | I used brew to install portaudio.
I then tried pip install pyaudio.
I get:
error: $MACOSX_DEPLOYMENT_TARGET mismatch: now "10.9" but "10.11" during configure
How can I set the MACOSX_DEPLOYMENT_TARGET so that I don't get this error? |
Conversion from UNIX time to timestamp starting in January 1, 2000 | 35,763,677 | 3 | 9 | 13,703 | 0 | python,datetime,unix,timestamp,epoch | Well, there are 946684800 seconds between 2000-01-01T00:00:00Z and 1970-01-01T00:00:00Z. So, you can just set a constant for 946684800 and add or subtract from your Unix timestamps.
The variation you are seeing in your numbers has to do with the delay in sending and receiving the data, and could also be due to clock synchronization, or lack thereof. Since these are whole seconds, and your numbers are 3 to 4 seconds off, then I would guess that the clocks between your computer and your device are also 3 to 4 seconds out of sync. | 0 | 1 | 0 | 1 | 2016-03-03T04:42:00.000 | 5 | 0.119427 | false | 35,763,357 | 0 | 0 | 0 | 1 | I am trying to interact with an API that uses a timestamp that starts at a different time than UNIX epoch. It appears to start counting on 2000-01-01, but I'm not sure exactly how to do the conversion or what the name of this datetime format is.
When I send a message at 1456979510 I get a response back saying it was received at 510294713.
The difference between the two is 946684796 (sometimes 946684797) seconds, which is approximately 30 years.
Can anyone let me know the proper way to convert between the two? Or whether I can generate them outright in Python?
Thanks
Edit
An additional detail I should have mentioned is that this is an API to a Zigbee device. I found the following datatype entry in their documentation:
1.3.2.7 Absolute time
This is an unsigned 32-bit integer representation for absolute time. Absolute time is measured in seconds
from midnight, 1st January 2000.
I'm still not sure the easiest way to convert between the two |
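The offset the answer derives can be computed rather than hard-coded; a sketch using the timestamps from the question:

```python
from datetime import datetime

# seconds between the Unix epoch (1970-01-01) and 2000-01-01, both UTC
OFFSET = int((datetime(2000, 1, 1) - datetime(1970, 1, 1)).total_seconds())
print(OFFSET)  # 946684800

def unix_to_y2k(ts):
    return ts - OFFSET

def y2k_to_unix(ts):
    return ts + OFFSET

sent = 1456979510          # Unix timestamp from the question
print(unix_to_y2k(sent))   # 510294710
# the device reported 510294713: the ~3-second difference is transit
# delay plus clock skew, not a conversion error
```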
How can I remotely update a python script | 35,783,966 | 0 | 0 | 214 | 0 | python | A very good solution would be to build a web app. You can use django, bottle or flask for example.
Your users just connect to your url with a browser. You are in complete control of the code, and can update whenever you want without any action on their part.
They also do not need to install anything in the first place, and browsers nowadays provide a lot of flexibility and dynamic content. | 0 | 1 | 0 | 0 | 2016-03-03T21:59:00.000 | 1 | 0 | false | 35,783,883 | 1 | 0 | 0 | 1 | How can I update a python script remotely? I have a program which I would like to share, however it will be frequently updated, therefore I want to be able to remotely update it so that the users do not have to re-install it every day. I have already searched StackOverflow for an answer but I did not find anything I could understand. Any help will be mentioned in the project's credits!
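A dependency-free sketch of the web-app idea using only the standard library; real projects would reach for Django, Bottle, or Flask as the answer suggests, and the handler body here is a stand-in for the actual tool:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class ToolHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # stand-in for the tool's real output; changing this server-side
        # code updates the "program" for every user at once
        body = b"result from the latest version"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), ToolHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_address[1]
response = urlopen(url).read()
print(response)  # b'result from the latest version'
server.shutdown()
```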
Managing OpenCV python Cmake Studio Windows | 35,806,266 | 1 | 0 | 102 | 0 | python,visual-studio,opencv,cmake,cmake-gui | You should look for Python-related variables in the CMake GUI. There may be some variables you could set to force paths to the python2.7 interpreter, libs and include dirs. | 1 | 1 | 0 | 0 | 2016-03-04T20:58:00.000 | 1 | 0.197375 | false | 35,805,904 | 0 | 0 | 0 | 0 | Recently, I wanted to install OpenCV (on Windows 10, 64-bit) using CMake 3.5.0-rc3 and Visual Studio 2015. I have python 3.5 as root and 2.7 as python2. The issue is that while configuring, CMake recognizes python 3.5 as the main interpreter, but I want it to be 2.7. Is there a way to make cmake recognize 2.7 as my main python while keeping python 3.5 on my PC? I can probably do it by deleting python 3.5 but I don't want that. Help is very much appreciated. Thank you,
P.S. If there is a simpler way to install OpenCV along with the extra modules on Windows, please do tell me. Thanks in advance.
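One way to force the interpreter choice, with hypothetical install paths, is to set the Python cache variables explicitly when configuring (in cmake-gui these appear under the PYTHON group; note that OpenCV 3 prefixes them PYTHON2_/PYTHON3_ instead):

```shell
cmake -G "Visual Studio 14 2015 Win64" ^
      -D PYTHON_EXECUTABLE="C:/Python27/python.exe" ^
      -D PYTHON_INCLUDE_DIR="C:/Python27/include" ^
      -D PYTHON_LIBRARY="C:/Python27/libs/python27.lib" ^
      ..
```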
Python hide already printed text | 35,813,975 | 0 | 1 | 6,322 | 0 | python,python-3.x,new-window,tui | You may not like this, since it's a bit higher-level than a basic two-player board game, but you could always use some sort of GUI.
I personally like tkinter.
You don't want the option of people scrolling up to see printed text, but you can't remove what has been printed, that's like asking a printer to remove ink off a page. It's going to stay there.
Research a GUI interface, and try to make the game in that. Otherwise, you could let me take a stab at creating an explanatory piece of code that shows you how to use tkinter. If you do, link me the game you have so I can understand what you want.
opening a new terminal window (regardless which OS the program is run on) for both players so that the board is saved within a variable but the other player cannot scroll up to see where they placed their pieces.
clearing the current terminal completely so that neither player could scroll and see the other player's board. I am aware of the unix 'clear' command but it doesn't achieve the effect I'm after and doesn't work with all OS's (though this might be something that I'll have to sacrifice to get a working solution)
I have tried clearing the screen but haven't been able to completely remove all the text. I don't have a preference; whichever method is easier. Also, if it would be easier to use a different method that I haven't thought of, all other suggestions are welcome. Thanks in advance!
EDIT: Other solutions give the appearance that text has been cleared but a user could still scroll up and see the text that was cleared. I'd like a way to remove any way that a user could see this text.
EDIT 2: Please read the other answers and the comments as they provide a lot of information about the topic as a whole. In particular, thanks to @zondo. |
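One approach for the clearing route is ANSI escape sequences: `\033[2J` clears the visible screen, `\033[3J` additionally clears the scrollback buffer in terminals that support it, and `\033[H` homes the cursor. A sketch:

```python
import sys

CLEAR = "\033[2J"             # erase the visible screen
CLEAR_SCROLLBACK = "\033[3J"  # erase scrollback (an xterm extension)
HOME = "\033[H"               # move the cursor to the top-left corner

def hide_board():
    sys.stdout.write(CLEAR + CLEAR_SCROLLBACK + HOME)
    sys.stdout.flush()
```

Support for `[3J` varies by terminal (xterm and recent Windows consoles honor it, others do not), which is why the GUI route in the other answer is the only fully reliable way to keep a board hidden.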
Run python program on startup in background on Intel Galileo | 35,972,324 | 1 | 2 | 259 | 0 | python,linux,startup,intel-galileo | I made myprogram.py run in the background with python myprogram.py & and it worked. The & is used to run whatever process you want in the background. | 0 | 1 | 0 | 1 | 2016-03-05T18:26:00.000 | 1 | 1.2 | true | 35,818,003 | 0 | 0 | 0 | 1 | I have a python program that runs an infinite loop and sends some data to my database.
I want this python script to run when I power on my Intel Galileo. I tried to make a .sh script that runs python myprogram.py and made it run on startup in /etc/init.d. When I restarted my Galileo, nothing happened - Linux didn't load, the Arduino sketch didn't load, and even my computer didn't recognize it.
I guess this happened because the python program ran an infinite loop.
Is there a way that I can run my system without problems and run my python script on startup? |
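For the init.d route, the usual pattern is a wrapper that backgrounds the long-running script so boot can continue; the paths below are hypothetical:

```shell
#!/bin/sh
# /etc/init.d/myprogram.sh -- must be executable (chmod +x)
# Backgrounding with & and detaching stdio lets boot continue even
# though myprogram.py itself never exits.
python /home/root/myprogram.py > /dev/null 2>&1 &
```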
Choosing between Python 2(.7.x) and Python 3(.5.x) | 35,860,000 | 1 | 0 | 102 | 0 | python,linux,scripting | use python 3. the number of packages that don't support python 3 is shrinking every day, and the vast majority of large/important frameworks out there already support both. there are even some projects which have dropped python 2 entirely, albeit those tend not to be large (since enterprise inertia tends to hold projects back).
starting a new project today on python 2, especially as a beginner, is just opening yourself to more pain imo than running into a package that doesn't support python 3.
considering the versatility of python and the size of the vibrant python community, there are often multiple packages that solve the same problem. that means even if you find one that doesn't support python 3, it's often possible to find a similar project that does support python 3.
once you get confident enough w/python 3, and you do run into a package that only supports python 2, you always have the source and can start contributing patches back! :D | 0 | 1 | 0 | 0 | 2016-03-08T05:07:00.000 | 3 | 0.066568 | false | 35,859,441 | 1 | 0 | 0 | 1 | I am a Java developer with more than 10 years of experience.
I started using python few months back when I had a requirement to create a script which pulls data from a REST service and then generates a report using this data. The fact that python is a multi purpose language (scripting, web applications, REST services etc) coupled with very fast development speed has ignited a deep interested of mine in this language. In fact this is the only language I use when I am in Linux world.
Currently I am trying to port my (powershell/shell) automation scripts, developed for fully automating the release process of Piston (an open source Java based micro portal technology), to python. However, a major challenge in front of me is which version (2 or 3) of python I should use. Ideally I would prefer 3, as I believe it has many improvements over version 2, and I would like to use this version for all new development. However, my concern is that there could be some packages which may not have a version for python 3 yet. This is what has been mentioned on the python.org site too -
However, there are some key issues that may require you to use Python 2 rather than Python 3.
Firstly, if you're deploying to an environment you don't control, that may impose a specific version, rather than allowing you a free selection from the available versions.
Secondly, if you want to use a specific third party package or utility that doesn't yet have a released version that is compatible with Python 3, and porting that package is a non-trivial task, you may choose to use Python 2 in order to retain access to that package.
One popular module that doesn't yet support Python 3 is Twisted (for networking and other applications). Most actively maintained libraries have people working on 3.x support. For some libraries, it's more of a priority than others: Twisted, for example, is mostly focused on production servers, where supporting older versions of Python is important, let alone supporting a new version that includes major changes to the language. (Twisted is a prime example of a major package where porting to 3.x is far from trivial.)
So I don't want to be in a situation where there is a package which I think can be very useful for my automation scripts but does not have a version for python 3. |
how to manually install the locally compiled python library (shared python library) to system? | 35,872,702 | 0 | 0 | 170 | 0 | python,linux,ubuntu,installation,environment-variables | Try installing the 2.7 version: apt-get install python2.7-dev | 0 | 1 | 0 | 0 | 2016-03-08T16:24:00.000 | 1 | 0 | false | 35,872,623 | 1 | 0 | 0 | 1 | I have the newest version of python (2.7.11) installed in my home directory. To compile the YouCompleteMe plugin, I need the python-dev to be installed. However, the global python of my environment is 2.7.11, which means that if I install python-dev via apt-get, it would be incompatible with python 2.7.11, because it is used for python 2.6.
I re-compiled python 2.7.11 with the --enable-shared flag, but I could not figure out how to add its lib and header files to the system's default search path (if such a path environment variable exists).
So, my question is: how do I manually install the locally compiled python library to the system?
How to make a Python script run a powershell script after it executes | 35,894,316 | 3 | 0 | 163 | 0 | python | Instead of calling the Powershell script from inside the Python script, you should run both the scripts using the task scheduler itself.
Assuming that the command you gave to the scheduler was something like python script.py, you should change it to cmd_script.cmd where the contents of the cmd_script.cmd would be python script.py & powershell.exe script.ps1 | 0 | 1 | 0 | 0 | 2016-03-09T14:19:00.000 | 1 | 1.2 | true | 35,894,125 | 0 | 0 | 0 | 1 | I have two scripts one is Python based and other is powershell based.
My requirement is that I need to first run the Python script and then the powershell script on startup.
Using Task Scheduler I can run the Python script, but I need to find a way to run the powershell script after the python script finishes.
Some research online shows that I can add something like:
os.system ("powershell.exe script.ps1") in my Python script
but that is throwing an error: (unicode error) 'unicodeescape' codec can't decode bytes in position.....
Any suggestions? |
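The (unicode error) 'unicodeescape' failure quoted above typically comes from Windows backslashes in a normal string literal (e.g. \U beginning a \Uxxxxxxxx escape). A hedged sketch of the usual fixes, raw strings or doubled backslashes, plus subprocess instead of os.system; the script path here is purely hypothetical:

```python
import subprocess

# "C:\Users\..." in a plain literal fails because \U starts a unicode
# escape; a raw string (r"...") or doubled backslashes are both safe.
raw_style = r"C:\Users\me\script.ps1"
doubled = "C:\\Users\\me\\script.ps1"
assert raw_style == doubled

# subprocess sidesteps the shell-quoting pitfalls of os.system:
# subprocess.call(["powershell.exe", "-ExecutionPolicy", "Bypass",
#                  "-File", raw_style])   # uncomment on a Windows machine
```

Forward slashes ("C:/Users/me/script.ps1") also work for most Windows APIs and avoid the escaping question entirely.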
Windows Task Scheduler running Python script: how to prevent taskeng.exe pop-up? | 35,901,175 | 5 | 3 | 3,605 | 0 | python,windows,scheduled-tasks | Simply save your script with .pyw extension.
As far as I know, the .pyw extension is the same as .py; the only difference is that .pyw was implemented for GUI programs, so no console window is opened.
If there is more to it than this I wouldn't know, perhaps somebody more informed can edit this post or provide their own answer. | 1 | 1 | 0 | 0 | 2016-03-09T19:15:00.000 | 2 | 1.2 | true | 35,900,622 | 0 | 0 | 0 | 1 | Windows 7 Task Scheduler is running my Python script every 15 minutes. The command line is something like c:\Python\python.exe c:\mypath\myscript.py. It all works well, the script is called every 15 minutes, etc.
However, the task scheduler pops up a huge console window titled taskeng.exe every time, blocking the view for a few seconds until the script exits.
Is there a way to prevent the pop-up? |
Run python3 in atom with atom-runner | 35,900,933 | 0 | 0 | 4,265 | 0 | python,windows,python-3.x,atom-editor | Right click the start menu, and select System. Then, hit "Advanced system settings" > "Environment Variables". Click on path, and hit edit. Select "New" and add the folder that your python executable is in. That should fix the problem.
Your other option is to reinstall python and select "add PYTHON to PATH" as Carpetsmoker suggested. | 0 | 1 | 0 | 1 | 2016-03-09T19:16:00.000 | 2 | 1.2 | true | 35,900,628 | 0 | 0 | 0 | 2 | I am trying to run simple python code in Atom using the atom-runner package, but I am getting the following error:
Unable to find command: python
Are you sure PATH is configured correctly?
How can I configure PATH? (The path to my python is C:\Python34.)
Run python3 in atom with atom-runner | 37,861,176 | 0 | 0 | 4,265 | 0 | python,windows,python-3.x,atom-editor | If this does not work, guys, uninstall Python and Atom. While reinstalling Python, make sure you click on "Add Python to Path" so you will not have any problems with setting the paths at all! | 0 | 1 | 0 | 1 | 2016-03-09T19:16:00.000 | 2 | 0 | false | 35,900,628 | 0 | 0 | 0 | 2 | I am trying to run simple python code in Atom using the atom-runner package, but I am getting the following error:
Unable to find command: python
Are you sure PATH is configured correctly?
How can I configure PATH? (The path to my python is C:\Python34.)
cx_Freeze not working - no module named cx_Freeze | 51,223,723 | 0 | 1 | 4,771 | 0 | python,python-3.x,cx-freeze | Copy the following files into the directory of the file you want to compile:
re.py
sre_compile.py
sre_constants.py
sre_parse.py
from "...\Lib"
and build: python <nameFileToBuild>.py build | 0 | 1 | 0 | 0 | 2016-03-10T00:42:00.000 | 2 | 0 | false | 35,905,521 | 1 | 0 | 0 | 1 | I've been trying to compile a game I'm writing with python into an exe with cx_Freeze so my friends can play it without the python interpreter. However, when I run the "build" command through cmd, I get an error saying "ImportError: No module named 'cx_Freeze'". I've done this every way in and out, changing the capital letters in "cx_Freeze". I'm trying to use 3.4.3/3.5.1, and I'm using cx_Freeze version 4.3.4.
Thanks in advance...
in answer to Loïc's comment: yes, it is installed. |
how to identify jobs running background on Windows 7-10? | 35,906,151 | 0 | 1 | 293 | 0 | python,windows,powershell,windows-7 | Not actually a programming question, but:
In Task Manager's Process page, choose View > Select Columns and add the Command Line column. Then you can see the actual command line for each process and you should be able to track down the ones you're interested in.
This is for Windows 7; I know they made some changes to the Task Manager for Windows 10 but don't have access to a Windows 10 machine at the moment. | 0 | 1 | 0 | 0 | 2016-03-10T01:43:00.000 | 1 | 1.2 | true | 35,906,090 | 1 | 0 | 0 | 1 | I want to run a Python process in background, and I use the following command in PowerShell.
powershell > PowerShell.exe -windowstyle hidden python my_process.py
But how can I know whether it is running in the background? The task manager does not show a process named python my_process.py running in the background, and I don't know the process id in Task Manager; it just shows some python and powershell processes running in the background. I cannot identify which process is my Python process.
Uninstall Python 2.7 from Mac OS X El Capitan | 35,922,700 | 1 | 6 | 24,535 | 0 | python,macos,python-2.7 | Set an alias to use the python version that you want, inside your .bashrc (or your zsh config if you use zsh).
Like:
alias python='/usr/bin/python3.4' | 0 | 1 | 0 | 0 | 2016-03-10T16:53:00.000 | 3 | 0.066568 | false | 35,922,553 | 1 | 0 | 0 | 1 | I want to completely reinstall Python 2 but none of the guides I have found allow me to uninstall it. No matter what I do, python --version still returns 2.7.10, even after I run the Python 2.7.11 installer. All the other guides on StackOverflow tell me to remove a bunch of files, but python is still there. |
Pass parameter through shell to python | 35,935,344 | 0 | 0 | 373 | 0 | android,python,shell,qpython | I don't have experience in Android programming, so I can only give a general recommendation:
Of course the naive solution would be to explicitly pass the arguments from script to script, but I guess you can't or don't want to modify the scripts in between, otherwise you would not have asked.
Another approach, which I sometimes use, is to define an environment variable in the outermost scripts, stuff all my parameters into it, and parse it from Python.
Finally, you could write a "configuration file" from the outermost script, and read it from your Python program. If you create this file in Python syntax, you even spare yourself from parsing the code. | 1 | 1 | 0 | 1 | 2016-03-10T21:58:00.000 | 2 | 1.2 | true | 35,928,155 | 0 | 0 | 0 | 2 | I run python in my Android Terminal and want to run a .py file with:
python /sdcard/myScript.py
The problem is that python is called in my Android environment indirectly, via a shell script in my /system/bin/ path (to make it directly accessible via the Terminal emulator).
My exact question, as the title says: how do I pass a parameter through multiple shell scripts to Python?
My directly called file "python" in /System/bin/ contains only a redirection, like:
sh data/data/com.hipipal.qpyplus/files/bin/qpython-android5.sh
and so on, to call the python binary.
Edit:
I simply add the $1 parameter after every shell script that Python is called through, like:
sh data/data/com.hipipal.qpyplus/files/bin/qpython-android5.sh $1
so it is possible to call
python /sdcard/myScript.py arg1
and in myScript.py fetch it as usual with sys.argv
thanks |
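On the Python side of the chain described in the edit above, the forwarded parameter simply arrives in sys.argv; a minimal sketch:

```python
import sys

def main(argv):
    # argv[0] is the script path; everything after it is whatever the
    # wrapper shell scripts forwarded along via "$1" (or better, "$@").
    return list(argv[1:])

# Simulating: python /sdcard/myScript.py arg1 arg2
assert main(["/sdcard/myScript.py", "arg1", "arg2"]) == ["arg1", "arg2"]
```

Using "$@" instead of "$1" in each wrapper script would forward all arguments, not just the first one.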
Pass parameter through shell to python | 36,178,959 | 0 | 0 | 373 | 0 | android,python,shell,qpython | I have a similar problem. Running my script from the Python console
/storage/emulator/0/Download/.last_tmp.py -s && exit
I am getting "Permission denied", no matter if I am calling last_tmp or the edited script itself.
Is there perhaps any way to pass the params in the editor?
python /sdcard/myScript.py
The problem is that python is called in my Android environment indirectly, via a shell script in my /system/bin/ path (to make it directly accessible via the Terminal emulator).
My exact question, as the title says: how do I pass a parameter through multiple shell scripts to Python?
My directly called file "python" in /System/bin/ contains only a redirection, like:
sh data/data/com.hipipal.qpyplus/files/bin/qpython-android5.sh
and so on, to call the python binary.
Edit:
I simply add the $1 parameter after every shell script that Python is called through, like:
sh data/data/com.hipipal.qpyplus/files/bin/qpython-android5.sh $1
so it is possible to call
python /sdcard/myScript.py arg1
and in myScript.py fetch it as usual with sys.argv
thanks |
python - print to stdout and redirect output to file | 35,941,617 | 0 | 2 | 838 | 0 | python,redirect,stdout | You want to use 'tee'. stdbuf -oL python mycode.py | tee out.txt | 0 | 1 | 0 | 0 | 2016-03-11T13:28:00.000 | 2 | 0 | false | 35,941,506 | 0 | 0 | 0 | 1 | I can run my python scripts on the terminal and get the print results on the stdout e.g.
python myprog.py
or simply redirect it to a file:
python myprog.py > out.txt
My question is how I could do both at the same time.
My linux experience will tell me something like:
python myprog.py |& tee out.txt
This does not have the behaviour I expected: the output appears all at once when the program ends, instead of on the fly.
So what I want (preferably without changing the python code) is the same behavior as python myprog.py (print on the fly), but with the output also redirected to a file.
What is the simplest way to accomplish this? |
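If changing the python code turns out to be acceptable after all, the tee can also live inside the program; a sketch that flushes the file on every write so both destinations update on the fly:

```python
import sys

class Tee:
    """Mirror writes to the real stdout and a log file, flushing the
    file immediately so it fills up on the fly, not at program exit."""
    def __init__(self, path):
        self.file = open(path, "w")
        self.stdout = sys.stdout

    def write(self, data):
        self.stdout.write(data)
        self.file.write(data)
        self.file.flush()

    def flush(self):
        self.stdout.flush()
        self.file.flush()

sys.stdout = Tee("out.txt")
print("progress: step 1")
sys.stdout = sys.stdout.stdout   # restore the original stream
```

Without changing the code, the stdbuf -oL prefix from the answer above is what fixes the buffering, since python block-buffers stdout when it is a pipe rather than a terminal.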
How to install 'adium-theme-ubuntu' (virtualenv) | 39,074,990 | 2 | 4 | 2,119 | 0 | python,ubuntu | I might have a similar issue here.
Got the same error while trying to install a requirement.txt into a virtualenv, something like "No matching distribution found for adium-theme-ubuntu==0.3.4".
Solved it by including --system-site-packages when creating the virtualenv.
Hope it helps | 0 | 1 | 0 | 0 | 2016-03-11T14:12:00.000 | 1 | 0.379949 | false | 35,942,424 | 1 | 0 | 0 | 0 | I'm working on an Appium Python test script for AWS Device Farm. I get an error while building the script:
Could not find any downloads that satisfy the requirement package-name (like PAM, Twisted-Core etc)
I've already solved almost all of them, but I still have a problem with adium-theme-ubuntu.
This package is already installed on my system and in the virtualenv, but I still get the same error for this package.
How should I solve this issue?
Thank you in advance |
Wingware Python IDE: How do I change from Python 2.7.10 to the latest version? | 35,998,142 | 0 | 0 | 882 | 0 | python,ide,version | Install latest Python. Go to your Project menu and Project Properties. Change the Python Executable to use Python 3.5 or whatever. Press OK. You might need to restart Wing's Python Shell, but other than that, you should be set.
If you want all projects to default to the latest, you will have to set up your OS to default to the latest Python. Depending on the OS, you may have to fiddle around in some settings dialogs or just uninstall the old version. However, be careful when uninstalling Python on Linux as if you happen to uninstall the system Python, your OS may become non-functional. | 0 | 1 | 0 | 0 | 2016-03-12T10:45:00.000 | 1 | 0 | false | 35,956,650 | 1 | 0 | 0 | 1 | How do I change from Python 2.7.10 to the latest version in Wingware Python IDE? |
Unable to install python-recsys module | 36,860,451 | 2 | 1 | 490 | 0 | python-3.x | Recsys is not supported by python 3.X but only python 2.7. | 0 | 1 | 0 | 0 | 2016-03-12T11:08:00.000 | 1 | 0.379949 | false | 35,956,868 | 1 | 0 | 0 | 1 | I am trying to install the python-recsys module, but I get this error:
Could not find a version that satisfies the requirement python-recsys
(from versions: ) No matching distribution found for python-recsys
I am using Python 3.4
The code that i am using to install the module is:
pip.exe install python-recsys |
Non ascii file name issue with os.walk | 35,959,633 | 8 | 2 | 1,175 | 0 | python,python-2.7 | Listing directories using a bytestring path on Windows produces directory entries encoded to your system locale. This encoding (done by Windows), can fail if the system locale cannot actually represent those characters, resulting in placeholder characters instead. The underlying filesystem, however, can handle the full unicode range.
The work-around is to use a unicode path as the input; so instead of os.walk(r'C:\Foo\bar\blah') use os.walk(ur'C:\Foo\bar\blah'). You'll then get unicode values for all parts instead, and Python uses a different API to talk to the Windows filesystem, avoiding the encoding step that can break filenames. | 0 | 1 | 0 | 0 | 2016-03-12T15:30:00.000 | 1 | 1.2 | true | 35,959,580 | 1 | 0 | 0 | 1 | I am using os.walk to traverse a folder. There are some non-ascii named files in there. For these files, os.walk gives me something like ???.txt. I cannot call open with such file names. It complains [Errno 22] invalid mode ('rb') or filename. How should I work this out?
I am using Windows 7, python 2.7.11. My system locale is en-us. |
Installing iPython Notebook - opening a $HOME file from editor | 35,972,367 | 1 | 1 | 48 | 0 | ipython-notebook,jupyter-notebook | maybe it's not there? you can create it first, in Mac's terminal
touch $HOME/.ipython/profile_default/ipython_notebook_config.py
and then open it in TextWranggler
open -a /Applications/TextWrangler.app $HOME/.ipython/profile_default/ipython_notebook_config.py | 0 | 1 | 0 | 0 | 2016-03-13T15:37:00.000 | 1 | 1.2 | true | 35,972,196 | 1 | 0 | 0 | 1 | I am attempting to install ipython notebook based on some instructions. However, while I tried to execute this 'In your favorite editor, open the file $HOME/.ipython/profile_default/ipython_notebook_config.py', I can't really open a file from TextWrangler. I am not familiar with this. Could anyone help me out there? Thank you very much!! |
Using Scikit-learn google app engine | 52,134,247 | 0 | 8 | 1,664 | 0 | python,python-2.7,google-app-engine,scikit-learn | The newly-released 2nd Generation Python 3.7 Standard Environment (experimental) can run all modules. It's still in beta, though. | 0 | 1 | 0 | 0 | 2016-03-14T12:39:00.000 | 3 | 0 | false | 35,987,785 | 0 | 1 | 0 | 1 | I am trying to deploy a python2.7 application on google app engine. It uses a few modules like numpy, flask, pandas and scikit-learn. Though I am able to install and use the other modules, installing scikit-learn in the lib folder of the project gives the following error:
Traceback (most recent call last): File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 240, in Handle handler = _config_handle.add_wsgi_middleware(self._LoadHandler()) File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler handler, path, err = LoadObject(self._handler) File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 85, in LoadObject obj = __import__(path[0]) File "/base/data/home/apps/s~category-prediction-1247/v1.391344087004233892/deploynew.py", line 6, in import sklearn File "/base/data/home/apps/s~category-prediction-1247/v1.391344087004233892/lib/sklearn/__init__.py", line 56, in from . import __check_build File "/base/data/home/apps/s~category-prediction-1247/v1.391344087004233892/lib/sklearn/__check_build/__init__.py", line 46, in raise_build_error(e) File "/base/data/home/apps/s~category-prediction-1247/v1.391344087004233892/lib/sklearn/__check_build/__init__.py", line 41, in raise_build_error %s""" % (e, local_dir, ''.join(dir_content).strip(), msg)) ImportError: dynamic module does not define init function (init_check_build) ___________________________________________________________________________ Contents of /base/data/home/apps/s~category-prediction-1247/v1.391344087004233892/lib/sklearn/__check_build: setup.pyc __init__.py _check_build.so setup.py __init__.pyc ___________________________________________________________________________ It seems that scikit-learn has not been built correctly. If you have installed scikit-learn from source, please do not forget to build the package before using it: run python setup.py install or make in the source directory. If you have used an installer, please check that it is suited for your Python version, your operating system and your platform.
Is their any way of using scikit-learn on google app engine? |
why is my program using system cpu time? | 36,002,334 | 1 | 2 | 215 | 0 | python,fortran,profiling,cpu-usage,lapack | Operating systems are constantly switching out what is running at any given moment. Your program will run for a while, but eventually there will be an interrupt, and the system will switch to something else, or it may just decide to run something else for a second or two, then switch back. It is difficult to force an operating system not to do this behavior. That's part of the job of the OS; keeping things moving in all areas. | 0 | 1 | 0 | 0 | 2016-03-14T13:40:00.000 | 1 | 1.2 | true | 35,989,119 | 1 | 0 | 0 | 1 | I wrote a time-consuming python program. Basically, the python program spends most of its time in a fortran routine wrapped by f2py and the fortran routine spends most of its time in lapack. However, when I ran this program in my workstation, I found 80% of the cpu time was user time and 20% of cpu time was system time.
In another SO question, I read:
The difference is whether the time is spent in user space or kernel space. User CPU time is time spent on the processor running your program's code (or code in libraries); system CPU time is the time spent running code in the operating system kernel on behalf of your program.
So if this is true, I assume all the cpu time should be devoted to user time. Does 20 percent system time indicate that I need to profile the program?
EDIT:
More information: I cannot reproduce the 20% system cpu time. In another run, the time command gives:
real 5m14.804s
user 78m6.233s
sys 4m53.896s |
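For what it's worth, the user/system split can also be observed from inside the program via os.times() rather than the external time command; a sketch (attribute access needs Python 3.3+, and absolute numbers will vary by machine):

```python
import os

before = os.times()
sum(i * i for i in range(200000))     # pure user-space arithmetic
after = os.times()

user = after.user - before.user       # time spent in your own (and library) code
system = after.system - before.system # time spent in kernel calls made for you
print("user: %.3fs, system: %.3fs" % (user, system))
```

Noticeable system time usually points at syscalls: I/O, memory mapping, or thread/process scheduling done on the program's behalf.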
How to determine what version of python3 tkinter is installed on my linux machine? | 35,999,383 | 4 | 27 | 40,994 | 0 | python-3.x,tkinter | Run Tkinter.TclVersion or Tkinter.TkVersion and if both are not working, try Tkinter.__version__ | 1 | 1 | 0 | 0 | 2016-03-14T22:30:00.000 | 6 | 0.132549 | false | 35,999,344 | 0 | 0 | 0 | 2 | Hi have been scavenging the web for answers on how to do this but there was no direct answer. Does anyone know how I can find the version number of tkinter? |
How to determine what version of python3 tkinter is installed on my linux machine? | 63,566,647 | 5 | 27 | 40,994 | 0 | python-3.x,tkinter | Type this command on the Terminal and run it.
python -m tkinter
A small window will appear with the heading tk and two buttons: Click Me! and QUIT. There will be a text that goes like This is Tcl/Tk version ___. The version number will be displayed in the place of the underscores. | 1 | 1 | 0 | 0 | 2016-03-14T22:30:00.000 | 6 | 0.16514 | false | 35,999,344 | 0 | 0 | 0 | 2 | Hi have been scavenging the web for answers on how to do this but there was no direct answer. Does anyone know how I can find the version number of tkinter? |
Is there a way to determine how long has an Amazon AWS EC2 Instance been running for? | 36,037,353 | 4 | 4 | 3,416 | 0 | python,amazon-web-services,amazon-ec2,cron,aws-cli | The EC2 service stores a LaunchTime value for each instance which you can find by doing a DescribeInstances call. However, if you stop the instance and then restart it, this value will be updated with the new launch time so it's not really a reliable way to determine how long the instance has been running since its original launch.
The only way I can think of to determine the original launch time would be to use CloudTrail (assuming you have it enabled for your account). You could search CloudTrail for the original launch event and this would have an EventTime associated with it. | 0 | 1 | 0 | 1 | 2016-03-15T18:21:00.000 | 2 | 0.379949 | false | 36,019,161 | 0 | 0 | 1 | 1 | I am looking for a way to programmatically kill long running AWS EC2 Instances.
I did some googling around but I can't seem to find a way to tell how long an instance has been running, so that I can then write a script to delete the instances that have been running longer than a certain time period...
Anybody dealt with this before? |
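Once a LaunchTime (or the CloudTrail EventTime suggested above) is in hand, the "kill if too old" arithmetic is plain datetime math; a sketch with a made-up 24-hour threshold (fetching the timestamp itself would go through the AWS API, e.g. a DescribeInstances call):

```python
from datetime import datetime, timezone

MAX_HOURS = 24.0   # hypothetical cutoff for "long running"

def hours_running(launch_time, now=None):
    # Both datetimes must be timezone-aware (AWS reports UTC timestamps).
    now = now or datetime.now(timezone.utc)
    return (now - launch_time).total_seconds() / 3600.0

launch = datetime(2016, 3, 15, 6, 0, tzinfo=timezone.utc)
now = datetime(2016, 3, 16, 18, 0, tzinfo=timezone.utc)
age = hours_running(launch, now)
assert age == 36.0
print("terminate" if age > MAX_HOURS else "keep")
```

As the answer notes, LaunchTime resets on stop/start, so this measures time since the last start unless the timestamp comes from the original CloudTrail launch event.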
Is there a way to install cx_oracle without root access in linux environment? | 36,050,958 | 1 | 0 | 1,472 | 0 | python,linux,python-2.7,cx-oracle | Yes, you can simply follow these steps:
Download the source archive and unpack it somewhere.
Run the command "python setup.py build"
Copy the library to a location of your choice where you do have access (or you can simply leave it in the build location, too, if you prefer)
Set the environment variable PYTHONPATH to point to the location of cx_Oracle.so | 0 | 1 | 0 | 0 | 2016-03-16T15:01:00.000 | 3 | 0.066568 | false | 36,039,397 | 0 | 0 | 0 | 2 | I'm trying to install cx_oracle with python 2.7.11. All the tutorials I found for installing cx_oracle need root access; however, on the VM I don't have root access to the /usr or /etc folders. Is there any way to install cx_oracle in my user directory?
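A sketch of why step 4 above works: PYTHONPATH entries end up on sys.path, which can equally be extended at runtime without any root access. The ~/cx_oracle/build directory here is just an illustrative placeholder:

```python
import os
import sys

# PYTHONPATH entries are prepended to sys.path at interpreter startup;
# appending at runtime has the same effect for this process only.
build_dir = os.path.expanduser("~/cx_oracle/build")  # hypothetical location
if build_dir not in sys.path:
    sys.path.append(build_dir)

assert build_dir in sys.path
# afterwards: import cx_Oracle   # would succeed once the .so is in build_dir
```

For the shared Oracle client libraries themselves, LD_LIBRARY_PATH (set in the shell, before Python starts) plays the analogous role.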
Is there a way to install cx_oracle without root access in linux environment? | 36,051,557 | 0 | 0 | 1,472 | 0 | python,linux,python-2.7,cx-oracle | Use a Python virtual environment - this way you do not ever need to use System Privs for adding new functionality to your Python Dev Environment.
Look for the command pyvenv - there is lots of info on this. | 0 | 1 | 0 | 0 | 2016-03-16T15:01:00.000 | 3 | 0 | false | 36,039,397 | 0 | 0 | 0 | 2 | I'm trying to install cx_oracle with python 2.7.11. All the tutorials I found for installing cx_oracle need root access; however, on the VM I don't have root access to the /usr or /etc folders. Is there any way to install cx_oracle in my user directory?
do you want ssh for your instance? | 36,051,700 | 2 | 0 | 35 | 0 | python,django,amazon-web-services,ssh | The advice against using ssh comes from a well-meaning bunch that wants us all to create reproducible configurations that don't require administrators to login in order to periodically tweak the config. It also becomes another interface that must be secured.
Ideally, everything you need is independent of ssh because you are providing some internet-accessible service like a webserver/database/etc.
If your process isn't that mature yet, it's acceptable to enable ssh, but you should strive towards not needing it. | 0 | 1 | 1 | 0 | 2016-03-17T04:17:00.000 | 1 | 0.379949 | false | 36,051,581 | 0 | 0 | 0 | 1 | As I'm using AWS, I'm looking into two tutorials just to make sure I do it right. I'm at a step to do eb init, and it asks "do you want ssh for your instance?" One tutorial says I should say yes and the other says I should put no... Isn't ssh the one that connects my laptop with Amazon's network? Shouldn't I put yes for this? But why does the tutorial say I should put no?
Why does Python's IDLE crash when I type a parenthesis on Mac? | 47,341,459 | 1 | 1 | 1,732 | 0 | python,macos,tkinter,crash,python-idle | I found a fix! One that doesn't require changing monitor settings.
In IDLE:
Options Menu > Configure Extensions > CallTips > set to FALSE
Then restart.
Took much research to find that super simple solution... the problem is caused not by an error in IDLE but by an error in the mac's Tcl/Tk code when calltips are called in external monitors above the default monitor. | 0 | 1 | 0 | 0 | 2016-03-17T06:28:00.000 | 3 | 0.066568 | false | 36,053,119 | 1 | 0 | 0 | 1 | Ok, I realize this may be an extremely nuanced question, but it has been bugging me for a while. I like the simple scripting interface of IDLE, but it keeps crashing on me when: (1) I am coding on an external monitor and (2) I type the parenthesis button, "(". IDLE never crashes for me for any other reason than this very specific situation. Strangely, if I have an external monitor connected, but I have the IDLE dev window on my laptop's main screen, I have ZERO problems with crashing. (???) I have lost a substantial amount of code due to this problem.
I am running on Mac OSX Version 10.11.3 and I have a MacBook Pro (Retina, 15-inch, Mid 2015). Any thoughts would be appreciated!
How do I set Anaconda's python as my default python command? | 36,079,150 | 20 | 18 | 63,145 | 0 | python-2.7,anaconda | Your PATH is pointing to the original Python executable. You have to update your PATH.
(Assuming Windows 7)
Right-click on Computer, then Properties, then Advanced system settings, then click the Environment Variables... button.
The lower window has the system variables. Scroll down until you find Path, select it, and click edit. In the screen that appears, update the path that is pointing to your original python.exe to the one that is in the anaconda path.
Close any open command window for update to take effect. | 0 | 1 | 0 | 0 | 2016-03-18T02:35:00.000 | 2 | 1.2 | true | 36,075,166 | 1 | 0 | 0 | 1 | I have both Anaconda and Python 2.7 installed on my Windows machine. Right now the command "python" points to Python 2.7, but I'd like instead for it to point to Anaconda's python. How do I set this up? |
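After updating PATH as described, a quick sanity check (Python 3.3+ for shutil.which) shows which interpreter a bare "python" now resolves to:

```python
import os
import shutil
import sys

# PATH is searched left to right; the first directory containing a
# matching executable wins when you type "python" in a shell.
for i, directory in enumerate(os.environ.get("PATH", "").split(os.pathsep)):
    print(i, directory)

print("resolved on PATH:", shutil.which("python"))
print("interpreter running this snippet:", sys.executable)
```

If the resolved path is still the old one, the Anaconda directory is probably listed after the original Python directory in PATH.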
How to create controlled CPU load on Linux? | 36,087,597 | 0 | 1 | 151 | 0 | python,linux,cpu | Maybe try to use time.sleep() and play around with how long to sleep in between calculations? | 0 | 1 | 0 | 0 | 2016-03-18T14:53:00.000 | 1 | 0 | false | 36,087,488 | 0 | 0 | 0 | 1 | I'd like to know how to generate a controlled CPU load on Linux using shell or python script. By controlled load I mean creating a process that consumes a specified amount of CPU cycles (e.g., 20% of available CPU cycles).
I wrote a python script that does some dummy computation like generating N random integers and sort them using the built-in sort function. I used "time" utility in Linux to compute the User and Kernel time consumed by the process. But I am not sure how to compute the CPU utilization of the specific process from CPU time.
Thanks. |
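One way to get the "specified amount of CPU cycles" asked for above is a duty cycle, in the spirit of the time.sleep() suggestion: busy-loop for the target fraction of each short period and sleep for the rest. A per-core sketch (run one copy per core to load more of the machine):

```python
import time

def cpu_load(target=0.2, period=0.1, duration=2.0):
    """Hold roughly `target` (0..1) of one core busy for `duration`
    seconds by alternating a busy-wait and a sleep inside each period."""
    end = time.time() + duration
    while time.time() < end:
        busy_until = time.time() + target * period
        while time.time() < busy_until:
            pass                          # burn cycles
        time.sleep((1.0 - target) * period)

cpu_load(target=0.2, duration=0.5)        # ~20% of one core for half a second
```

Shorter periods give a smoother load curve in tools like top, at the cost of more sleep/wake overhead.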
Flask-Babel won't translate text on AWS within a docker container, but does locally | 36,165,474 | 0 | 0 | 163 | 0 | python,amazon-web-services,flask,docker,docker-compose | I found the problem.
Locally I am running it on a Vagrant virtual machine on a Windows computer. Because Windows is not a case-sensitive file system, when the python gettext() function was looking for en_US, I was passing it en_us, which it found on Windows. But on AWS it did not, because it was running Linux, which is case sensitive. | 0 | 1 | 0 | 0 | 2016-03-19T01:11:00.000 | 1 | 1.2 | true | 36,096,703 | 0 | 0 | 1 | 1 | I have a flask app that is using flask-babel to translate text. I have created a docker container for it all to run in. And I have verified multiple times that both are being run and built exactly the same way.
When I put the app in my local docker container (using a Vagrant Linux machine), the translations work fine. When I put it on AWS, the translations do not work, and they simply show the msgid text, so things like "website_title" appear instead of the correct localized text.
This is really weird to me because everything is running EXACTLY the same and inside of docker containers, so there shouldn't be anything different about them.
If needed I can post some code snippets with sensitive stuff edited out, but I was more hoping for someone to point me in a general direction on why this might be happening or how to even debug it. As far as I can tell there are no errors being logged anywhere.
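The case-sensitivity diagnosis in the answer above is easy to reproduce without Flask at all: the same lookup that succeeds on a case-insensitive filesystem (Windows, default macOS) fails on the Linux filesystem inside the container:

```python
import os
import tempfile

workdir = tempfile.mkdtemp()
os.mkdir(os.path.join(workdir, "en_US"))           # what gettext() looks for

exact = os.path.isdir(os.path.join(workdir, "en_US"))
folded = os.path.isdir(os.path.join(workdir, "en_us"))
print("en_US found:", exact)    # True everywhere
print("en_us found:", folded)   # True on Windows/macOS defaults, False on Linux

assert exact
```

This is why a locale spelled en_us works on a Windows or macOS development box and silently falls back to msgids once deployed to Linux.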
Optimizing Celery for third party HTTP calls | 36,731,024 | 5 | 9 | 894 | 0 | python,rabbitmq,celery,scalability | First of all - the GIL - that should not be the case, since more machines should go faster. But please check if the load goes only to one core of the server...
I'm not sure if whole Celery is good idea in your case. That is great software, with a lot of functionality. But, if that is not needed, it is better to use something simpler - just in case some of that features interfere. I would write small PoC, check other client software, like pika. If that would not help - problem is with infrastructure. If helps - you have solution. :)
It is really hard to tell what is going on. It can be something with IO, or too many network calls... I would step back - to find out something working. Write integration tests, but be sure to use 2-3 machines just to use full tcp stack. Be sure to have CI, and run that tests once a day, or so - to see if things are going in right direction. | 0 | 1 | 0 | 0 | 2016-03-19T19:33:00.000 | 1 | 1.2 | true | 36,106,216 | 0 | 0 | 0 | 1 | We are using celery to make third party http calls. We have around 100+ of tasks which simply calls the third party HTTP API calls. Some tasks call the API's in bulk, for example half a million requests at 4 AM in morning, while some are continuous stream of API calls receiving requests almost once or twice per second.
Most of API call response time is between 500 - 800 ms.
We are seeing very slow delivery rates with celery. For most of the above tasks, the max delivery rate is around 100/s (max) to almost 1/s (min). I believe this is very poor and something is definitely wrong, but I am not able to figure out what it is.
We started with a cluster of 3 servers and incrementally made it a cluster of 7 servers, but with no improvement. We have tried different concurrency settings, from autoscale to a fixed 10, 20, 50, or 100 workers. There is no result backend and our broker is RabbitMQ.
Since our task execution time is very small (less than a second for most tasks), we have also tried setting the prefetch count to unlimited as well as to various other values.
--time-limit=1800 --maxtasksperchild=1000 -Ofair -c 64 --config=celeryconfig_production
Servers are 64 G RAM, Centos 6.6.
Can you give me idea on what could be wrong or pointers on how to solve it?
Should we go with gevents? Though I have little of idea of what it is. |
Installing cPickle fails | 36,156,896 | 4 | 1 | 7,478 | 0 | python-2.7,pickle | Actually, I tried importing cPickle and it worked. But I don't know why I got the error "could not find any downloads that satisfy the requirement cPickle". I would appreciate it if someone could explain the reason.
Also, as commented by mike, this cleared my doubt:
"cPickle is installed when python itself is installed -- it's part of the standard library. So, just import cPickle and it should be there" | 0 | 1 | 0 | 0 | 2016-03-22T13:55:00.000 | 1 | 1.2 | true | 36,156,353 | 1 | 0 | 0 | 1 | I am working on ubuntu 14.04. Python version: 2.7.6.
I am trying to install cPickle, but I am getting error:
"could not find any downloads that satisfy the requirement cPickle"
I tried via, pip and apt-get as well. What might be reason, has this package been removed completely? |
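For what it's worth, a portable import sketch (cPickle ships inside Python 2 itself, so there is nothing for pip to download; on Python 3 the C implementation sits behind plain pickle automatically):

```python
try:
    import cPickle as pickle  # Python 2: C-accelerated pickle from the stdlib
except ImportError:
    import pickle  # Python 3: plain pickle already uses the C implementation

# Round-trip a small object to confirm the module works.
blob = pickle.dumps({"answer": 42})
restored = pickle.loads(blob)
print(restored)  # {'answer': 42}
```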
How to set application path in music21 | 36,364,217 | 2 | 2 | 1,301 | 0 | python,linux,anaconda,midi,music21 | First of all, are you sure you have a midi player?
Timidity is a good option. Check if you have it installed, and if you don't, just use sudo apt-get install timidity
Once installed, the path you need should be '/usr/bin/timidity' | 0 | 1 | 0 | 0 | 2016-03-22T22:31:00.000 | 1 | 0.379949 | false | 36,166,485 | 0 | 0 | 1 | 1 | I'm using Ubuntu 14.04 64bit.
I don't know what to set on path to application.
I have installed music21 in anaconda3, but I got output as follows:
music21.converter.subConverters.SubConverterException: Cannot find a valid application path for format midi. Specify this in your Environment by calling environment.set(None, 'pathToApplication')
What application should I choose? I've seen a lot of pages but no one tells me what to set. |
How to change Python default compiler to GCC? | 36,189,588 | 2 | 1 | 3,595 | 0 | python,gcc,theano | Edit Distutils config file C:\Python2.7\Lib\distutils\distutils.cfg (Create the file if it already does not exist).
Add the following to the file:
[build]
compiler = mingw32
This should work. | 0 | 1 | 0 | 0 | 2016-03-23T21:55:00.000 | 1 | 0.379949 | false | 36,189,453 | 1 | 0 | 0 | 1 | I have Windows 10 and Python 2.7 installed. When I run IDLE I find this:
Python 2.7.10 (default, Oct 14 2015, 16:09:02)
[MSC v.1500 32 bit (Intel)]
I want the default compiler here to be MinGW's GCC (I already installed MinGW) because I cannot import Theano with the MSC compiler.
I tried all the tutorials out there and every time I successfully install Theano but when I try to import it I get the error "Problem occurred during compilation with the command line below:" and I get a huge list of errors. Btw, I don't have VS installed on my system |
How can I run my own shell from elisp? | 36,212,585 | 0 | 0 | 154 | 0 | python,shell,emacs,elisp | Emacs has a number of different ways to interact with external programs. From your text, I suspect you need to look at comint in the Emacs manual and the Elisp reference manual. Comint is the low-level, general shell-in-a-buffer functionality (it is what shell mode uses).
Reading between the lines of your post, I would also suggest you have a look at emacspeak and speechd.el, both of which are packages which add speech to Emacs. Speechd.el is bare bones and uses speech-dispatcher, while emacspeak is very feature rich. The emacspeak package uses a Tcl script which communicates with hardware or software speech servers. It also has a Mac version written in python which communicates with the OSX accessibility (VoiceOver) subsystem. Looking at how these packages work will likely give you good examples of how to make yours do what you want. | 0 | 1 | 0 | 0 | 2016-03-24T06:19:00.000 | 4 | 1.2 | true | 36,194,303 | 0 | 0 | 0 | 2 | I wrote a simple shell in python and compiled it with Nuitka.
My shell has some simple commands, such as "say string", "braille string", "stop", etc.
This program uses the python accessible_output package to communicate with a screen reader on Windows.
Ok, this works well from a normal shell, or when executing it from Windows.
Now, I would like to run this program from within Emacs, like a normal shell in Emacs.
I tried some functions, "start-process" and "shell-command", but I can't write commands.
My program displays a prompt, like the python interpreter, where I can type my commands.
Elisp is able to run python shells and mysql shells, but I'm unable to run my own shell.
Help! |
How can I run my own shell from elisp? | 36,230,312 | 0 | 0 | 154 | 0 | python,shell,emacs,elisp | What about just launching your script from inside an emacs shell buffer?
M-x shell RET /path/to/my/script RET | 0 | 1 | 0 | 0 | 2016-03-24T06:19:00.000 | 4 | 0 | false | 36,194,303 | 0 | 0 | 0 | 2 | I wrote a simple shell in python and compiled it with Nuitka.
My shell has some simple commands, such as "say string", "braille string", "stop", etc.
This program uses the python accessible_output package to communicate with a screen reader on Windows.
Ok, this works well from a normal shell, or when executing it from Windows.
Now, I would like to run this program from within Emacs, like a normal shell in Emacs.
I tried some functions, "start-process" and "shell-command", but I can't write commands.
My program displays a prompt, like the python interpreter, where I can type my commands.
Elisp is able to run python shells and mysql shells, but I'm unable to run my own shell.
Help! |
How to start python simpleHTTPServer on Windows 10 | 36,223,473 | 10 | 5 | 25,052 | 0 | python,windows-10,simplehttpserver | Ok, so a different command is apparently needed.
This works:
C:\pathToIndexfile\py -m http.server
As pointed out in a comment, the change to "http.server" is not because of windows, but because I changed from python 2 to python 3. | 0 | 1 | 1 | 0 | 2016-03-25T15:55:00.000 | 3 | 1.2 | true | 36,223,345 | 0 | 0 | 0 | 1 | I recently bought a Windows 10 machine and now I want to run a server locally for testing a webpage I am developing.
On Windows 7 it was always very simple to start an HTTP server via python and the command prompt. For example, writing the command below would fire up an HTTP server and I could view the website through localhost.
C:\pathToIndexfile\python -m SimpleHTTPServer
This does, however, not seem to work on Windows 10...
Does anyone know how to do this on Windows 10? |
How to perform sql schema migrations in app engine managed vm? | 36,407,336 | -2 | 1 | 180 | 1 | python,google-app-engine,google-cloud-sql,gcloud | SQL schema migration is a well-known branch of SQL DB administration which is not specific to Cloud SQL, which is mainly different to other SQL systems in how it is deployed and networked. Other than this, you should look up schema migration documentation and articles online to learn how to approach your specific situation. This question is too broad for Stack Overflow as it is, however. Best of luck! | 0 | 1 | 0 | 0 | 2016-03-26T02:54:00.000 | 1 | 1.2 | true | 36,231,114 | 0 | 0 | 1 | 1 | I'm currently using google cloud sql 2nd generation instances to host my database. I need to make a schema change to a table but I'm not sure of the best way to do this.
Ideally, before I deploy using gcloud preview app deploy my migrations will run so the new version of the code is using the latest schema. Also, if I need to rollback to an old version of my app the migrations should run for that point in time. Is there a way to integrate sql schema migrations with my app engine deploys?
My app is app engine managed VM python/flask. |
couldn't delete file.dll using python script | 36,264,203 | 2 | 0 | 531 | 0 | python,dll | Because it is a .dll that you are trying to delete, there is a big chance that the file is in use and therefore can't be deleted.
Try to see if you can delete it manually first. | 0 | 1 | 0 | 0 | 2016-03-28T14:09:00.000 | 1 | 0.379949 | false | 36,264,081 | 1 | 0 | 0 | 1 | I wrote a simple script to delete a few files from some directories, I have to delete all the .exe files and all the .dll files. I manage to delete the .exe files using os.remove("path_name") but when I am trying to delete the .dll files I get "Windows Error: [Error 267] The directory name is invalid". I am adding my code below and I hope someone can help me solve the problem.
for name in dirs:
    dirPath = RES_PATH + "\\" + name
    dirsInside = os.listdir(dirPath)
    LOG_FILE = open(dirPath + "\\log.log", 'w')
    for doc in dirsInside:
        if (".exe" in doc):
            os.remove(dirPath + "\\" + doc)
        elif (".dll" in doc):
            shutil.rmtree(os.path.join(dirPath, doc))
        if ("ResultFile.txt" in doc):
            pathToResultFile = dirPath + "\\" + doc
            fileResult = open(pathToResultFile, 'r')
            lines = fileResult.readlines()
Thanks in advance.
EDIT
when I am trying to use the os.unlink() I get:
"WindowsError: [Error 5] Access is denied"
for the .dll file (the .exe file is deleted as it should) |
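A hedged sketch of the corrected loop (not from the thread): os.remove works for files of any extension, while shutil.rmtree is only for directories, which is what triggers "The directory name is invalid"; a DLL loaded by a running process raises an access error instead, which can simply be caught:

```python
import os

def remove_binaries(dir_path):
    """Delete .exe and .dll files in dir_path, skipping files that are in use."""
    for doc in os.listdir(dir_path):
        if doc.lower().endswith((".exe", ".dll")):
            try:
                os.remove(os.path.join(dir_path, doc))
            except OSError as exc:  # e.g. DLL currently loaded by a process
                print("could not delete %s: %s" % (doc, exc))
```

The function name and the skip-on-error policy are illustrative choices, not the asker's code.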
`virtualenv` with Python 3.5 on Ubuntu 15.04 | 36,312,213 | 2 | 2 | 906 | 0 | python,virtualenv,python-3.5,ubuntu-15.04 | You can just install the latest version of python. You can also download and install the different versions in your user's home dir.
In case you are planning to have multiple versions installed manually. This is from the offical python README file.
Installing multiple versions
On Unix and Mac systems if you intend to install multiple versions of Python using the same installation prefix (--prefix argument to the configure script) you must take care that your primary python executable is not overwritten by the installation of a different version. All files and directories installed using "make altinstall" contain the major and minor version and can thus live side-by-side. "make install" also creates ${prefix}/bin/python3 which refers to ${prefix}/bin/pythonX.Y. If you intend to install multiple versions using the same prefix you must decide which version (if any) is your "primary" version. Install that version using "make install". Install all other versions using "make altinstall".
For example, if you want to install Python 2.5, 2.6 and 3.0 with 2.6 being the primary version, you would execute "make install" in your 2.6 build directory and "make altinstall" in the others.
Once done that you can continue using the virtual environment for python using the python version of your choice. | 0 | 1 | 0 | 0 | 2016-03-30T14:34:00.000 | 1 | 0.379949 | false | 36,312,022 | 1 | 0 | 0 | 1 | I've never used virtualenv, I'm working on Ubuntu 15.04 (remotely via ssh), and I've been told I can't make any changes to system the Pythons. Ubuntu 15.04 comes with Pythons 2.7 and 3.4.3, but I want Python 3.5 in my virtualenv. I've tried virtualenv -p python3.5 my_env and it gives The executable python3.5 (from --python=python3.5) does not exist, which I take to mean that it's complaining about the system not having Python 3.5. So, is it impossible to create a virtualenv with Python 3.5, if the system does not already have Python 3.5? |
How to Enable Scrolling for Python Console Application | 36,342,079 | 0 | 1 | 4,402 | 0 | python,python-2.7,scroll,console-application,windows-console | You can not do it from your python script (OK, it is possible, but most probably you don't want to do it). Scrolling depends on the environment (Windows or Linux terminal, it doesn't matter). So it is up to users to set it up in a way that is good for them.
On Linux you can use less or more:
python script.py | less
it will buffer the output from the script and give the user the ability to scroll up and down without losing any information.
The application has quite a few parameters, so when the --help parameter is used the documentation greatly exceeds the size of the console window. Even with the console window maximized. Which is fine, but the issue I'm encountering is that the user is unable to scroll up to view the rest of the help documentation. I have configured my windows console properties appropriately, such as the "Window Size" and "Screen Buffer Size". And I have verified that those changes are working, but they only work outside of the Python environment. As soon as I execute a Python script or run a --help command for a script, the console properties no longer apply. The scroll bar will disappear from the window and I can no longer scroll to the top to see the previous content.
So basically, I need to figure out how to enable scrolling for my Python console programs. I need scrolling enabled both when executing a script and when viewing the --help documentation. I'm not sure how to go about doing that. I have been searching online for any info on the subject and I have yet to find anything even remotely helpful.
At this point, I am completely stuck. So if someone knows how to get scrolling to work, I would greatly appreciate your help. |
'pip' is not recognized as a command | 36,357,798 | 0 | 0 | 629 | 0 | python,command,pip | Try opening a new command prompt and run pip from that. Sometimes changes to environment variables don't propagate to already-open prompts. | 0 | 1 | 0 | 0 | 2016-04-01T13:12:00.000 | 1 | 0 | false | 36,357,301 | 1 | 0 | 0 | 1 | I installed pip-8.1.1. The windows prompted that it was successfully installed. After I added 'C:\Python27\Scripts;' to Path in System Variables, the system still couldn't recognize 'pip' as a command.
Here is the error message: 'pip' is not recognized as an internal or external command, operable program or batch file.
How do I correctly add pip to Path? |
How do I run a python script using an already running blender? | 37,028,985 | 2 | 6 | 1,547 | 0 | python,process,blender | My solution was to launch Blender via console with a python script (blender --python script.py) that contains a while loop and creates a server socket to receive requests to process some specific code. The loop will prevent blender from opening the GUI, and the socket will handle the multiple requests inside the same blender process. | 0 | 1 | 0 | 0 | 2016-04-01T16:03:00.000 | 2 | 0.197375 | false | 36,360,876 | 1 | 0 | 0 | 1 | Normally, I would use "blender -P script.py" to run a python script. In this case, a new blender process is started to execute the script. What I am trying to do now is to run a script using a blender process that is already running, instead of starting a new one.
I have not seen any source on this issue so far, which makes me concerned about the actual feasibility of this approach.
Any help would be appreciated. |
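The accepted socket-loop approach can be sketched as a minimal, non-Blender-specific Python server; the port number and the exec-based dispatch are assumptions for illustration. Launched via something like blender --python listen.py, a loop like this keeps the process alive and runs each received script inside it:

```python
import socket

def serve_scripts(port, namespace, max_requests):
    # Accept script text over TCP and exec it inside this long-lived process.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    for _ in range(max_requests):
        conn, _addr = srv.accept()
        script = conn.recv(65536).decode("utf-8")
        exec(script, namespace)  # inside Blender the script could use bpy
        conn.close()
    srv.close()
```

Because the loop never returns, Blender stays headless and every request shares the same process state.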
tail a log file in Robot Framework | 36,364,642 | 3 | 2 | 1,238 | 0 | python,testing,robotframework | There's no need to do tail -f. At the start of your test you can get the number of bytes in the file. Let the test run, and then read the file starting at the byte offset that you calculated earlier (or read the whole file, and use a slice to look at the new data) | 0 | 1 | 0 | 0 | 2016-04-01T17:22:00.000 | 2 | 1.2 | true | 36,362,294 | 0 | 0 | 0 | 1 | I want to open a file and tail -f the output. I'd like to be able to open the file at the beginning of my test in a subprocess, execute the test, then process the output starting from the beginning of the tail.
I've tried using Run Process, but that just spins, as the process never terminates. I tried using Start Process followed by Get Process Result, but I get an error saying Getting results of unfinished processes is not supported.
Is this possible? |
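The byte-offset idea from the answer can be sketched in plain Python (the helper names and log path are placeholders, suitable for wrapping as Robot Framework keywords):

```python
def snapshot_size(path):
    """Call before the test: remember how many bytes the log already has."""
    import os
    return os.path.getsize(path)

def read_new_data(path, offset):
    """Call after the test: return only what was appended since the snapshot."""
    with open(path, "r") as f:
        f.seek(offset)
        return f.read()
```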
How to disable/enable device using devcon.exe in python for some iteration | 43,858,051 | -1 | 2 | 992 | 0 | python,windows,devcon | I know this is very late. But your problem can be solved by the subprocess module. Let me know if you are unable to find what you need, and I will then post the code. Thanks | 0 | 1 | 0 | 0 | 2016-04-02T18:57:00.000 | 1 | -0.197375 | false | 36,377,506 | 0 | 0 | 0 | 1 | I am looking for solution, I want to disable/enable the particular device in windows system using devcon.exe in python script.
I am able to disable/enable using devcon.exe from the Windows cmd.exe separately, but I am looking for this activity to be done using a python script, verifying 10 iterations.
I need to automate one test case which has to verify disabling/enabling a particular device in Windows using devcon.exe for 10 continuous iterations and record a log.
Estimated Cost field is missing in Appengine's new Developer Console | 36,388,402 | 1 | 0 | 18 | 0 | google-app-engine,google-app-engine-python | App Engine > Dashboard
This view shows how much you are charged so far during the current billing day, and how many hours you still have until the reset of the day. This is equivalent to what the old console was showing, except there is no "total" line under all charges.
App Engine > Quotas
This view shows how much of each daily quota have been used.
App Engine > Quotas > View Usage History
This view gives you a summary of costs for each of the past 90 days. Clicking on a day gives you a detailed break-down of all charges for that day. | 0 | 1 | 0 | 0 | 2016-04-03T14:16:00.000 | 1 | 0.197375 | false | 36,386,528 | 0 | 0 | 1 | 1 | In the old (non-Ajax) Google Appengine's Developer Console Dashboard - showed estimated cost for the last 'n' hours. This was useful to quickly tell how the App engine is doing vis-a-vis the daily budget.
This field seems to be missing in the new Appengine Developer Console. I have tried to search various tabs on the Console and looked for documentation, but without success.
Looking for any pointers as to how do I get to this information in the new Console and any help/pointers are highly appreciated ! |
Start a process with python and get the PID (Linux) | 36,391,683 | 4 | 2 | 451 | 0 | python,linux,alsa | The pid attribute of the subprocess.Popen object contains its PID, but if you need to terminate the subprocess then you should just use the terminate() method.
You should consider using pyao or pygst/gst-python instead though, if you need finer control over audio. | 0 | 1 | 0 | 1 | 2016-04-03T21:50:00.000 | 2 | 1.2 | true | 36,391,651 | 0 | 0 | 0 | 1 | I'm working with IoT on the Intel Galileo with a Yocto image. I have it so that a python script will execute 'aplay audio.wav', but I want it to also get the PID of that aplay process in case the program will have to stop it. Sorry for being very short and brief.
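A minimal subprocess sketch of the first answer (the audio file name is a placeholder, and sleep stands in for aplay here so the idea stays runnable on any POSIX machine):

```python
import subprocess

# In the real script this would be: subprocess.Popen(["aplay", "audio.wav"])
proc = subprocess.Popen(["sleep", "60"])
print("child PID:", proc.pid)  # available immediately, no shell parsing needed

# Later, when playback must stop:
proc.terminate()   # sends SIGTERM to the child
proc.wait()        # reap the child so it doesn't linger as a zombie
```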
How to check whether someone enters a Path directory on Windows or Mac? | 36,409,314 | 0 | 1 | 32 | 0 | python-3.x,file-io,path | Actually, unless you are using the new pathlib, the thing returned in both cases is just a str.
Also, NT accepts / as a path delimiter, and to POSIX \ is just another character.
So -- no, you can't tell, at least not without trying to use the path; and that will only tell you if something is wrong, not if something can work. | 0 | 1 | 0 | 0 | 2016-04-04T16:43:00.000 | 2 | 0 | false | 36,408,297 | 0 | 0 | 0 | 2 | Macs return a Posixpath when the user enters a path. Windows returns a WindowsPath object when the user does the same thing. Is there a way for me to check whether the input is valid depending on the machine? |
How to check whether someone enters a Path directory on Windows or Mac? | 36,408,579 | 1 | 1 | 32 | 0 | python-3.x,file-io,path | os.path.sep gives you the path separator for the platform, \\ for windows and / for unix.
But the thing is, if you need this to implement an if/else, then don't do it that way. The os.path functions are aware of platform-specific behavior and they will take care of it. | 0 | 1 | 0 | 0 | 2016-04-04T16:43:00.000 | 2 | 1.2 | true | 36,408,297 | 0 | 0 | 0 | 2 | Macs return a Posixpath when the user enters a path. Windows returns a WindowsPath object when the user does the same thing. Is there a way for me to check whether the input is valid depending on the machine?
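A small sketch of what the answers describe (os.path adapts to the host platform, so branching on the separator is rarely needed):

```python
import os
import os.path

# os.path.sep is "\\" on Windows and "/" on POSIX systems.
print("separator on this machine:", os.path.sep)

# Let os.path do the platform-aware work instead of an if/else:
p = os.path.join("some", "dir", "file.txt")
head, tail = os.path.split(p)
print(head, tail)
```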
dynamic routing , tornado deployment in production | 36,427,016 | 1 | 1 | 412 | 0 | python,nginx,dynamic,routing,tornado | When updating your python code you can start a new set of python processes on different ports (say 8001-8004 for the previous set and 8011-8014 for the new set), then modify your nginx config to redirect to 8011-8014 instead of 8001-8004 and run service nginx reload (or the equivalent for your OS).
This way, nginx will redirect new requests to the new processes without dropping any request and finishing pending ones on the previous processes. When you know that all pending requests to the old set of python processes are finished (that might be non trivial) you can stop them. | 0 | 1 | 0 | 0 | 2016-04-05T10:06:00.000 | 1 | 1.2 | true | 36,423,257 | 0 | 0 | 0 | 1 | I have 4 python tornado threads running on different ports on different machines. I used nginx to route and load balance. The code is the same for all of them. It is a asynchronous code. I also have a local file, lets say function.py on each machine that gets called by the python thread, does some computation and returns the answer.
My requirement is that I may need to periodically update the function.py file. However, I do not want the server to be stopped to reload the function since I don't want to drop any incoming request. I am open to changing nginx to something else if required. Any suggestions will be appreciated. Thanks!
Edit:
Could there be a way to modify/configure the nginx in such a way that it will redirect to certain servers(say port 8011-8014) only when they are up ? In that case I can modify the main python threads and then gracefully shut down port 8011-8014. But is this type of configuration feasible ? |
Jupyter: "notebook" is not a jupyter command | 36,457,255 | 1 | 1 | 2,769 | 0 | python,ipython,jupyter | After trying a bunch of solution, I found the quickest solution: using conda instead of pip. Or just use anaconda, which provides jupyter, too. | 0 | 1 | 0 | 0 | 2016-04-06T02:58:00.000 | 4 | 0.049958 | false | 36,440,682 | 1 | 0 | 0 | 4 | I want to have jupyter notebook installed. But on my MacBook Pro (OS X El Capitan) and my web server (Debian 7), I get the same error: jupyter notebook is not a jupyter command.
I just follow the official installation instruction. And no error occurs during installation.
I searched for solutions but none of them works. What should I do now? |
Jupyter: "notebook" is not a jupyter command | 41,560,474 | 0 | 1 | 2,769 | 0 | python,ipython,jupyter | I met with the same problem when I used fish; when I switched to bash, everything worked well! | 0 | 1 | 0 | 0 | 2016-04-06T02:58:00.000 | 4 | 0 | false | 36,440,682 | 1 | 0 | 0 | 4 | I want to have jupyter notebook installed. But on my MacBook Pro (OS X El Capitan) and my web server (Debian 7), I get the same error: jupyter notebook is not a jupyter command.
I just follow the official installation instruction. And no error occurs during installation.
I searched for solutions but none of them works. What should I do now? |
Jupyter: "notebook" is not a jupyter command | 41,647,660 | 0 | 1 | 2,769 | 0 | python,ipython,jupyter | In my case, it was a matter of agreeing to the Xcode License Agreement with: sudo xcodebuild -license, before running sudo pip install notebook. | 0 | 1 | 0 | 0 | 2016-04-06T02:58:00.000 | 4 | 0 | false | 36,440,682 | 1 | 0 | 0 | 4 | I want to have jupyter notebook installed. But on my MacBook Pro (OS X El Capitan) and my web server (Debian 7), I get the same error: jupyter notebook is not a jupyter command.
I just follow the official installation instruction. And no error occurs during installation.
I searched for solutions but none of them works. What should I do now? |
Jupyter: "notebook" is not a jupyter command | 48,656,548 | 0 | 1 | 2,769 | 0 | python,ipython,jupyter | What worked for me was to use the following command in MacOS High Sierra 10.13.
$HOME/anaconda3/bin/activate | 0 | 1 | 0 | 0 | 2016-04-06T02:58:00.000 | 4 | 0 | false | 36,440,682 | 1 | 0 | 0 | 4 | I want to have jupyter notebook installed. But on my MacBook Pro (OS X El Capitan) and my web server (Debian 7), I get the same error: jupyter notebook is not a jupyter command.
I just follow the official installation instruction. And no error occurs during installation.
I searched for solutions but none of them works. What should I do now? |
using os.system() to run a command without root | 36,445,985 | 4 | 2 | 1,614 | 0 | python,root,python-2.x | Try to use os.seteuid(some_user_id) before os.system("some bash command"). | 0 | 1 | 0 | 0 | 2016-04-06T08:42:00.000 | 2 | 1.2 | true | 36,445,861 | 0 | 0 | 0 | 2 | I have a python 2 script that is run as root. I want to use os.system("some bash command") without root privileges, how do I go about this? |
using os.system() to run a command without root | 47,768,790 | -1 | 2 | 1,614 | 0 | python,root,python-2.x | I have tested on my PC. If you run the python script like 'sudo test.py', the question is resolved. | 0 | 1 | 0 | 0 | 2016-04-06T08:42:00.000 | 2 | -0.099668 | false | 36,445,861 | 0 | 0 | 0 | 2 | I have a python 2 script that is run as root. I want to use os.system("some bash command") without root privileges, how do I go about this?
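A hedged sketch of the seteuid idea from the first answer: drop to an unprivileged user in a child process before running the command. The username is a placeholder, and the privilege drop only succeeds when the parent really is root:

```python
import pwd
import subprocess

def run_unprivileged(cmd, username="nobody"):
    """Run a shell command as `username`; the parent must be root."""
    entry = pwd.getpwnam(username)

    def demote():
        # Runs in the child just before exec: drop the group first, then the user.
        import os
        os.setgid(entry.pw_gid)
        os.setuid(entry.pw_uid)

    return subprocess.call(cmd, shell=True, preexec_fn=demote)
```

Dropping privileges in a forked child (via preexec_fn) keeps the parent process running as root, unlike calling os.seteuid in the parent directly.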
Python struct on windows | 36,464,750 | 0 | 0 | 50 | 0 | php,python,struct | It ended up being python gzip, which shifted all the bytes, destroying the data. | 0 | 1 | 0 | 1 | 2016-04-06T21:43:00.000 | 1 | 0 | false | 36,462,908 | 0 | 0 | 0 | 1 | I have created a python program using struct that saves data in files. The data consists of a header (300 chars) and data (36000 int-float pairs). On Ubuntu this works and I can unpack the data for my PHP setup.
I unpack the data in PHP by loading the content into a string and using unpack. I quickly found that one pair of int and float consumed the same as 8 chars in the PHP string.
When I then moved this to Windows, the data didn't take as much space, and when I try to unpack it in PHP, it seems to get unaligned from the binary string quickly.
Is there any way to get the struct in php to use the architecture to produce the same output as ubuntu?
I have tried the alignment options with struct (<,>,!,=).
My ubuntu dev setup is 64bit and the server is also 64bit. I have tried using both 32bit python and 64bit python on the windows server. |
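For reference, a little-endian standard-size sketch: with an explicit "<" prefix, struct uses standard sizes and no padding, so the packed bytes are identical on Windows and Ubuntu, 32-bit or 64-bit (the format strings here are assumptions about the file layout, not the asker's actual code):

```python
import struct

PAIR_FMT = "<if"  # standard-size int (4 bytes) + float (4 bytes), no padding

packed = struct.pack(PAIR_FMT, 42, 3.5)
print(len(packed))  # 8 on every platform; native "@if" may differ

value = struct.unpack(PAIR_FMT, packed)
print(value)  # (42, 3.5) -- 3.5 is exactly representable as a 32-bit float
```

The "@" (native) mode is the only one whose sizes and alignment depend on the architecture; "<", ">", "!", and "=" all use standard sizes.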
How to serve Beaker Notebook from different directory | 37,124,128 | 0 | 2 | 178 | 0 | ipython,beaker-notebook | If you want to change the current working directory, I don't think that's possible.
But if you want to serve files as in make them available to the web server that creates the page, use ~/.beaker/v1/web as described in the "Generating and accessing web content" tutorial. | 0 | 1 | 0 | 0 | 2016-04-07T19:25:00.000 | 2 | 0 | false | 36,485,392 | 1 | 0 | 0 | 1 | Trying to experiment with Beaker Notebooks, but I can not figure out how to launch from a specified directory. I've downloaded the .zip file (I'm on Windows 10), and can launch from that directory using the beaker.command batch file, but cannot figure out where to configure or set a separate launch directory for a specific notebook. With Jupyter notebooks, launching from the saved .ipynb file serves from that directory, but I cannot figure out how to do the same for Beaker notebooks.
Does anyone know the correct method to serve a Beaker Notebook from various parent directories?
Thanks. |
Permission denied or Host key problems | 36,502,436 | 0 | 0 | 39 | 0 | python,bash,apache,server,pyramid | Well you changed the owner of the files to root, and then you ran as root, and it worked, so that makes sense. The problem is that root isn't necessarily the user executing the script in your webapp. You need to find which user is trying to execute the script, and then change the files' ownership to that user (depending on how the scripts are invoked, you may need to chmod them as well to make sure they are executable) | 0 | 1 | 0 | 1 | 2016-04-08T08:05:00.000 | 2 | 0 | false | 36,494,553 | 0 | 0 | 0 | 1 | I have a web application which is written with python (Pyramid) on the Apache server, and inside one of the Python scripts we are launching a SH file which is a service for sending SMS.
The problem is that the permission is always denied.
We tried to run the SH file by logging in as root, and it works.
We changed the owner of both files (the Python one and the SH one) to 'root', but it does not work!
Any ideas?!
Asynchronous task queues and asynchronous IO | 36,518,663 | 3 | 7 | 2,039 | 0 | python,asynchronous,concurrency,celery | Asynchronous IO is a way to use sockets (or more generally file descriptors) without blocking. This term is specific to one process or even one thread. You can even imagine mixing threads with asynchronous calls. It would be completely fine, yet somewhat complicated.
Now I have no idea what asynchronous task queue means. IMHO there's only a task queue, it's a data structure. You can access it in asynchronous or synchronous way. And by "access" I mean push and pop calls. These can use network internally.
So task queue is a data structure. (A)synchronous IO is a way to access it. That's everything there is to it.
The term asynchronous is heavily overused nowadays. The hype is real.
As for your second question:
Message is just a set of data, a sequence of bytes. It can be anything. Usually these are some structured strings, like JSON.
Task == message. The different word is used to indicate the purpose of that data: to perform some task. For example you would send a message {"task": "process_image"} and your consumer will fire an appropriate function.
Task queue Q is a just a queue (the data structure).
Producer P is a process/thread/class/function/thing that pushes messages to Q.
Consumer (or worker) C is a process/thread/class/function/thing that pops messages from Q and does some processing on it.
Message broker B is a process that redistributes messages. In this case a producer P sends a message to B (rather then directly to a queue) and then B can (for example) duplicate this message and send to 2 different queues Q1 and Q2 so that 2 different workers C1 and C2 will get that message. Message brokers can also act as protocol translators, can transform messages, aggregate them and do many many things. Generally it's just a blackbox between producers and consumers.
As you can see, there are no formal definitions of these things and you have to use a bit of intuition to fully understand them. | 0 | 1 | 0 | 0 | 2016-04-09T14:50:00.000 | 2 | 0.291313 | false | 36,518,400 | 0 | 0 | 0 | 1 | As I understand, asynchronous networking frameworks/libraries like twisted, tornado, and asyncio provide asynchronous IO by implementing nonblocking sockets and an event loop. Gevent achieves essentially the same thing by monkey patching the standard library, so explicit asynchronous programming via callbacks and coroutines is not required.
On the other hand, asynchronous task queues, like Celery, manage background tasks and distribute those tasks across multiple threads or machines. I do not fully understand this process but it involves message brokers, messages, and workers.
My questions:
Do asynchronous task queues require asynchronous IO? Are they in any way related? The two concepts seem similar, but the implementations at the application level are different. I would think that the only thing they have in common is the word "asynchronous", so perhaps that is throwing me off.
Can someone elaborate on how task queues work and the relationship between the message broker (why is it required?), the workers, and the messages (what are messages? bytes?).
Oh, and I'm not trying to solve any specific problems, I'm just trying to understand the ideas behind asynchronous task queues and asynchronous IO. |
Share an Intellij Python Project Between Windows and OSX | 36,536,584 | 0 | 0 | 22 | 0 | python,intellij-idea | You can always put your .idea folder into your .gitignore file (or the equivalent for your VCS) and avoid sharing it between systems and developers.
If you don't have project-specific settings there - it's recommended way to do it. | 0 | 1 | 0 | 0 | 2016-04-10T21:58:00.000 | 1 | 0 | false | 36,536,019 | 0 | 0 | 0 | 1 | I have an Intellij 15 Python project created using the Python plugin. I'm on OSX and have the Project Python SDK set to /usr/bin/python. I've got another developer on my team that is on Windows and his SDK is at C:\Python27\Python.exe
Is there any way I can share this project via source control without us both stepping on each other's toes each time I update and need to change the SDK?
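In Git terms, the answer's suggestion is a one-line ignore rule (the folder name is IntelliJ's standard per-project settings directory):

```
# IDE settings, including the per-machine Python SDK path
.idea/
```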
de-Bazel-ing TensorFlow Serving | 37,579,943 | 0 | 4 | 443 | 0 | python,tensorflow,tensorflow-serving | You are close; you need to update the environment as they do in this script:
.../serving/bazel-bin/tensorflow_serving/example/mnist_export
I printed out the environment update and did it manually:
export PYTHONPATH=...
then I was able to import tensorflow_serving | 0 | 1 | 0 | 0 | 2016-04-11T23:47:00.000 | 1 | 0 | false | 36,561,231 | 0 | 1 | 0 | 1 | While I admire, and am somewhat baffled by, the documentation's commitment to mediating everything related to TensorFlow Serving through Bazel, my understanding of it is tenuous at best. I'd like to minimize my interaction with it.
I'm implementing my own TF Serving server by adapting code from the Inception + TF Serving tutorial. I find the BUILD files intimidating enough as it is, and rather than slogging through a lengthy debugging process, I decided to simply edit BUILD to refer to the .cc file, in lieu of also building the python stuff which (as I understand it?) isn't strictly necessary.
However, my functional installation of TF Serving can't be imported into python. With normal TensorFlow you build a .whl file and install it that way; is there something similar you can do with TF Serving? That way I could keep the construction and exporting of models in the realm of the friendly python interactive shell rather than editing it, crossing all available fingers, building in bazel, and then /bazel-bin/path/running/whatever.
Simply adding the directory to my PYTHONPATH has so far been unsuccessful.
Thanks! |
If I have a python program running, can I edit the .py file that it is running from? | 36,563,699 | 4 | 3 | 264 | 0 | python | Yes, Python is not constantly reading the file, the file is only interpreted once per run. The current instance that is already running will not be affected by changes in the script. | 0 | 1 | 0 | 0 | 2016-04-12T04:33:00.000 | 2 | 1.2 | true | 36,563,679 | 1 | 0 | 0 | 2 | If I have a long running process that is running from file.py, can I edit file.py while it is running and run it again, starting a new process and not affect the already running process? |
If I have a python program running, can I edit the .py file that it is running from? | 36,563,817 | 0 | 3 | 264 | 0 | python | Of course you can.
When you run the first process, the code is loaded into memory, like a snapshot. When you then edit the code, you change only the file on disk, not the copy the running process already loaded.
Even when you click save, it makes no change to the code in memory that the first process is using.
But as you say, your program runs for a long time. If you change a module the program hasn't imported yet, it may cause a problem, because a module is only read from disk when its import statement executes. | 0 | 1 | 0 | 0 | 2016-04-12T04:33:00.000 | 2 | 0 | false | 36,563,679 | 1 | 0 | 0 | 2 | If I have a long running process that is running from file.py, can I edit file.py while it is running and run it again, starting a new process and not affect the already running process?
How to make the command-line / interpreter pane/window bigger in pudb? | 36,564,003 | 29 | 18 | 1,627 | 0 | python,pudb | Put the focus in the command-line / interpreter pane (using Ctrl-x).
Use the right-arrow key to put the focus on the Clear button (the background changes color to indicate it is selected).
Now use any of the following commands:
_ (underscore; makes that pane the smallest size possible)
= (equals; makes that pane the largest size possible)
+ (plus; increases the size of that pane with each press)
- (minus; decreases the size of that pane with each press) | 0 | 1 | 0 | 0 | 2016-04-12T05:04:00.000 | 1 | 1.2 | true | 36,564,002 | 0 | 0 | 0 | 1 | Is there any way to resize the command-line / interpreter window/pane in pudb, just like the size of the side pane can be adjusted? |
How to make IDLE window more readable? Font too small for me to see | 37,742,333 | 4 | 2 | 8,590 | 0 | python,window,python-idle | On the top menu, choose Options, then Configure IDLE. The Fonts/Tabs tab will be displayed. There is a Size button. Click on it and select a bigger size than the default of 10. At present, this only affects the font in Shell, editor, and output windows, but that is the main issue for me.
EDIT: correct Options menu entry as suggested by Nathan Wailes. IDLE Preferences was once the title of the resulting dialog, but it is now Settings. | 0 | 1 | 0 | 0 | 2016-04-14T07:42:00.000 | 2 | 0.379949 | false | 36,616,705 | 1 | 0 | 0 | 1 | I can't seem to figure out how to zoom in or anything, and the font is so small. Thank you for your help. I'm using a mac if that helps |
set permission(777) for all the files, subdirectories of a directory in python without any loop | 36,621,405 | 0 | 0 | 317 | 0 | python-2.7,permissions | You can call subprocess and run a normal system command from there:
I did not test this, but it should work:
subprocess.call(["chmod", "-R", "777", "/PATH"]) | 0 | 1 | 0 | 0 | 2016-04-14T11:07:00.000 | 1 | 0 | false | 36,621,256 | 1 | 0 | 0 | 1 | I want to set permission (777) on my directory (including all the files and subdirectories) in one line; I don't want to use any os.walk or for loop
Running processes (mainly python) over several machines | 36,626,600 | 1 | 1 | 157 | 0 | python,multithreading,parallel-processing,job-scheduling | In addition to your "organiser-script" you will need some program/script on each of the other machines that listens on the network for commands from the "organiser-script", starts "workers", and reports when "workers" have finished.
But there are existing solutions for your task. Take a good look around before you start coding. | 0 | 1 | 0 | 0 | 2016-04-14T14:31:00.000 | 1 | 0.197375 | false | 36,626,267 | 1 | 0 | 0 | 1 | I would like to be able to run multiple, typically long processes, over different machines connected over a local network.
Processes would generally be python scripts.
In other words, suppose that I have 100 processes and 5 machines, and I don't want to run more than 10 processes on each machine at the same time.
My "organiser-script" would then start 10 processes per machine, then send the next ones as the first ones end.
Is there any way to do this in python?
Any suggestion would be very much appreciated!
Thank you! |
Streaming values in a python script to a wep app | 36,669,596 | 0 | 0 | 63 | 0 | python,azure,web-applications,azure-webjobs | You would need to provide some more information about what kind of interface your web app exposes. Does it only handle normal HTTP1 requests or does it have a web socket or HTTP2 type interface? If it has only HTTP1 requests that it can handle then you just need to make multiple requests or try and do long polling. Otherwise you need to connect with a web socket and stream the data over a normal socket connection. | 0 | 1 | 0 | 0 | 2016-04-16T20:42:00.000 | 2 | 0 | false | 36,669,500 | 0 | 0 | 1 | 2 | I have a python script that runs continuously as a WebJob (using Microsoft Azure), it generates some values (heart beat rate) continuously, and I want to display those values in my Web App.
I don't know how to proceed to link the WebJob to the web app.
Any ideas?
Streaming values in a python script to a wep app | 36,671,291 | 1 | 0 | 63 | 0 | python,azure,web-applications,azure-webjobs | You have two main options:
You can have the WebJobs write the values to a database or to Azure Storage (e.g. a queue), and have the Web App read them from there.
Or if the WebJob and App are in the same Web App, you can use the file system. e.g. have the WebJob write things into %home%\data\SomeFolderYouChoose, and have the Web App read from the same place. | 0 | 1 | 0 | 0 | 2016-04-16T20:42:00.000 | 2 | 1.2 | true | 36,669,500 | 0 | 0 | 1 | 2 | I have a python script that runs continuously as a WebJob (using Microsoft Azure), it generates some values (heart beat rate) continuously, and I want to display those values in my Web App.
I don't know how to proceed to link the WebJob to the web app.
Any ideas?
Build and use Python without make install | 36,673,452 | 1 | 1 | 1,409 | 0 | python,python-3.x,makefile,installation | If you don't want to copy the binaries you built into a shared location for system-wide use, you should not make install at all. If the build was successful, it will have produced binaries you can run. You may need to set up an environment for making them use local run-time files instead of the system-wide ones, but this is a common enough requirement for developers that it will often be documented in a README or similar (though as always when dealing with development sources, be prepared that it might not be as meticulously kept up to date as end-user documentation in a released version). | 0 | 1 | 0 | 0 | 2016-04-17T05:55:00.000 | 1 | 0.197375 | false | 36,673,170 | 1 | 0 | 0 | 1 | I just downloaded Python sources, unpacked them to /usr/local/src/Python-3.5.1/, run ./configure and make there. Now, according to documentation, I should run make install.
But I don't want to install it somewhere in common system folders, create any links, change or add environment variables, or do anything outside this folder. In other words, I want it to be portable. How do I do it? Will /usr/local/src/Python-3.5.1/python get-pip.py install Pip to /usr/local/src/Python-3.5.1/Lib/site-packages/? Will /usr/local/src/Python-3.5.1/python work properly?
make altinstall, as I understand, still creates links, which is not desired. Is it correct that it creates symbolic links as well but simply doesn't touch /usr/bin/python and man?
Probably, I should do ./configure --prefix=some/private/path and just make and make install, but I still wonder if it's possible to use Python without make install.
Certificate Error while Deploying python code in Google App Engine | 36,673,729 | 1 | 0 | 97 | 0 | python,google-app-engine | Upgrading Python to 2.7.8 or later versions fixed the issue.
EDIT:
Also check if you are using google app engine SDK 1.8.1 or later version. As of version SDK 1.8.1 the cacerts.txt has been renamed to urlfetch_cacerts.txt. You can try removing cacerts.txt file to fix the problem. | 0 | 1 | 0 | 0 | 2016-04-17T07:07:00.000 | 1 | 1.2 | true | 36,673,670 | 0 | 0 | 1 | 1 | I tried deploying python code using google app engine.
But I got the error below:
certificate verify failed
I had included the proxy certificate in urlfetch_cacerts.py and enabled 'validate_certificate' in urlfetch_stub.py by setting _API_CALL_VALIDATE_CERTIFICATE_DEFAULT = True, but I still get the error.
Can you suggest any solution?
Thanks in advance. |
Eclipse pydev warning - "Debugger speedups using cython not found." | 37,348,293 | 5 | 18 | 20,938 | 0 | eclipse,python-3.x,cython,pydev | Simply copy the whole command "/usr/bin/python3.5" "/root/.p2/pool/plugins/org.python.pydev_4.5.5.201603221110/pysrc/setup_cython.py" build_ext --inplace,
paste it into a command-line terminal (typically a bash shell), and press Return. :) | 0 | 1 | 0 | 0 | 2016-04-17T18:23:00.000 | 9 | 0.110656 | false | 36,680,422 | 1 | 0 | 0 | 5 | I get this warning while running a python program (some basic web automation using selenium):
warning: Debugger speedups using cython not found. Run
'"/usr/bin/python3.5"
"/root/.p2/pool/plugins/org.python.pydev_4.5.5.201603221110/pysrc/setup_cython.py"
build_ext --inplace' to build. pydev debugger: starting (pid: 3502)
How can I address this issue? |
Eclipse pydev warning - "Debugger speedups using cython not found." | 50,884,895 | 1 | 18 | 20,938 | 0 | eclipse,python-3.x,cython,pydev | I faced a similar issue while using Python 3.5 and Eclipse PyDev for debugging. When I tried:
>"/usr/bin/python3.5" "/home/frodo/eclipse/plugins/org.python.pydev.core_6.3.3.201805051638/pysrc/setup_cython.py" build_ext --inplace
Traceback (most recent call last):
File "/home/frodo/eclipse/plugins/org.python.pydev.core_6.3.3.201805051638/pysrc/setup_cython.py", line 14, in
from setuptools import setup
ImportError: No module named 'setuptools'
Later I fixed the issue by installing setuptools and the related python3-dev libraries using:
sudo apt-get install python3-setuptools python3-dev
and that resolved the issues while executing the above command. | 0 | 1 | 0 | 0 | 2016-04-17T18:23:00.000 | 9 | 0.022219 | false | 36,680,422 | 1 | 0 | 0 | 5 | I get this warning while running a python program (some basic web automation using selenium):
warning: Debugger speedups using cython not found. Run
'"/usr/bin/python3.5"
"/root/.p2/pool/plugins/org.python.pydev_4.5.5.201603221110/pysrc/setup_cython.py"
build_ext --inplace' to build. pydev debugger: starting (pid: 3502)
How can I address this issue? |