Title (stringlengths 15 to 150) | A_Id (int64 2.98k to 72.4M) | Users Score (int64 -17 to 470) | Q_Score (int64 0 to 5.69k) | ViewCount (int64 18 to 4.06M) | Database and SQL (int64 0 to 1) | Tags (stringlengths 6 to 105) | Answer (stringlengths 11 to 6.38k) | GUI and Desktop Applications (int64 0 to 1) | System Administration and DevOps (int64 1 to 1) | Networking and APIs (int64 0 to 1) | Other (int64 0 to 1) | CreationDate (stringlengths 23 to 23) | AnswerCount (int64 1 to 64) | Score (float64 -1 to 1.2) | is_accepted (bool, 2 classes) | Q_Id (int64 1.85k to 44.1M) | Python Basics and Environment (int64 0 to 1) | Data Science and Machine Learning (int64 0 to 1) | Web Development (int64 0 to 1) | Available Count (int64 1 to 17) | Question (stringlengths 41 to 29k)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Eclipse pydev warning - "Debugger speedups using cython not found." | 53,074,490 | 0 | 18 | 20,938 | 0 | eclipse,python-3.x,cython,pydev | On Ubuntu, I needed to do the following in a terminal:
sudo apt-get install build-essential
sudo apt-get install python3-dev
I then copied the full setup command from the error in Eclipse into my command prompt:
python "/home/mark/.eclipse/360744347_linux_gtk_x86_64/plugins/org.python.pydev.core_6.5.0.201809011628/pysrc/setup_cython.py" build_ext --inplace
It finally compiled and the error message no longer appears. | 0 | 1 | 0 | 0 | 2016-04-17T18:23:00.000 | 9 | 0 | false | 36,680,422 | 1 | 0 | 0 | 5 | I get this warning while running a python program (some basic web automation using selenium):
warning: Debugger speedups using cython not found. Run
'"/usr/bin/python3.5"
"/root/.p2/pool/plugins/org.python.pydev_4.5.5.201603221110/pysrc/setup_cython.py"
build_ext --inplace' to build. pydev debugger: starting (pid: 3502)
How can I address this issue? |
Eclipse pydev warning - "Debugger speedups using cython not found." | 68,233,508 | 0 | 18 | 20,938 | 0 | eclipse,python-3.x,cython,pydev | GNU/Linux / Eclipse 2021-06 / Python 3.6.9, cython installed with apt install cython
Locate setup_cython.py: find <eclipse binary installation> -name setup_cython.py
Execute: python3 "<previous find result>" build_ext --inplace
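If you prefer to do the lookup from Python instead of find, a minimal sketch (the Eclipse install folder below is a hypothetical example; adjust it to your installation):
import os
# Walk the Eclipse installation and print every setup_cython.py found.
for root, dirs, files in os.walk(os.path.expanduser('~/eclipse')):
    if 'setup_cython.py' in files:
        print(os.path.join(root, 'setup_cython.py'))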
That's all folks! | 0 | 1 | 0 | 0 | 2016-04-17T18:23:00.000 | 9 | 0 | false | 36,680,422 | 1 | 0 | 0 | 5 | I get this warning while running a python program (some basic web automation using selenium):
warning: Debugger speedups using cython not found. Run
'"/usr/bin/python3.5"
"/root/.p2/pool/plugins/org.python.pydev_4.5.5.201603221110/pysrc/setup_cython.py"
build_ext --inplace' to build. pydev debugger: starting (pid: 3502)
How can I address this issue? |
Eclipse pydev warning - "Debugger speedups using cython not found." | 36,691,558 | 13 | 18 | 20,938 | 0 | eclipse,python-3.x,cython,pydev | This is as expected. Run "/usr/bin/python3.5" "/root/.p2/pool/plugins/org.python.pydev_4.5.5.201603221110/pysrc/setup_cython.py" build_ext --inplace as it asks to get the debugger accelerations.
(Nb. The error in the comment below was because this answer was missing an initial double quote.)
Ideally run it from within your virtual environment, if you use one, to make sure you run this for the correct Python version. You'll need to run this once per Python version you use. | 0 | 1 | 0 | 0 | 2016-04-17T18:23:00.000 | 9 | 1 | false | 36,680,422 | 1 | 0 | 0 | 5 | I get this warning while running a python program (some basic web automation using selenium):
warning: Debugger speedups using cython not found. Run
'"/usr/bin/python3.5"
"/root/.p2/pool/plugins/org.python.pydev_4.5.5.201603221110/pysrc/setup_cython.py"
build_ext --inplace' to build. pydev debugger: starting (pid: 3502)
How can I address this issue? |
Delete an unfinished file | 36,695,061 | 0 | 0 | 62 | 0 | python,bash,unix,kill-process,diskspace | If you delete a file which is opened in some processes, it's marked as deleted, but the content remains on disk, so that all processes can still read it. Once all processes close the corresponding descriptors (or simply finish), the space will be reclaimed. | 0 | 1 | 0 | 1 | 2016-04-18T13:00:00.000 | 2 | 0 | false | 36,694,745 | 0 | 0 | 0 | 1 | I was writing a huge file output.txt (around 10GB) on a server through a Python script using the f.write(row) command, but because the process was taking too long I decided to interrupt the program using
kill -9 pid
The problem is that this space is still used on the server when I check with the command
df -h
How can I empty the disk occupied by this buffer that was trying to write the file?
The file output.txt was empty (0 bytes) when I killed the script, but I still deleted it anyway using
rm output.txt
but the space on the disk doesn't become free; I still have 10 GB wasted. |
PyCharm recognizes a module but does not import it | 54,055,599 | 1 | 1 | 413 | 0 | python,module,pycharm,pydrive | After noticing that the module is already installed, both by pip and by the project interpreter, and nothing worked, this is what did the trick (finally!):
make sure the module is indeed installed:
sudo pip2 (or pip3) install --upgrade httplib2
locate the module on your computer:
find / | grep httplib2
you will need to reach the place in which pip is installing the module, the path would probably look like this:
/usr/local/lib/python2.7/dist-packages
get into the path specified there, search for the module and copy all the relevant files and folders into your local pycharm project environment. this will be a directory with a path like this:
/home/your_user/.virtualenvs/project_name/lib/python2.7
This is it. Note, however, that you may need to do this multiple times, since each module may have dependencies...
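To see which dependencies a module declares (so you know what else to copy), a small sketch using setuptools' pkg_resources; pydrive is just the example package from this question, and it must already be installed for the lookup to work:
import pkg_resources
# Print the declared requirements of the installed package, so you know
# which other folders to copy into the PyCharm environment.
for requirement in pkg_resources.get_distribution('pydrive').requires():
    print(requirement)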
Good luck! | 0 | 1 | 1 | 0 | 2016-04-19T09:01:00.000 | 1 | 1.2 | true | 36,713,581 | 1 | 0 | 0 | 1 | I am trying to import the PyDrive module in my PyCharm project: from pydrive.auth import GoogleAuth.
I tried different things :
Installing it directly from the project interpreter
Download it with a pip command and import it with the path for the poject interpreter
The same thing in Linux
Nothing works. Each time, PyCharm recognizes the module and even suggests the auto-completion, but when I run the project it keeps saying ImportError: No module named pydrive.auth
Any suggestion ?
EDIT: When I put the pydrive folder directly in my repository, this time I get ImportError: No module named httplib2 from the first import of PyDrive.
My path is correct and httplib2 is again in my PyCharm project |
./build/tools/caffe: No such file or directory | 36,728,171 | 4 | 2 | 7,193 | 0 | bash,python-2.7,machine-learning,neural-network,deep-learning | Follow the instructions below and see if it works:
Open a terminal
cd to caffe root directory
Make sure the file caffe exists by listing the directory with ls ./build/tools
If the file is not present, run make. Repeating step 3 should now list the file.
Run ./build/tools/caffe; the 'No such file' error shouldn't be triggered this time. | 0 | 1 | 0 | 0 | 2016-04-19T14:27:00.000 | 2 | 0.379949 | false | 36,721,348 | 0 | 1 | 0 | 2 | I have a question regarding the command for running the training in Linux. I am using the GoogleNet model in the caffe framework for binary classification of my images. I used the following command to train my dataset
./build/tools/caffe train --solver=models/MyModelGoogLenet/quick_solver.prototxt
But I received this error
bash: ./build/tools/caffe: No such file or directory
How can I resolve this error? Any suggestions would be of great help. |
./build/tools/caffe: No such file or directory | 36,724,914 | 2 | 2 | 7,193 | 0 | bash,python-2.7,machine-learning,neural-network,deep-learning | You should specify absolute paths to all your files and commands, to be on the safer side. If /home/user/build/tools/caffe train still doesn't work, check if you have a build directory in your caffe root. If not, then use /home/user/tools/caffe train instead. | 0 | 1 | 0 | 0 | 2016-04-19T14:27:00.000 | 2 | 1.2 | true | 36,721,348 | 0 | 1 | 0 | 2 | I have a question regarding the command for running the training in Linux. I am using GoogleNet model in caffe framework for binary classification of my images. I used the following command to train my dataset
./build/tools/caffe train --solver=models/MyModelGoogLenet/quick_solver.prototxt
But I received this error
bash: ./build/tools/caffe: No such file or directory
How can I resolve this error? Any suggestions would be of great help. |
Is there a way to stop a command in a docker container | 36,726,873 | 1 | 1 | 1,751 | 0 | python,macos,docker,docker-machine | There are scenarios when I update myprogram.py and need to kill the
command, transfer the updated myprogram.py file to the container, and
execute python myprogram.py again. I imagine this to be a common
scenario.
Not really. The common scenario is either:
Kill existing container
Build new image via your Dockerfile
Boot container from new image
Or:
Start container with a volume mount pointing at your source
Restart the container when you update your code (see the example below)
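For the volume-mount approach, a typical invocation might look like docker run -v "$PWD":/app my-image python /app/myprogram.py (the image name and paths here are hypothetical). With the source mounted from the host, restarting the container picks up the edited file without rebuilding the image.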
Either one works. The second is useful for development, since it has a slightly quicker turnaround. | 0 | 1 | 0 | 0 | 2016-04-19T18:38:00.000 | 2 | 0.099668 | false | 36,726,799 | 0 | 0 | 0 | 1 | I have a docker container that is running a command. In the Dockerfile the last line is CMD ["python", "myprogram.py"] . This runs a flask server.
There are scenarios when I update myprogram.py and need to kill the command, transfer the updated myprogram.py file to the container, and execute python myprogram.py again. I imagine this to be a common scenario.
However, I haven't found a way to do this. Since this is the only command in the Dockerfile, I can't seem to kill it. From the container's terminal, when I run ps -aux I can see that python myprogram.py is assigned a PID of 1. But when I try to kill it with kill -9 1, it doesn't seem to work.
Is there a workaround to accomplish this? My goal is to be able to change myprogram.py on my host machine, transfer the updated myprogram.py into the container, and execute python myprogram.py again. |
How to create a process that should not be listed in the ps -ef command in Linux | 36,773,253 | 1 | 0 | 51 | 0 | python,c | It's impossible without modifying either the kernel or ps itself. A simple process can't hide itself. But you can change the process name by changing argv[0] and mimic another common process, like httpd, sshd, etc. (from Python, the third-party setproctitle package does exactly this) - that's what a lot of malware does. | 0 | 1 | 0 | 0 | 2016-04-21T10:03:00.000 | 1 | 0.197375 | false | 36,766,231 | 0 | 0 | 0 | 1 | I want to create a process that should not be listed in the ps -ef command while the process is running. I need this for testing an Intrusion Detection System (IDS) application in Linux. |
How to reboot a host machine via python script, and on boot resume the script? | 37,038,569 | 2 | 1 | 415 | 0 | linux,python-2.7,debian | One more way is to execute your script at boot time by adding the line below to root's crontab (here /root/simple.py is the script that needs to be executed):
@reboot /usr/bin/python /root/simple.py
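Inside the script itself you can persist a stage marker so the work resumes after the reboot. A minimal sketch, assuming a hypothetical state file at /root/.install_state (pickle is a binary format rather than plaintext, though note it is not encryption):
import os
import pickle

STATE_FILE = '/root/.install_state'  # hypothetical path

# Reload the saved stage, defaulting to 0 on the very first run.
if os.path.exists(STATE_FILE):
    with open(STATE_FILE, 'rb') as f:
        stage = pickle.load(f)
else:
    stage = 0

if stage == 0:
    # ... run the installers here ...
    with open(STATE_FILE, 'wb') as f:
        pickle.dump(1, f)  # remember that stage 0 is done
    os.system('reboot')  # requires root
elif stage == 1:
    # ... continue with the post-reboot steps ...
    os.remove(STATE_FILE)  # clean up once everything is finished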
| 0 | 1 | 0 | 0 | 2016-04-22T21:14:00.000 | 1 | 1.2 | true | 36,803,416 | 0 | 0 | 0 | 1 | I'm writing a Python script on Linux (Debian) to install a few things, reboot, and then do more things.
I'm not sure if it's possible, but would I be able to say,
run all of my installers, reboot and resume my script from where it left off?
I would like for the user to not have to do anything (even log onto the machine, if at all possible)
Oh! Also, is there a way to keep (or store) variables without storing them in plaintext?
Thanks! |
Running python from notepad in cmd | 36,824,295 | 1 | 0 | 45 | 0 | python,python-3.x,cmd,python-import | Save the program with a .py extension. For example: hello.py
Then run it with python <script_name>.py. For example: python hello.py | 0 | 1 | 0 | 0 | 2016-04-24T14:17:00.000 | 1 | 0.197375 | false | 36,824,269 | 1 | 0 | 0 | 1 | I am trying to run a python program
import random
random.random()
written in Notepad on two different lines. I want to run it in cmd. How do I do it? |
How to run python script on Jupyter in the terminal? | 36,839,885 | 2 | 1 | 11,122 | 0 | python,linux,jupyter | You can use the jupyter console -i command to run an interactive jupyter session in your terminal. From there you can run import my_script (the module name, without the .py extension). Do note that this is not the intended use case of either jupyter or the notebook environment. You should run scripts using your normal python interpreter instead. | 0 | 1 | 0 | 0 | 2016-04-25T11:46:00.000 | 3 | 1.2 | true | 36,839,650 | 1 | 0 | 0 | 2 | I want to execute one python script in Jupyter, but I don't want to use the web browser (IPython Interactive terminal); I want to run a single command in the Linux terminal to load & run the python script, so that I can get the output from Jupyter.
I tried to run jupyter notebook %run <my_script.py>, but it seems jupyter doesn't recognize %run.
Is it possible to do that? |
How to run python script on Jupyter in the terminal? | 41,051,815 | 0 | 1 | 11,122 | 0 | python,linux,jupyter | You can run this command to start a Jupyter session (note that jupyter notebook launches the browser-based notebook server rather than a terminal-only session):
jupyter notebook | 0 | 1 | 0 | 0 | 2016-04-25T11:46:00.000 | 3 | 0 | false | 36,839,650 | 1 | 0 | 0 | 2 | I want to execute one python script in Jupyter, but I don't want to use the web browser (IPython Interactive terminal), I want to run a single command in the Linux terminal to load & run the python script, so that I can get the output from Jupyter.
I tried to run jupyter notebook %run <my_script.py>, but it seems jupyter doesn't recognize %run.
Is it possible to do that? |
Is there a way to define a MongoDB schema using Motor? | 36,842,258 | 2 | 2 | 1,449 | 1 | python,mongodb,tornado-motor,motordriver | No there isn't. Motor is a MongoDB driver, it does basic operations but doesn't provide many conveniences. An Object Document Mapper (ODM) library like MongoTor, built on Motor, provides higher-level features like schema validation.
I don't vouch for MongoTor. Proceed with caution. Consider whether you really need an ODM: mongodb's raw data format is close enough to Python types that most applications don't need a layer between their code and the driver. | 0 | 1 | 0 | 0 | 2016-04-25T12:50:00.000 | 2 | 1.2 | true | 36,841,121 | 0 | 0 | 0 | 1 | There is a way to define MongoDB collection schema using mongoose in NodeJS. Mongoose verifies the schema at the time of running the queries.
I have been unable to find a similar thing for Motor in Python/Tornado. Is there a way to achieve a similar effect in Motor, or is there a package which can do that for me? |
Quit LLDB session after a defined amount of time | 36,847,673 | 1 | 0 | 168 | 0 | python,ios,lldb | There isn't such a thing built into lldb, but presumably you could set a timer in Python and have it kill the debug session if that's appropriate.
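A minimal sketch of that timer idea, assuming the debug session is a subprocess your script started (the command line and the 10-minute timeout are hypothetical examples):
import subprocess
import threading

# Start the debug session as a child process.
proc = subprocess.Popen(['ios-deploy', '--debug', '--bundle', 'My.app'])

def kill_session():
    # Time-to-live expired: terminate the lldb/ios-deploy session.
    proc.terminate()

threading.Timer(600, kill_session).start()  # 600 seconds = 10 minutes
proc.wait()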
Note, when you restart the device, the connection from lldb to the remote debug server should close, and lldb should detect that it closed and quit the process. It won't exit when that happens by default, but presumably whatever you have waiting on debugger events can detect the debuggee's exit and exit or whatever you need it to do.
Note, if lldb is waiting on input from debugserver (if the program is running) then it should notice this automatically, since the select call will return with EOF. But if the process is stopped when you close the connection, lldb probably won't notice that till it goes to read something.
In the latter case, you should be able to have lldb react to the stop that indicates the "needle" is found, and kill the debug session by hand. | 0 | 1 | 0 | 0 | 2016-04-25T12:59:00.000 | 1 | 1.2 | true | 36,841,334 | 0 | 0 | 0 | 1 | I have a Program written in python for automated testing on mobile devices (iOS & Android). The proper workflow of this program is as follows (for smoke tests):
Deploy executable to USB-connected device (.ipa or .app) using ios-deploy
Start Application (debugging process) --> writes to stdout.
Write output into Pipe --> this way it is possible to read the output of the debugging process parallel to it.
If the searched needle is detected in the output, the device is restarted (this is quite a dirty workaround, I am going to insert a force-stop method or something similar)
My Problem is: When the needle is detected in the output of the debug process, the lldb session is interrupted, but not exited. To exit the lldb session, I have to reconnect the device or quit terminal and open it again.
Is there a possibility to append something like a "time-to-live" flag to the lldb call to determine how long the lldb session should run until it exits automatically? Another way I can imagine to exit the lldb session is to join the session again after the device is restarted and then exit it, but it seems that lldb is just a subprocess of ios-deploy. Therefore I have not found any way to get access to the lldb process. |
How to create an equivalent of a background thread for an auto-scaling instance | 36,874,638 | 0 | 1 | 153 | 0 | google-app-engine,google-app-engine-python | You can use a cron job that will start a task. In this task, you can call all your instances to clean up expired objects. | 0 | 1 | 0 | 0 | 2016-04-26T19:40:00.000 | 2 | 0 | false | 36,874,278 | 0 | 0 | 1 | 1 | Been reading up a bit on background threads and it seems to only be allowed for backend instance. I have created an LRU instance cache that I want to call period cleanup jobs on to remove all expired objects. This will be used in both frontend and backend instances.
I thought about using deferred or taskqueue but those do not have the option to route a request back to the same instance. Any ideas? |
Run multiple twisted servers? | 36,877,399 | 1 | 0 | 127 | 0 | python,proxy,twisted,reverse-proxy | You can route your scripts with a web framework like Django, Flask, or Web2Py...
Or, if you prefer, you can create a router script to do the routing manually. | 0 | 1 | 0 | 0 | 2016-04-26T22:44:00.000 | 1 | 1.2 | true | 36,877,127 | 0 | 0 | 0 | 1 | Is there some way to run multiple twisted servers simultaneously on the same port, so that they would listen on different paths (for example: example.com/twisted1 is one twisted script, and example.com/twisted2 is another script)?
Some confusions regarding celery in python | 36,891,584 | 0 | 1 | 385 | 0 | python,django,celery | First, just to explain how it works briefly. You have a celery client running in your code. You call tasks.add(1,2) and a new Celery Task is created. That task is transferred by the Broker to the queue. Yes, the queue is persisted in RabbitMQ or SQS. The Celery Daemon is always running and is listening for new tasks. When there is a new task in the queue, it starts a new Celery Worker to perform the work.
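A minimal sketch of that flow (it assumes celery is installed and a RabbitMQ broker is reachable at the default local URL):
from celery import Celery

# The client side: it only serializes tasks and hands them to the broker.
app = Celery('tasks', broker='amqp://localhost')

@app.task
def add(x, y):
    return x + y

# .delay() queues the task; a separate worker process (started with
# 'celery -A tasks worker') picks it up from the queue and executes it.
result = add.delay(1, 2)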
To answer your questions:
The Celery daemon is always running, and it starts the Celery workers.
Yes, RabbitMQ or SQS does the work of the queue.
With the celery monitor you can monitor how many tasks are running, how many are completed, the size of the queue, etc. | 0 | 1 | 0 | 0 | 2016-04-26T23:28:00.000 | 2 | 0 | false | 36,877,581 | 0 | 0 | 0 | 1 | I have divided celery into the following parts
Celery
Celery worker
Celery daemon
Broker: Rabbimq or SQS
Queue
Result backend
Celery monitor (Flower)
My Understanding
When I call a celery task in Django, e.g. tasks.add(1,2), celery adds that task to the queue. I am confused whether that's 4 or 5 in the above list.
When the task goes to the queue, the worker gets that task and deletes it from the queue.
The result of that task is saved in Result Backend
My Confusions
What's the difference between the celery daemon and a celery worker?
Is RabbitMQ doing the work of the queue? Does it mean tasks get saved in RabbitMQ or SQS?
What does Flower do? Does it monitor workers, tasks, queues, or results? |
In Python, will subprocess.call produce an individual subprocess every time it is invoked? | 36,881,946 | 1 | 1 | 56 | 0 | python,subprocess | Yes, a new process is spawned every time you call subprocess.call() or any of its relatives, including Popen(). You do not normally need to explicitly kill the subprocesses; you'd just wait for them to exit (subprocess.call() itself blocks until the child exits, while Popen() returns immediately and you wait() on the object yourself). | 0 | 1 | 0 | 0 | 2016-04-27T06:21:00.000 | 2 | 0.099668 | false | 36,881,830 | 1 | 0 | 0 | 1 | If subprocess.call is invoked N times, I wonder whether N subprocesses will be created or not.
And when will the subprocess close? Should I kill it manually?
What about subprocess.Popen? |
C++ ZeroMQ Single Application with both REQ and REP sockets | 36,903,756 | 1 | 0 | 97 | 0 | python,c++,sockets,zeromq | And this is what happens when you forget to call connect() on the socket... | 0 | 1 | 0 | 1 | 2016-04-28T01:30:00.000 | 1 | 0.197375 | false | 36,903,698 | 0 | 0 | 0 | 1 | I am trying to write an application that uses ZeroMQ to recieve messages from clients. I receive the message from the client in the main loop, and need to send an update to a second socket (general idea is to establish a 'change feed' on objects in the database the application is built on).
Receiving the message works fine, and both sockets are connected without issue. However, sending the request on the outbound port simply hangs, and the test server meant to receive the message does not receive anything.
Is it possible to use both a REQ and REP socket within the same application?
For reference, the main application is C++ and the test server and test client communicating with it are written in Python. They are all running on Ubuntu 14.04. Thanks!
Alex |
Copying a continuously growing file from one server to another in ubuntu bash | 36,922,507 | 3 | 0 | 1,310 | 0 | python,bash,ubuntu | The rsync command is the right out-of-the-box solution to this problem. From the manpage:
It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination. Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.
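For example, a minimal Python wrapper around that idea (the source path, destination host, and 60-second interval are hypothetical):
import subprocess
import time

# Repeatedly mirror the growing CSV directory to the other server;
# rsync's delta-transfer only sends the new bytes on each pass.
while True:
    subprocess.call(['rsync', '-az', '/data/csv/', 'user@backup:/data/csv/'])
    time.sleep(60)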
As the sketch shows, a simple loop of rsync and sleep will do for you. | 0 | 1 | 0 | 0 | 2016-04-28T17:58:00.000 | 2 | 0.291313 | false | 36,922,177 | 0 | 0 | 0 | 1 | I have some csv files which are continuously updated with new entries.
I want to write a script that copies those files to another server continuously, without re-copying data it has already sent.
How can I do that with a bash or Python script?
Thanks, |
How are scheduled Python programs typically ran? | 36,923,390 | 1 | 0 | 64 | 0 | python,scheduled-tasks | The answer to this question will likely depend on your platform, the available facilities and your particular project needs.
First let me address system resources. If you want to use the fewest resources, just call time.sleep(NNN), where NNN is the number of seconds until the next instance of 10AM. time.sleep will suspend execution of your program and should consume zero (or virtually zero) resources. The Python GC may periodically wake up and do maintenance, but its work should be negligible.
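A sketch of that sleep-until-10AM calculation (run_daily_job is a hypothetical placeholder for your function):
import datetime
import time

def seconds_until(hour):
    # Seconds from now until the next occurrence of hour:00.
    now = datetime.datetime.now()
    target = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if target <= now:
        target += datetime.timedelta(days=1)
    return (target - now).total_seconds()

while True:
    time.sleep(seconds_until(10))
    run_daily_job()  # hypothetical: whatever should happen at 10 AM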
If you're on Unix, cron is the typical facility for scheduling future tasks. It implements a fairly efficient Franta–Maly event list manager. It determines, based on the list of tasks, which will occur next and sleeps until then.
On Windows, you have the Schedule Manager. It's a Frankenstein of complexity, but it's incredibly flexible and can handle running missed events due to power outages, laptop hibernation, etc. | 0 | 1 | 0 | 0 | 2016-04-28T18:42:00.000 | 1 | 1.2 | true | 36,923,007 | 1 | 0 | 0 | 1 | Let's say I want to run some function once a day at 10 am.
Do I simply keep a script running in the background forever?
What if I don't want to keep my laptop open/on for many days at a time?
Will the process eat a lot of CPU?
Are the answers to these questions different if I use cron/launchd vs scheduling programmatically? Thanks! |
Run same python code in two terminals, will they interfere with each other? | 36,932,680 | 19 | 15 | 6,112 | 0 | python,ubuntu | If you were writing the output to the same file on disk, then yes, it would be overwritten. However, it seems that you're actually printing to stdout and then redirecting it to a file, so that is not the case here.
Now the answer to your question is simple: there is no interaction between two different executions of the same code. When you execute a program or a script, the OS loads the code into memory and executes it, and subsequent changes to the code have nothing to do with the code that is already running. Technically, a program that is running is called a process. Also, when you run the code in two different terminals there will be two different processes on the OS, one for each of them, and there is no way for two processes to interfere unless you explicitly make them (IPC, or inter-process communication), which you are not doing here.
So, in summary, you can run your code simultaneously in different terminals, and the runs will be completely independent. | 0 | 1 | 0 | 0 | 2016-04-29T07:39:00.000 | 3 | 1.2 | true | 36,932,420 | 1 | 0 | 0 | 2 | I have a Python script which takes a while to finish executing, depending on the passed argument. So if I run it from two terminals with different arguments, do they get their own version of the code? I can't see two .pyc files being generated.
Terminal 1 runs: python prog.py 1000 > out_1000.out
Before the script running in terminal 1 terminates, I start another; thus terminal 2 runs: python prog.py 100 > out_100.out
Or, basically, my question is: could they interfere with each other? |
Run same python code in two terminals, will they interfere with each other? | 36,932,532 | 3 | 15 | 6,112 | 0 | python,ubuntu | Each Python interpreter process is independent. How the script reacts to itself being run multiple times depends on the exact code in use, but in general they should not interfere. | 0 | 1 | 0 | 0 | 2016-04-29T07:39:00.000 | 3 | 0.197375 | false | 36,932,420 | 1 | 0 | 0 | 2 | I have a Python script which takes a while to finish executing, depending on the passed argument. So if I run it from two terminals with different arguments, do they get their own version of the code? I can't see two .pyc files being generated.
Terminal 1 runs: python prog.py 1000 > out_1000.out
Before the script running in terminal 1 terminates, I start another; thus terminal 2 runs: python prog.py 100 > out_100.out
Or, basically, my question is: could they interfere with each other? |
How to check if a git repo was updated without using a git command | 36,946,689 | 1 | 0 | 63 | 0 | python,git | Check .git/FETCH_HEAD for the timestamp and the content: every fetch updates both the file's content and its modification time.
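A minimal sketch of that check (both paths are hypothetical examples):
import os

remote_stamp = '/network/repos/project/.git/FETCH_HEAD'
local_stamp = '/local/cache/project/.last_sync'

# Compare modification times: only rsync when the repo has changed
# since the last synchronization, then refresh the local marker.
if (not os.path.exists(local_stamp)
        or os.path.getmtime(remote_stamp) > os.path.getmtime(local_stamp)):
    # ... run rsync here ...
    open(local_stamp, 'w').close()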
| 0 | 1 | 0 | 1 | 2016-04-29T19:40:00.000 | 1 | 1.2 | true | 36,946,288 | 0 | 0 | 0 | 1 | TL;DR
I would like to be able to check if a git repo (located on a shared network) was updated, without using a git command. I was thinking of checking one of the files located in the .git folder to do so, but I can't find the best file to check. Does anyone have a suggestion on how to achieve this?
Why:
The reason I need to do this is that I have many git repos located on a shared drive. From a Python application I built, I synchronize the content of some of these git repos onto a local drive on a lot of workstations and render nodes.
I don't want to use git because the git server is not powerful enough to support the number of requests that all the computers in the studio would need to perform constantly.
This is why I ended up with the solution of putting the repos on the network server and syncing the repo content to a local cache on each computer using rsync.
That works fine, but as time goes by the repos are getting larger and the rsync is taking too much time. So I would (ideally) like to check one file that tells me whether the local copy is out of sync with the network copy, and perform the rsync only when they are out of sync.
Thanks |
Capture iOS logs to a text file with an automated script | 38,962,272 | 0 | 1 | 860 | 0 | python,ios,appium | Install libimobiledevice, then run the idevicesyslog command from Python and capture the logs. | 0 | 1 | 0 | 0 | 2016-04-30T04:23:00.000 | 3 | 0 | false | 36,950,694 | 0 | 0 | 0 | 3 | I am new to the Mac world. The requirement is to capture iOS logs to a file and grep it for an IP address. Using Appium and python2.7, is there any way to do it without launching Xcode?
Is there any way to automate it?
Any help would be appreciated.
Thanks in Advance!!! |
Capture iOS logs to a text file with an automated script | 36,951,633 | 0 | 1 | 860 | 0 | python,ios,appium | Do you mean capturing the app log? If so, that always happens automatically; you do not need to launch Xcode. It's not about Xcode: it depends on where the log files are written and how you logged them. | 0 | 1 | 0 | 0 | 2016-04-30T04:23:00.000 | 3 | 0 | false | 36,950,694 | 0 | 0 | 0 | 3 | I am new to the Mac world. The requirement is to capture iOS logs to a file and grep it for an IP address. Using Appium and python2.7, is there any way to do it without launching Xcode?
Is there any way to automate it?
Any help would be appreciated.
Thanks in Advance!!! |
Capture iOS logs to a text file with an automated script | 38,728,406 | 0 | 1 | 860 | 0 | python,ios,appium | Installed Apple Configurator 2 on my Mac, then ran the command /usr/local/bin/cfgutil syslog at the command-line prompt to see the log.
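To capture that output to a file from Python, a small sketch (the 30-second capture window is an arbitrary example, and it assumes Apple Configurator 2's cfgutil is installed):
import subprocess
import time

# Stream the device syslog into a file for a while, then stop; the
# file can be grepped for the IP address afterwards.
with open('device.log', 'w') as out:
    proc = subprocess.Popen(['/usr/local/bin/cfgutil', 'syslog'], stdout=out)
    time.sleep(30)
    proc.terminate()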
| 0 | 1 | 0 | 0 | 2016-04-30T04:23:00.000 | 3 | 0 | false | 36,950,694 | 0 | 0 | 0 | 3 | I am new to the Mac world. The requirement is to capture iOS logs to a file and grep it for an IP address. Using Appium and python2.7, is there any way to do it without launching Xcode?
Is there any way to automate it?
Any help would be appreciated.
Thanks in Advance!!! |
BlueZ DBUS API - GATT interfaces unavailable for BLE device | 36,988,374 | 1 | 1 | 1,158 | 0 | python,linux,python-3.x,dbus,bluez | A system update resolved this problem. | 0 | 1 | 0 | 0 | 2016-04-30T15:03:00.000 | 1 | 1.2 | true | 36,956,477 | 0 | 0 | 0 | 1 | I have a BLE device which has a bunch of GATT services running on it. My goal is to access and read data from the service characteristics on this device from a Linux computer (BlueZ version is 5.37). I have enabled experimental mode - therefore, full GATT support should be available. BlueZ's DBUS API, however, only provides the org.bluez.GattManager1 interface for the connected device, and not the org.bluez.GattCharacteristic1 or org.bluez.GattService1 interfaces which I need. Is there something I'm doing wrong? The device is connected and paired, and really I've just run out of ideas as how to make this work, or what may be wrong.
If it helps, I'm using Python and the DBUS module to interface with BlueZ. |
Is there a way to ssh with qpython? | 42,102,991 | 1 | 1 | 650 | 0 | python,qpython | You need a compiler to build the cryptography module, and it is not included. The best option is to get the cross-compiler and then build the module yourself. I don't see any prebuilt ssh/paramiko module for QPython.
Maybe you can try other libs: busybox/ssh, or maybe dropbear for ARM.
Update
I've taken a proper look at the QPython modules, and both OpenSSL and SSH are preinstalled. You don't need to install them.
Still having problems with the Crypto module. I can't understand how useful the ssh module is without the Crypto one... omg.
Update 2
Tried the QPyPI lib manager and found cryptography in the list, but at install time it wasn't found. Couldn't believe how difficult it is to get ssh working with QPython. | 0 | 1 | 0 | 1 | 2016-04-30T21:26:00.000 | 2 | 0.099668 | false | 36,960,431 | 0 | 0 | 0 | 1 | I have run into errors trying to pip install fabric or paramiko (results in a pycrypto install RuntimeError: chmod error).
Is there a way to ssh from within a qpython script? |
How to determine the cause for "BUS-Error" | 37,817,521 | 9 | 9 | 17,329 | 0 | python,linux,embedded | Bus errors are generally caused by applications trying to access memory that hardware cannot physically address. In your case there is a segmentation fault which may cause dereferencing a bad pointer or something similar which leads to accessing a memory address which physically is not addressable. I'd start by root causing the segmentation fault first as the bus error is the secondary symptom. | 0 | 1 | 0 | 1 | 2016-05-01T18:06:00.000 | 2 | 1.2 | true | 36,970,110 | 0 | 0 | 0 | 1 | I'm working on a variscite board with a yocto distribution and python 2.7.3.
I sometimes get a Bus error message from the Python interpreter.
My program runs normally for at least some hours or days before the error occurs.
But once I get it, I get it immediately when I try to restart my program.
I have to reboot before the system works again.
My program uses only a serial port, a bit of USB communication, and some TCP sockets.
I can switch to different hardware and get the same problems.
I also used the python selftest with
python -c "from test import testall"
And I get errors for these two tests
test_getattr (test.test_builtin.BuiltinTest) ... ERROR
test_nameprep (test.test_codecs.NameprepTest) ... ERROR
And the selftest always stops at
test_callback_register_double (ctypes.test.test_callbacks.SampleCallbacksTestCase) ... Segmentation fault
But when the system has run for some hours, the selftest stops earlier at
ctypes.macholib.dyld
Bus error
I checked the RAM with memtester, it seems to be okay.
How can I find the cause of the problems? |
Python Script won't run as a scheduled task on Windows 2008 R2 | 37,008,442 | 1 | 0 | 1,159 | 0 | python,windows,scheduled-tasks | Have you tried using:
Action: Start a Program
Program/script: C:\<path to python.exe>\python.exe
Add arguments: C:\\<path to script>\\script.py | 0 | 1 | 0 | 0 | 2016-05-03T14:00:00.000 | 1 | 1.2 | true | 37,006,366 | 1 | 0 | 0 | 1 | I have multiple recurring tasks scheduled to run several times per day to keep some different data stores in sync with one another. The settings for the 'Actions' tab are as follows:
Action: Start a Program
Program/script: C:\<path to script>.py
Add arguments:
Start in: C:\<directory of script>
I can run the python files just fine if I use the command line and navigate to the file location and use python or even just using python without navigating.
For some reason, the scripts just won't run with a scheduled task. I've checked all over and tried various things like making sure the user profile is set correctly and has all of the necessary privileges, which holds true. These scripts have been working for several weeks now with no problems, so something has changed that we aren't able to identify at this time.
Any suggestions? |
How to make a Windows 10 computer go to sleep with a python script? | 40,618,727 | 0 | 5 | 12,969 | 0 | python,windows-10,sleep | Set the HDD turn-off wait time to 0 in Power Options. | 0 | 1 | 0 | 0 | 2016-05-03T16:43:00.000 | 3 | 0 | false | 37,009,777 | 1 | 0 | 0 | 1 | How can I make a computer sleep with a python script?
It has to be sleep, not hibernate or anything else.
I have tried to use cmd but there is no command to sleep or I didn't find one. |
Shimming pip with pyenv | 61,634,550 | 0 | 4 | 2,224 | 0 | python,pip,pyenv | All of the shim files actually seem to be identical, so you should always be able to do cp ~/.pyenv/shims/{python,pip}. | 0 | 1 | 0 | 0 | 2016-05-04T21:10:00.000 | 1 | 0 | false | 37,038,081 | 0 | 0 | 0 | 1 | How do I get a pip shim in ~/.pyenv/shims. I am using pyenv, but which pip still shows the system version of pip.
Based on the below comment copied from the docs, it appears this should occur through rehashing, but I have run pyenv rehash and nothing happens.
Copied from docs:
Through a process called rehashing, pyenv maintains shims in that directory to match every Python command across every installed version of Python—python, pip, and so on.
Per request in comments here is my PATH
/Users/patrick/google-cloud-sdk/bin:/Users/patrick/.pyenv/shims:/Users/patrick/.pyenv/bin:/Users/patrick/.local/bin:/Users/patrick/npm/bin:/Users/patrick/google_appengine:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/mongodb/bin
and here is my ~/.pyenv/shims content:
$ ll ~/.pyenv/shims/
total 80
drwxr-xr-x 12 patrick staff 408 May 23 09:49 .
drwxr-xr-x 22 patrick staff 748 May 4 18:10 ..
-rwxr-xr-x 1 patrick staff 408 May 4 18:15 2to3
-rwxr-xr-x 1 patrick staff 408 May 4 18:15 idle
-rwxr-xr-x 1 patrick staff 408 May 4 18:15 pydoc
-rwxr-xr-x 1 patrick staff 408 May 4 18:15 python
-rwxr-xr-x 1 patrick staff 408 May 4 18:15 python-config
-rwxr-xr-x 1 patrick staff 408 May 4 18:15 python2
-rwxr-xr-x 1 patrick staff 408 May 4 18:15 python2-config
-rwxr-xr-x 1 patrick staff 408 May 4 18:15 python2.7
-rwxr-xr-x 1 patrick staff 408 May 4 18:15 python2.7-config
-rwxr-xr-x 1 patrick staff 408 May 4 18:15 smtpd.py |
Python versions in worker node and master node vary | 37,040,617 | 1 | 0 | 930 | 0 | python-2.7,apache-spark,apache-spark-1.4 | Did you restart the Spark workers with the new setting? Changing the environment setting just for your driver process is not enough: tasks created by the driver will cross process, sometimes system, boundaries to be executed. Those tasks are compiled bits of code, so that is why both versions need to match. | 0 | 1 | 0 | 0 | 2016-05-05T00:58:00.000 | 1 | 1.2 | true | 37,040,580 | 1 | 0 | 0 | 1 | Running spark 1.4.1 on CentOS 6.7. Have both python 2.7 and python 3.5.1 installed on it with anaconda.
Made sure that the PYSPARK_PYTHON env var is set to python3.5, but when I open the pyspark shell and execute a simple RDD transformation, it errors out with the below exception:
Exception: Python in worker has different version 2.7 than that in driver 3.5, PySpark cannot run with different minor versions
Just wondering what other places there are to change the path. |
gcloud preview app deploy uploads all source code files every time in a Python project, taking a long time | 37,061,546 | 1 | 2 | 1,415 | 0 | python,git,google-app-engine,gcloud-python,google-cloud-python | Yes, this is the expected behaviour: each deployment is standalone, no assumption is made about anything being "already deployed", and all of the app's artifacts are uploaded at every deployment.
Update: Kekito's comment suggests different tools may actually behave differently. My answer applies to the Linux version of the Python SDK, regardless of deploying a new version or re-deploying the same version. | 0 | 1 | 0 | 0 | 2016-05-05T06:02:00.000 | 1 | 0.197375 | false | 37,043,493 | 0 | 0 | 1 | 1 | After I recently updated the gcloud components with gcloud components update to version 108.0.0, I noticed the gcloud preview app deploy app.yaml command has started taking too long every time (about 15 minutes) for my project. Before this it only used to take about a minute to complete.
I figured out that using gcloud preview app deploy --verbosity info app.yaml displays the progress of the deployment process, and I noticed every file in the source code is being uploaded every time I deploy, including the files in the lib directory, which has a number of packages installed, about 2000 files, so this is where the delay is coming from. Since I am new to App Engine, I don't know if this is normal.
The project exists inside a folder of a git repo, and I noticed that after every deploy, 2 files in the default directory, source-context.json and source-contexts.json, are created and contain information about the git repo. I feel that this can somehow be relevant.
I went through a number of relevant questions here but couldn't figure out the issue. It would be great if this could be resolved, if it's an issue at all, because it's a big inconvenience having to wait 15 minutes to deploy every time.
I only started using Google App Engine a month ago, so please don't mind if the question is incorrect. Please let me know if additional info is needed to resolve this. Thanks
UPDATE: I am using gcloud sdk on ubuntu 14.04 LTS. |
How to switch to a different python version in Atom? | 37,202,373 | 2 | 3 | 2,292 | 0 | python,interpreter,atom-editor | Finally found the solution. In the folder '/Library/Frameworks/Python.framework/Versions/3.5/bin/', python is named python3.5. So all I needed was to drop the '3.5' and use 'python' alone. | 0 | 1 | 0 | 0 | 2016-05-05T21:02:00.000 | 1 | 0.379949 | false | 37,060,219 | 1 | 0 | 0 | 1 | I need to use Python 3.5 instead of 2.7. But I cannot find any 'run options' or 'interpreter configurations' in Atom. My current interpreter is Python 2.7 in '/Library/Frameworks/Python.framework/Versions/2.7/bin/python'. I have installed 3.5 which is in '/Library/Frameworks/Python.framework/Versions/3.5/bin/python'.
Besides, I am using Mac OSX.
Thanks in advance! |
How to use pg_restore on Windows Command Line? | 37,104,332 | 5 | 4 | 10,221 | 1 | python,django,database,postgresql,heroku | Since you're on windows, you probably just don't have pg_restore on your path.
You can find pg_restore in the bin of your postgresql installation e.g. c:\program files\PostgreSQL\9.5\bin.
You can navigate to the correct location, or simply add it to your PATH so you won't need to navigate there every time. | 0 | 1 | 0 | 0 | 2016-05-08T19:55:00.000 | 1 | 0.761594 | false | 37,104,193 | 0 | 0 | 0 | 1 | I have downloaded a PG database backup from my Heroku app; it's in my repository folder as latest.dump
I have installed postgres locally, but I can't use pg_restore on the windows command line, I need to run this command:
pg_restore --verbose --clean --no-acl --no-owner -j 2 -h localhost -d DBNAME latest.dump
But the command is not found! |
Where does Anaconda Python install on Windows? | 56,714,055 | 6 | 99 | 246,046 | 0 | python,windows,anaconda,pydev | where conda
F:\Users\christos\Anaconda3\Library\bin\conda.bat
F:\Users\christos\Anaconda3\Scripts\conda.exe
F:\Users\christos\Anaconda3\condabin\conda.bat
F:\Users\christos\Anaconda3\Scripts\conda.exe --version
conda 4.6.11
This worked for me. | 0 | 1 | 0 | 0 | 2016-05-09T13:52:00.000 | 11 | 1 | false | 37,117,571 | 1 | 0 | 0 | 2 | I installed Anaconda for Python 2.7 on my Windows machine and wanted to add the Anaconda interpreter to PyDev, but quick googling couldn't find the default install location, and searching SO didn't turn up anything useful, so:
Where does Anaconda 4.0 install on Windows 7? |
Where does Anaconda Python install on Windows? | 51,557,560 | 2 | 99 | 246,046 | 0 | python,windows,anaconda,pydev | With the Anaconda prompt, python is available, but in any other command window python is an unknown program. Apparently the Anaconda installation does not update the PATH for the python executable. | 0 | 1 | 0 | 0 | 2016-05-09T13:52:00.000 | 11 | 0.036348 | false | 37,117,571 | 1 | 0 | 0 | 2 | I installed Anaconda for Python 2.7 on my Windows machine and wanted to add the Anaconda interpreter to PyDev, but quick googling couldn't find the default install location, and searching SO didn't turn up anything useful, so:
Where does Anaconda 4.0 install on Windows 7? |
Python script to detect USBs in java GUI | 37,133,845 | 0 | 0 | 125 | 0 | java,python,javafx,automation,libusb | No, in most (Windows) scenarios this will not work. The problem is that libusb on Windows uses a special backend (libusb0.sys, libusbK.sys or winusb.sys). You have to install one of those backends (libusb-win32 is libusb0.sys) on every machine you want your software to run on. Under Linux this should work fine out of the box.
Essentially you have to ship the files you generate with inf_wizard.exe with your software and install the inf (needs elevated privileges) before you can use the device with your software. | 0 | 1 | 0 | 0 | 2016-05-09T22:28:00.000 | 1 | 0 | false | 37,126,446 | 0 | 0 | 1 | 1 | I'm making a java GUI application (javafx) that calls a python script (python2.7) which detects connected devices. The reason for this is so I can automate my connections with multiple devices.
In my python script, I use pyusb. However to detect a device, I have to use inf_wizard.exe from libusb-win32 to communicate with the device. This is fine for my own development and debugging, but what happens if I wish to deploy this app and have other users use this?
Would this app, on another computer, be able to detect a device?
Thanks
Please let me know if there is a better way to doing this. |
#python3.x Importing in Shell | 43,708,606 | 0 | 0 | 46 | 0 | shell,python-3.x,import | You have to be in the directory in which the file is located in order to import it from another script or from the interactive shell. So you should either put the script trying to import ch09 in the same folder as ch09.py, or use os.chdir to change into that directory from within Python.
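A minimal sketch of the os.chdir route (the folder path is a hypothetical example):
import os
import sys

os.chdir('/home/josh/python-exercises')  # the folder containing ch09.py
sys.path.insert(0, os.getcwd())          # make sure the folder is importable
import ch09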
| 0 | 1 | 0 | 1 | 2016-05-12T11:55:00.000 | 1 | 0 | false | 37,186,159 | 1 | 0 | 0 | 1 | I looked around the web and couldn't really find anything; I guess I'm searching wrong.
I am trying to import a file I built.
In cmd, to use it, I ran a cd command and then just used it.
In shell it keeps on telling me:
[ Traceback (most recent call last): File "<stdin>", line 1, in <module>
from ch09 import * ImportError: No module named 'ch09' ]
(I'm just learning Python myself, hence ch09.)
Please, if someone can help me with this: both in cmd (ideally without having to use cd, though that is fine) and, more importantly, in the shell.
Thanks, Josh. |
Ubuntu, how to install OpenCV for python3? | 37,188,746 | 1 | 27 | 54,785 | 0 | python,opencv,ubuntu | This is because you have multiple installations of python on your machine. You should make python3 the default, because python2.7 is the default otherwise. | 0 | 1 | 0 | 0 | 2016-05-12T13:37:00.000 | 9 | 0.022219 | false | 37,188,623 | 1 | 0 | 0 | 1 | I want to install OpenCV for python3 in Ubuntu 16.04. First I tried running sudo apt-get install python3-opencv, which is how I pretty much install all of my Python software. This could not find a repository. The install does work, however, if I do sudo apt-get install python-opencv; the issue with this is that by not adding the three to python it installs for Python 2, which I do not use. I would really prefer not to have to build and install from source, so is there a way I can get a repository? I also tried installing it with pip3 and it could not find it either. |
How to run celery on windows? | 54,003,787 | 7 | 42 | 46,000 | 0 | python,celery | I have run celery task using RabbitMQ server.
RabbitMQ is better and simpler than the Redis broker.
While running celery, use this command: "celery -A project-name worker --pool=solo -l info"
and avoid this command "celery -A project-name worker --loglevel info" | 0 | 1 | 0 | 0 | 2016-05-16T13:49:00.000 | 12 | 1 | false | 37,255,548 | 0 | 0 | 0 | 5 | How to run celery worker on Windows without creating Windows Service? Is there any analogy to $ celery -A your_application worker? |
How to run celery on windows? | 64,753,882 | 7 | 42 | 46,000 | 0 | python,celery | Run Celery with the --pool=solo argument.
Example:
celery -A your-application worker -l info --pool=solo | 0 | 1 | 0 | 0 | 2016-05-16T13:49:00.000 | 12 | 1 | false | 37,255,548 | 0 | 0 | 0 | 5 | How to run celery worker on Windows without creating Windows Service? Is there any analogy to $ celery -A your_application worker? |
How to run celery on windows? | 37,277,253 | 5 | 42 | 46,000 | 0 | python,celery | It's done the same way as on Linux. Changing directory to the module containing the celery task and calling "c:\python\python" -m celery -A module.celery worker worked well. | 0 | 1 | 0 | 0 | 2016-05-16T13:49:00.000 | 12 | 1.2 | true | 37,255,548 | 0 | 0 | 0 | 5 | How to run celery worker on Windows without creating Windows Service? Is there any analogy to $ celery -A your_application worker? |
How to run celery on windows? | 60,682,458 | 4 | 42 | 46,000 | 0 | python,celery | You can still use Celery 4.0+ with Windows 10+.
Just use this command "celery -A project worker --pool=solo -l info" instead of "celery -A project worker -l info" | 0 | 1 | 0 | 0 | 2016-05-16T13:49:00.000 | 12 | 0.066568 | false | 37,255,548 | 0 | 0 | 0 | 5 | How to run celery worker on Windows without creating Windows Service? Is there any analogy to $ celery -A your_application worker? |
How to run celery on windows? | 68,523,385 | 3 | 42 | 46,000 | 0 | python,celery | You can run celery on windows without an extra library by using threads
celery -A your_application worker -P threads | 0 | 1 | 0 | 0 | 2016-05-16T13:49:00.000 | 12 | 0.049958 | false | 37,255,548 | 0 | 0 | 0 | 5 | How to run celery worker on Windows without creating Windows Service? Is there any analogy to $ celery -A your_application worker? |
django-admin command not working in Mac OS | 37,266,854 | 0 | 4 | 13,509 | 0 | python,django | You need to add django to your path variables and then restart the terminal. | 0 | 1 | 0 | 0 | 2016-05-16T15:51:00.000 | 6 | 0 | false | 37,258,045 | 0 | 0 | 1 | 3 | I started Django on Mac OS and after installing Django using pip, I tried to initiate a new project using the command django-admin startproject mysite. I get the error -bash: django-admin: command not found. I made a quick search on Google and haven't found a solution that works.
How do I start a new Django project using django-admin? |
django-admin command not working in Mac OS | 48,470,351 | 0 | 4 | 13,509 | 0 | python,django | I know I'm jumping in a little late, but my installations seem to all reside away from /usr/local/bin/... . What worked for me was adding an export path in bash_profile for my django installation.
This also made me realize that it was installed globally. From what I've heard, it's better to install django locally within a venv as you work on different projects. That way each virtual environment can contain its own versions and dependencies for django (and whatever else you're using). Big thanks to @Arefe. | 0 | 1 | 0 | 0 | 2016-05-16T15:51:00.000 | 6 | 0 | false | 37,258,045 | 0 | 0 | 1 | 3 | I started Django on Mac OS and after installing Django using pip, I tried to initiate a new project using the command django-admin startproject mysite. I get the error -bash: django-admin: command not found. I made a quick search on Google and haven't found a solution that works.
How do I start a new Django project using django-admin? |
django-admin command not working in Mac OS | 37,266,721 | 6 | 4 | 13,509 | 0 | python,django | I solved the issue after reading a webpage about the mentioned issue.
In the Python shell, write the following:
>> import django
>> django.__file__
>> django (just evaluating the module also shows its location)
It will provide the installation location of django.
Create a symlink at the new path /usr/local/bin/django-admin.py:
sudo ln -s <the complete path of django-admin.py> /usr/local/bin/django-admin.py
On Mac OS, the call needs to be django-admin.py startproject mysite rather than django-admin startproject mysite. | 0 | 1 | 0 | 0 | 2016-05-16T15:51:00.000 | 6 | 1 | false | 37,258,045 | 0 | 0 | 1 | 3 | I started Django on Mac OS and after installing Django using pip, I tried to initiate a new project using the command django-admin startproject mysite. I get the error -bash: django-admin: command not found. I made a quick search on Google and haven't found a solution that works.
How do I start a new Django project using django-admin? |
App engine vs Compute engine : django project | 37,267,336 | 4 | 2 | 314 | 0 | python,django,google-app-engine,google-compute-engine | DigitalOcean is IaaS (infrastructure as a service). I guess the corresponding offer from Google is Google Compute Engine, GCE.
Google App Engine is more like Heroku, a PaaS offer (platform as a service).
In practice, what is the difference between PaaS and IaaS?
IaaS: if you have a competent system administrator on your team, he will probably choose IaaS, because this kind of service gives him more control at the cost of more decisions and setup - but that is his job.
PaaS: if you are willing to pay more (like double) to avoid most of the management work and don't mind a more opinionated platform, then a PaaS may be the right product for you. You are a programmer and just want to deploy your code (and you are happy to pay extra in order to avoid dealing with those dickheads in operations).
Probably you can find a more elegant comparison if you google for it. | 0 | 1 | 0 | 0 | 2016-05-17T03:13:00.000 | 1 | 1.2 | true | 37,266,479 | 0 | 0 | 1 | 1 | I am trying to transfer my current server from DO to GCP but not sure what to use between App engine and Compute engine.
Currently using:
django 1.8
postgres (connected using psycopg2)
python 2.7
Thanks in advance! |
How to get bandwidth from ifconfig eth0 on Win7 | 37,288,046 | 0 | 0 | 205 | 0 | python-2.7,windows-7,bandwidth,ifconfig | Problem solved: run C:\windows\system32\ipconfig in cmd. | 0 | 1 | 0 | 0 | 2016-05-17T08:16:00.000 | 1 | 0 | false | 37,270,679 | 0 | 0 | 0 | 1 | I'm still confused about finding the bandwidth value using ifconfig eth0, because it's very different in the tutorials I see for Linux, while my PC runs Windows 7.
Is there any other way to find the bandwidth value in Windows 7?
I must get the byte counts from ifconfig to convert them to Kbps. |
Does Google App Engine keep code snapshots of past deployments? | 37,287,718 | 1 | 0 | 26 | 0 | python,google-app-engine | You can only see which version of your app was deployed and when - unless you deleted the older version. | 0 | 1 | 0 | 0 | 2016-05-17T23:09:00.000 | 2 | 1.2 | true | 37,287,631 | 0 | 0 | 1 | 1 | I've deployed code changes to a GAE app under development and broken the app. My IDE isn't tracking history to the point in time where things still worked, and I didn't commit my code to a repo as often as I updated the app, so I can't be sure what the state of the deployed code was at the point in time when it was working, though I do know a date when it was working. Is there a way to either:
Rollback the app to a specific date?
See what code was deployed at a specific deployment or date?
I see that deployments are logged - I'm hoping that GAE keeps a copy of code for each deployment allowing me to at least see the code or diffs. |
Can I generate a python executable file on my Mac that can be used on Windows | 37,292,590 | 3 | 2 | 2,758 | 0 | python,windows,macos,wxpython,cross-platform | You cannot generate Windows executable files on OS X. You must compile the program on the platform you want it to run on. If you own a copy of Windows, you could run a virtual machine on your Mac and compile it there. | 0 | 1 | 0 | 0 | 2016-05-18T07:01:00.000 | 3 | 1.2 | true | 37,292,381 | 1 | 0 | 0 | 1 | I am programming Python on my Mac laptop; however, the final executable will be run on end users' Windows systems, and those Windows systems don't have a Python environment set up.
Can I generate an executable file on my Mac laptop that Windows users can run directly on a Windows system?
I looked at py2exe, but it seems it must build on Windows in order to produce an exe that runs on Windows. |
OSX El Capitan python install cryptography fail | 41,800,751 | 0 | 0 | 930 | 0 | python,macos | To install the cryptography package on Mac OS El Capitan, as explained in the cryptography installation docs:
env LDFLAGS="-L$(brew --prefix openssl)/lib" CFLAGS="-I$(brew --prefix openssl)/include" pip install cryptography | 0 | 1 | 0 | 0 | 2016-05-18T11:52:00.000 | 3 | 0 | false | 37,298,803 | 0 | 0 | 0 | 1 | I tried
sudo pip install cryptography
And the error message is
Collecting cryptography
Using cached cryptography-1.3.2-cp27-none-macosx_10_6_intel.whl
Requirement already satisfied (use --upgrade to upgrade): cffi>=1.4.1 in /Library/Python/2.7/site-packages (from cryptography)
Requirement already satisfied (use --upgrade to upgrade): pyasn1>=0.1.8 in /Library/Python/2.7/site-packages (from cryptography)
Collecting setuptools>=11.3 (from cryptography)
Using cached setuptools-21.0.0-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): six>=1.4.1 in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from cryptography)
Requirement already satisfied (use --upgrade to upgrade): idna>=2.0 in /Library/Python/2.7/site-packages (from cryptography)
Requirement already satisfied (use --upgrade to upgrade): ipaddress in /Library/Python/2.7/site-packages (from cryptography)
Requirement already satisfied (use --upgrade to upgrade): enum34 in /Library/Python/2.7/site-packages (from cryptography)
Requirement already satisfied (use --upgrade to upgrade): pycparser in /Library/Python/2.7/site-packages (from cffi>=1.4.1->cryptography)
Installing collected packages: setuptools, cryptography
Found existing installation: setuptools 1.1.6
Uninstalling setuptools-1.1.6:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip/commands/install.py", line 317, in run
prefix=options.prefix_path,
File "/Library/Python/2.7/site-packages/pip/req/req_set.py", line 736, in install
requirement.uninstall(auto_confirm=True)
File "/Library/Python/2.7/site-packages/pip/req/req_install.py", line 742, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip/req/req_uninstall.py", line 115, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip/utils/init.py", line 267, in renames
shutil.move(old, new)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 299, in move
copytree(src, real_dst, symlinks=True)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 208, in copytree
raise Error, errors
Error: [('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py', '/tmp/pip-p7Ywro-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py', "[Errno 1] Operation not permitted: '/tmp/pip-p7Ywro-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc', '/tmp/pip-p7Ywro-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc', "[Errno 1] Operation not permitted: '/tmp/pip-p7Ywro-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py', '/tmp/pip-p7Ywro-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py', "[Errno 1] Operation not permitted: '/tmp/pip-p7Ywro-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc', '/tmp/pip-p7Ywro-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc', "[Errno 1] Operation not permitted: '/tmp/pip-p7Ywro-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib', '/tmp/pip-p7Ywro-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib', "[Errno 1] Operation not permitted: '/tmp/pip-p7Ywro-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib'")]
Then I searched some posts and tried
brew install pkg-config libffi openssl
Warning: pkg-config-0.28 already installed
Warning: libffi-3.0.13 already installed
Warning: openssl-1.0.2d_1 already installed
and
CFLAGS="-I/usr/local/opt/openssl/include" sudo pip install cryptography==0.8
I got this error message:
src/cryptography/hazmat/bindings/__pycache__/_Cryptography_cffi_f3e4673fx399b1113.c:217:10: fatal error: 'openssl/aes.h' file not found
#include <openssl/aes.h>
^
1 error generated.
error: command 'cc' failed with exit status 1
Command "/usr/bin/python -u -c "import setuptools, tokenize;file='/private/tmp/pip-build-MxT6op/cryptography/setup.py';exec(compile(getattr(tokenize, 'open', open)(file).read().replace('\r\n', '\n'), file, 'exec'))" install --record /tmp/pip-G6b8Y_-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/tmp/pip-build-MxT6op/cryptography/
I also tried
brew install pkg-config libffi openssl
env LDFLAGS="-L$(brew --prefix openssl)/lib" CFLAGS="-I$(brew --prefix openssl)/include" pip install cryptography
and got this
Found existing installation: setuptools 1.1.6
Uninstalling setuptools-1.1.6:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip/commands/install.py", line 317, in run
prefix=options.prefix_path,
File "/Library/Python/2.7/site-packages/pip/req/req_set.py", line 736, in install
requirement.uninstall(auto_confirm=True)
File "/Library/Python/2.7/site-packages/pip/req/req_install.py", line 742, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip/req/req_uninstall.py", line 115, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip/utils/init.py", line 267, in renames
shutil.move(old, new)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 299, in move
copytree(src, real_dst, symlinks=True)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 208, in copytree
raise Error, errors
Error: [('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py', '/tmp/pip-aYpqDT-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py', "[Errno 1] Operation not permitted: '/tmp/pip-aYpqDT-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc', '/tmp/pip-aYpqDT-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc', "[Errno 1] Operation not permitted: '/tmp/pip-aYpqDT-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py', '/tmp/pip-aYpqDT-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py', "[Errno 1] Operation not permitted: '/tmp/pip-aYpqDT-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc', '/tmp/pip-aYpqDT-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc', "[Errno 1] Operation not permitted: '/tmp/pip-aYpqDT-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib', '/tmp/pip-aYpqDT-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib', "[Errno 1] Operation not permitted: '/tmp/pip-aYpqDT-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib'")]
Please help me to get this fixed. Thanks a lot. |
Deploying app to Google App Engine (python) | 44,471,398 | 0 | 0 | 52 | 0 | google-app-engine-python | Create the project in your Google App Engine account by specifying the application identifier and title (say you have given your application identifier as helloworld). Then go to the Google App Engine Launcher, make the application name in your app.yaml match the identifier you created in your account, and deploy it. | 0 | 1 | 0 | 0 | 2016-05-18T19:44:00.000 | 2 | 0 | false | 37,308,794 | 0 | 0 | 1 | 1 | I'm new to this and trying to deploy my first app to App Engine. However, when I try to, I get this message:
"This application does not exist (app_id=u'udacity')."
I fear it might have to do with the app.yaml file so i'll just leave here what i have there:
application: udacity
version: 1
runtime: python27
api_version: 1
threadsafe: yes
handlers:
- url: /favicon.ico
static_files: favicon.ico
upload: favicon.ico
- url: /.*
script: main.app
libraries:
- name: webapp2
version: "2.5.2"
Thanks in advance. |
'From/import' is not recognized as an internal or external command, operable program or batch file | 68,398,311 | 0 | 2 | 39,591 | 0 | python,machine-learning,scikit-learn,windows-10,anaconda | I had the same issue. The "not recognized as an internal or external command" error can be solved in just 3 steps:
(Before doing this, make sure you unhide any hidden folders.)
Find the path of the Scripts folder on your local computer (e.g. C drive --> Users --> (a folder with your name) --> AppData --> Local --> and keep going until you find Scripts).
Copy that path.
Open "Edit the system environment variables" --> go to Path --> add new --> paste the copied Scripts path and click OK. Close all the prompts and run again.
It worked for me | 0 | 1 | 0 | 0 | 2016-05-18T23:57:00.000 | 3 | 0 | false | 37,311,877 | 1 | 0 | 0 | 2 | I'm having trouble importing Machine Learning algorithms from scikit-learn.
I have it installed but whenever I type for example "from sklearn.naive_bayes import GaussianNB" it says " 'from' is not recognized as an internal or external command, operable program or batch file.
I'm using Anaconda on Windows 10. Is it a compatibility issue? Am I missing something? I don't know; I'm still new to Python, so I feel lost. Thanks
'From/import' is not recognized as an internal or external command, operable program or batch file | 37,311,900 | 3 | 2 | 39,591 | 0 | python,machine-learning,scikit-learn,windows-10,anaconda | That needs to be run in the Python REPL, not at a command line. Be sure to start one before typing Python statements. | 0 | 1 | 0 | 0 | 2016-05-18T23:57:00.000 | 3 | 0.197375 | false | 37,311,877 | 1 | 0 | 0 | 2 | I'm having trouble importing Machine Learning algorithms from scikit-learn.
I have it installed but whenever I type for example "from sklearn.naive_bayes import GaussianNB" it says " 'from' is not recognized as an internal or external command, operable program or batch file.
I'm using Anaconda on Windows 10. Is it a compatibility issue? Am I missing something? I don't know; I'm still new to Python, so I feel lost. Thanks
How to configure the Jenkins ShiningPanda plugin Python Installations | 37,616,786 | 3 | 7 | 4,515 | 0 | python,plugins,jenkins | As far as my experiments with Jenkins and Python go, the ShiningPanda plug-in doesn't install Python on slave machines; in fact, it uses the existing Python installation set in the Jenkins configuration to run Python commands.
In order to install Python on slaves, I would recommend using the Python virtual environment support that comes along with ShiningPanda, which allows you to run the Python commands and then close the virtual environment. | 0 | 1 | 0 | 1 | 2016-05-19T16:12:00.000 | 2 | 1.2 | true | 37,328,773 | 0 | 0 | 0 | 1 | The Jenkins ShiningPanda plugin provides a Manage Jenkins - Configure System setting for Python installations... which includes the ability to Install automatically. This should allow me to automatically set up Python on my slaves.
But I'm having trouble figuring out how to use it. When I use the Add Installer drop down it gives me the ability to
Extract .zip/.tar.gz
Run Batch Command
Run Shell Command
But I can't figure out how people use these options to install Python, especially as I need to install Python on Windows, Mac, & Linux.
Other Plugins like Ant provide an Ant installations... which installs Ant automatically. Is this possible with Python? |
Jupyterhub terminal path variable | 37,334,386 | 2 | 2 | 600 | 0 | ipython,jupyter,jupyter-notebook,jupyterhub,jupyter-console | Turns out that when you edit /etc/bash.bashrc on the machine to include the PATH variable, it then persists in the Jupyterhub terminal | 0 | 1 | 0 | 0 | 2016-05-19T21:18:00.000 | 1 | 0.379949 | false | 37,334,162 | 0 | 0 | 0 | 1 | When I open a terminal on JupyterHub and try to run commands that I've placed in the machine's path, it says "command not found."
The default PATH variable in the JupyterHub terminal seems to be /sbin:/usr/sbin:/bin:/usr/bin:/usr/local/bin, and I'm wondering why it doesn't use the PATH as defined on the machine. Is there a way to get it to inherit this PATH, or to run some command like source /etc/environment whenever a terminal is generated?
Python IDLE giving startup error | 48,363,806 | 0 | 0 | 566 | 0 | python-2.7,kivy,startup,python-idle | A common reason for the subprocess startup error is a new Python file that you created recently with the same name as an existing library or module,
e.g. 're.py', 'os.py', etc.,
because re and os are predefined libraries.
Just go and find the file and rename it;
hopefully that will resolve it | 1 | 1 | 0 | 0 | 2016-05-20T12:32:00.000 | 1 | 0 | false | 37,346,844 | 0 | 0 | 0 | 1 | I have just installed Kivy in Python 2.7.11. After installing it, whenever I try to open IDLE, it gives a subprocess startup error.
Actually, I installed Kivy on my Windows 7 PC through the command prompt. After installation, I copied my Kivy programs from my Android tablet to run them on my PC. When I tried to open them, IDLE didn't respond for some time and then gave the startup error. Since then, IDLE has not been starting.
But it is quite strange that, on running a Python built-in module, there is no error.
I have reinstalled Python but there is still no change.
Can I tell where my django app is mounted in the url hierarchy? | 37,352,987 | 0 | 1 | 33 | 0 | python,django,url-routing | After doing more research and talking with coworkers, I realized that reverse does exactly what I need. | 0 | 1 | 0 | 0 | 2016-05-20T16:15:00.000 | 1 | 0 | false | 37,351,300 | 0 | 0 | 1 | 1 | I need to redirect my clients to another endpoint in my django app. I know I can use relative urls with request.build_absolute_uri() to do this, but I am searching for a generic solution that doesn't require the redirecting handler to know its own place in the URL hierarchy.
As an example, I have handlers at the following two URLs:
https://example.com/some/other/namespace/MY_APP/endpoint_one
https://example.com/some/other/namespace/MY_APP/foo/bar/endpoint_two
Both handlers need to redirect to this URL:
https://example.com/some/other/namespace/MY_APP/baz/destination_endpoint
I would like for endpoint_one and endpoint_two to both be able to use the exact same logic to redirect to destination_endpoint.
My app has no knowledge of the /some/other/namespaces/ part of the URL, and that part of the URL can change depending on the deployment (or might not be there at all in a development environment).
I know I could use different relative urls from each endpoint, and redirect to the destination URL. However, that required that the handlers for endpoint_one and endpoint_two know their relative position in the URL hierarchy, which is something I am trying to avoid. |
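For illustration, the reverse-based solution from the accepted answer might look like this sketch; the URL name 'destination_endpoint' is assumed to be whatever the destination route is registered under:
# views.py - reverse() resolves the full mounted path, prefix included
from django.core.urlresolvers import reverse  # django.urls in newer Django
from django.shortcuts import redirect

def endpoint_one(request):
    return redirect(reverse('destination_endpoint'))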
asyncio logging for cluster app | 37,386,212 | 1 | 0 | 382 | 0 | python-3.x,logging,cluster-computing | A simple solution is to ask for logging to log on syslog (typically /dev/log, which won't block your application), locally (so your application is not bound to your logging system: it's still portable), then let rsyslog (I prefer syslogng personally) transmit them to a main log server.
Another solution is to use a tool like logstash to push your logs to an elasticsearch server / cluster so you can browse and graph them easily. In this case, if your log lines are json objects, it's a big win because elasticsearch-side (typically via kibana), you'll be able to query, filter, and aggregate on fields of your json documents. Typically graphing info vs warnings vs errors, frequency of errors per file, or per user, etc... | 0 | 1 | 0 | 0 | 2016-05-20T17:14:00.000 | 1 | 1.2 | true | 37,352,294 | 1 | 0 | 0 | 1 | Need advice on the organization of logging in a clustered application written in Python(asyncio). Applications use the logging module and store logs local file.
Viewing the logs across 3 servers is uncomfortable.
I would like to use rsyslog, but there are fears that it will block the application. Another way is to use aioredis (push to a channel) and another application to collect the data into a single file.
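For illustration, the syslog route from the answer boils down to attaching a SysLogHandler; a minimal sketch assuming the usual /dev/log socket on Linux:
import logging
from logging.handlers import SysLogHandler

logger = logging.getLogger('cluster-app')  # illustrative logger name
logger.setLevel(logging.INFO)
logger.addHandler(SysLogHandler(address='/dev/log'))  # local socket, won't block on the network
logger.info('worker started')
rsyslog (or syslog-ng) then forwards these records to the central log server.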
I need to run a python script in IDLE from a BAT file | 37,354,377 | 2 | 0 | 1,630 | 0 | python,python-2.7,batch-file,ipython,python-idle | Why not run the script directly from the command line using "python script_file.py"? | 0 | 1 | 0 | 0 | 2016-05-20T18:51:00.000 | 3 | 1.2 | true | 37,353,915 | 1 | 0 | 0 | 1 | I have a python script which creates some images. The script runs in IPython IDLE as expected; however, when I call IDLE from the cmd line and include the name of the script, the script loads but does not execute. If I hit F5 (Run Module) then the program runs, but I wonder if it is possible to make the script run without having to press F5.
Does Linux pymssql use ODBC? | 37,366,489 | 0 | 0 | 184 | 1 | python,pymssql | No, pymssql does not use ODBC. | 0 | 1 | 0 | 0 | 2016-05-20T23:34:00.000 | 1 | 0 | false | 37,357,318 | 0 | 0 | 0 | 1 | I want to use pymssql on a 24/7 Linux production app and am worried about stability. As soon as I hear ODBC I start to have reservations, especially on Linux.
Does pymssql use ODBC or is it straight to freeTDS? |
The term 'python' is not recognized as the name of a cmdlet | 47,179,582 | 1 | 1 | 4,380 | 0 | python,installation | Type .\python.exe .\<your_script>.py, for example:
PS C:\Python27> .\python.exe .\PRACTICE.py
Typing only python will not be recognized by Windows. | 0 | 1 | 0 | 1 | 2016-05-21T10:54:00.000 | 1 | 0.197375 | false | 37,361,976 | 1 | 0 | 0 | 1 | I've installed Python and added its path "C:\Python27" to the system variables, but when typing "python" into PowerShell, I get the error mentioned in the title. I also can't run it from cmd.
And yes, my Python folder is in the C: directory.
Are there any disadvantages in using a Makefile.am over setup.py? | 37,369,042 | 3 | 2 | 580 | 0 | python,c++,autotools,automake,distutils | Based on your description, I would suggest that you have your project built using stock autotools-generated configure and Makefile, i.e. autoconf and automake, and have either your configure or your Makefile take care of executing your setup.py, in order to set up your Python bits.
I have a project that's mostly C/C++ code, together with a Perl module. This is very similar to what you're trying to do, except that it's Perl instead of Python.
In my Makefile (generated from Makefile.am) I have a target that executes the Perl module's Makefile.PL, which is analogous to Python's setup.py, and in that manner I build the Perl module together with the rest of the C++ code, seamlessly together, as a single build. Works fairly well.
automake's Makefile.am is very open-ended and flexible, and can be easily adapted and extended to incorporate foreign bits, like these. | 0 | 1 | 0 | 1 | 2016-05-21T22:00:00.000 | 1 | 1.2 | true | 37,368,441 | 1 | 0 | 0 | 1 | I'm working on transitioning a project from scons to autotools, since it seems to automatically generate a lot of features that are annoying to write in a SConscript (e.g. make uninstall).
The project is mostly C++ based, but also includes some modules that have been written in Python. After a lot of reading about autotools, I can finally create a shared library, compile and link an executable against it, and install C++ header files. Lovely. Now comes the Python part. By including AM_PATH_PYTHON in configure.ac, I'm also installing the Python modules with Makefile.am files such as
autopy_PYTHON=autopy/__init__.py autopy/noindent.py autopy/auto.py
submoda_PYTHON=autopy/submoda/moda.py autopy/submoda/modb.py autopy/submoda/modc.py autopy/submoda/__init__.py
submodb_PYTHON=autopy/submodb/moda.py autopy/submodb/modb.py autopy/submodb/modc.py autopy/submodb/__init__.py
autopydir=$(pythondir)/autopy
submodadir=$(pythondir)/submoda
submodbdir=$(pythondir)/submodb
dist_bin_SCRIPTS=scripts/script1 scripts/script2 scripts/script3
This seems to place all my modules and scripts in the appropriate locations, but I wonder if this is "correct", because the way to install python modules seems to be through a setup.py script via distutils. I do have setup.py scripts inside the python modules and scons was invoking them until I marched in with autotools. Is one method preferred over the other? Should I still be using setup.py when I build with autotools? I'd like to understand how people usually resolve builds with c++ and python modules using autotools. I've got plenty of other autotools questions, but I'll save those for later. |
pip works for python2.7 but not 3.5 | 37,375,107 | 3 | 1 | 800 | 0 | python,linux,pip,fedora,pyperclip | you can use python3 -m pip install pyperclip | 0 | 1 | 0 | 0 | 2016-05-22T13:32:00.000 | 1 | 1.2 | true | 37,375,019 | 1 | 0 | 0 | 1 | I'll start by saying I am a complete novice and am likely overlooking something obvious. Don't assume I have any idea about anything related to linux or python.
Anyway, I installed python 3.5 onto my computer which runs Fedora 23. Fedora comes prepackaged with 2.7. When I installed 3.5, I somehow installed it into my /home/user/Documents directory. I have since deleted that directory with rm -r -f /home/user/Documents/Python-3.5.1.
The issue I'm having is that I cannot install a module that I want to import for use in a Python 3.5 program.
When I type pip install pyperclip (I'm working through AutomateTheBoringStuff) pyperclip is installed for 2.7. If I open the python2.7 command line and type import pyperclip everything is fine, but if I try the same thing in the python3.5 command line I get an error saying the module does not exist.
I assume this is because pip installs the pyperclip module to the subdirectories associated with 2.7. How can I install modules for 3.5 using pip? |
Correct architectural way to send "datagrams" via TCP | 37,380,443 | 2 | 0 | 54 | 0 | python,sockets,tcp,packet,datagram | Since TCP is only an octet stream, this is not possible without glue - either around your data (i.e. framing) or inside your data (a structure with a clear end).
The way this is typically done is either by having a delimiter (like \r\n\r\n between HTTP header and body) or by prefixing your message with its size. In the latter case, just read the size (a fixed number of bytes) and then read that number of bytes for the actual message. | 0 | 1 | 1 | 0 | 2016-05-22T21:21:00.000 | 1 | 1.2 | true | 37,379,793 | 0 | 0 | 0 | 1 | I need to transmit full byte packets in my custom format via TCP. But if I understand correctly, TCP is a streaming protocol, so when I call the send method on the sender side, there is no guarantee that the data will be received with the same size on the receiver side when calling recv (it can be merged together by Nagle's algorithm and then split when it doesn't fit into a frame or into the buffer).
UDP provides full datagrams so there is no such issue.
So the question is: what is the best and correct way to receive the same packets as were sent, with the same size and no glue? I develop using Python.
I think I can use something like HDLC, but I am not sure that iterating through each byte would be the best choice.
Maybe there are some small open-source examples for this situation, or it is described in books?
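A minimal length-prefix framing sketch along the lines of the answer (all names illustrative): each message is sent as a 4-byte big-endian length followed by the payload, so the receiver can cut exact messages out of the TCP byte stream.
import struct

def send_msg(sock, payload):
    # Prefix the payload with its length so the peer knows where it ends.
    sock.sendall(struct.pack('>I', len(payload)) + payload)

def recv_exact(sock, n):
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise EOFError('socket closed mid-message')
        buf += chunk
    return buf

def recv_msg(sock):
    (length,) = struct.unpack('>I', recv_exact(sock, 4))
    return recv_exact(sock, length)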
Celery PeriodicTask per user | 37,637,827 | 1 | 2 | 341 | 0 | python,django,celery | The periodic task scheduler in Celery is not designed to handle thousands of scheduled tasks, so from a performance perspective a much better solution is to have one task running at the smallest interval (e.g. if you allow users to schedule daily, weekly, or monthly, running the task daily is enough).
Such an approach is also more stable - every time the schedule changes, all of the schedule records are reloaded.
Plus, it is more secure, because you do not expose or use any internal mechanisms for task execution | 0 | 1 | 0 | 0 | 2016-05-25T13:34:00.000 | 1 | 0.197375 | false | 37,438,867 | 0 | 0 | 1 | 1 | I'm working on a project whose main feature will be periodically running one type of async task for each user. Every user will be able to configure the task (running daily, weekly, etc. at a specified time). Also, the task will use some data stored by the user. Now I'm wondering which approach would be better: allow users to create their own PeriodicTask (by using some restricted endpoint, of course) or create a single PeriodicTask (for example running every 5 minutes) which will iterate over all users and determine whether the task should be queued or not for the current user? I think I will use AMQP as the broker.
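For illustration, the single-dispatcher pattern from the answer might be sketched like this; UserSchedule and its due() query are hypothetical stand-ins for however the per-user schedules are stored:
from celery import shared_task

@shared_task
def dispatch_due_tasks():
    # Scheduled by celery beat at the smallest interval you support.
    from myapp.models import UserSchedule  # hypothetical model
    for schedule in UserSchedule.objects.due():
        run_user_task.delay(schedule.user_id)

@shared_task
def run_user_task(user_id):
    pass  # the actual per-user async work goes here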
Subprocess Hanging on Wait | 37,557,915 | 0 | 0 | 678 | 0 | python,linux | The second command will not hang because the issue is not with a large amount of data on standard output, but a large amount of data on standard error.
In the former case, standard error is being redirected to standard output, which is being piped to your program. Hence, a large amount of data being produced on standard error gives an equivalent result to a large amount of data being produced on standard output.
In the latter case, the subprocess's standard error is redirected to the calling process's standard error, and hence can't get stuck in the pipe. | 0 | 1 | 0 | 0 | 2016-05-25T16:32:00.000 | 1 | 0 | false | 37,442,986 | 0 | 0 | 0 | 1 | From the documentation of Popen.wait(), I see
Warning This will deadlock when using stdout=PIPE and/or stderr=PIPE
and the child process generates enough output to a pipe such that it
blocks waiting for the OS pipe buffer to accept more data. Use
communicate() to avoid that.
I am having a bit of trouble understanding the behavior below as the command run below can generate a fairly large amount of standard out.
However, what I notice is that
subproc = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
will hang.
While
subproc = subprocess.Popen(command, stdout=subprocess.PIPE)
will not hang.
If the command is generating a large amount of standard out, why does the second statement not hang as we are still using stdout=subprocess.PIPE? |
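For reference, the communicate()-based version the documentation recommends might look like this sketch (command stands for whatever is being run):
import subprocess

proc = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()  # drains both pipes, so neither can fill and block
print(proc.returncode, len(out), len(err))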
Python FileNotFoundError when using open() | 37,445,004 | 0 | 1 | 1,226 | 0 | python,linux,command-line | Let's say you run pwd and it returns /home/myName. If you then run /home/myName/code/myProgram.py, the working directory of your program is not /home/myName/code; it's /home/myName. The working directory of a process is inherited from the parent process, not set based on where the script is located. | 0 | 1 | 0 | 0 | 2016-05-25T17:49:00.000 | 2 | 0 | false | 37,444,362 | 1 | 0 | 0 | 0 | I'm using with open('myFile', 'rb') as file: to read a file. When running the program with python myProgram.py everything works fine. But as soon as I try to run it without cd-ing into the directory of myProgram.py and use an absolute path instead (like python /home/myName/myCode/myProgram.py) I always get this error message: FileNotFoundError: [Errno 2] No such file or directory.
So why does open() behave differently depending on how the Python program is started? And is there a way to make things work even if starting with an absolute path?
I've already tried open('/home/myName/myCode/myfile', 'rb') but without success... |
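A minimal sketch of the usual workaround: resolve the file relative to the script itself instead of the current working directory ('myFile' is the name from the question):
import os

script_dir = os.path.dirname(os.path.abspath(__file__))
with open(os.path.join(script_dir, 'myFile'), 'rb') as f:
    data = f.read()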
What linux signals should I trap to make a good application | 37,451,543 | 4 | 2 | 102 | 0 | python,linux | Trap sigint, sigterm and make sure to clean up anything like sockets, files, locks, etc.
Trap other signals based on what you are doing. For instance if you have open pipes you might trap sigpipe.
Just remember signal handling opens you to race conditions. You probably want to use sigprocmask to disable signals while handling them. | 0 | 1 | 0 | 0 | 2016-05-26T01:50:00.000 | 1 | 1.2 | true | 37,450,288 | 0 | 0 | 0 | 1 | I'm creating a program in python that auto-runs for an embedded system and want to handle program interruptions gracefully. That is if possible close resources and signal child processes to also exit gracefully before actually exiting. At which point my watchdog should notice this and respawn everything.
What signals can/should I expect to receive in a non-interactive program from linux? I'm using try/except blocks to handle i/o errors.
Is the system shutting down an event that is signaled? In addition to my watchdog I will also have an independent process monitoring a hardware line that gets set when my power supply detects a brownout. I have a supercap to provide some run-time to allow a proper shutdown. |
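A minimal sketch of trapping SIGINT/SIGTERM as the answer suggests; cleanup() is a placeholder for the real resource and child-process teardown:
import signal
import sys

def cleanup():
    pass  # close sockets/files/locks, tell children to exit

def handle_shutdown(signum, frame):
    cleanup()
    sys.exit(0)

signal.signal(signal.SIGINT, handle_shutdown)
signal.signal(signal.SIGTERM, handle_shutdown)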
Pipelining or Otherwise Transferring Data Between Languages in Realtime | 37,495,749 | 1 | 3 | 150 | 0 | java,python,c++,pipelining | We had the same issue, where we had to share sensor data between one Java app and multiple other apps including Java, Python and R.
First we tried socket connections, but the socket communication was not fault tolerant. Restarting or failure in one app affected the others.
Then we tried RMI calls between them, but again we were unhappy due to scalability.
We wanted the system to be reliable, scalable, distributed and fault tolerant. So finally we started using RabbitMQ, where we created one producer and multiple consumers. It worked well for 2 years. You may also consider using Apache Kafka.
You have options like socket pipes, RMI calls, RabbitMQ, Kafka, or Redis based on your system requirements now and in the near future. | 0 | 1 | 0 | 1 | 2016-05-26T23:39:00.000 | 2 | 0.099668 | false | 37,472,688 | 0 | 0 | 0 | 0 | I'm working on a project whose core I am not at liberty to discuss, but I have reached a stumbling block. I need data to be transferred from C++ to some other language, preferably Java or Python, in realtime (~10ms latency).
We have a sensor that HAS to be parsed in C++. We are planning on doing a data read/output through bluetooth, most likely Java or C# (I don't quite know C#, but it seems similar to Java). C++ will not fit the bill, since I do not feel advanced enough to use it for what we need. The sensor parsing is already finished. The data transferring will be happening on the same machine.
Here are the methods I've pondered:
We tried using MatLab with whatever the Mex stuff is (I don't do MatLab) to access functions from our C++ program, to retrieve the data as an array. Matlab will be too slow (we read somewhere that the TX/RX will be limited to 1-20 Hz.)
Writing the data to a text, or other equivalent raw data, file constantly, and opening it with the other language as necessary.
I attempted to look this up, but nothing of use showed in the results. |
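On the Python side, a RabbitMQ consumer for the message-queue approach in the answer might look like this sketch (queue name illustrative; the basic_consume argument order differs between older and newer pika releases - this uses the pika 1.x style):
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='sensor_data')

def on_message(ch, method, properties, body):
    print('received %d bytes' % len(body))  # hand off to the real handler here

channel.basic_consume(queue='sensor_data', on_message_callback=on_message,
                      auto_ack=True)
channel.start_consuming()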
PL/Python in PostgreSQL | 42,190,637 | 0 | 0 | 278 | 1 | python,postgresql,plpython | Yes, it will; the package is independent of the standard Python installation. | 0 | 1 | 0 | 0 | 2016-05-27T15:19:00.000 | 1 | 1.2 | true | 37,487,072 | 0 | 0 | 0 | 0 | I was writing a PL/Python function for PostgreSQL, with Python 2.7 and Python 3.5 already installed on Linux.
When I was trying to create the extension plpythonu, I got an error; I fixed it by executing the command $ sudo apt-get install postgresql-contrib-9.3 postgresql-plpython-9.3 in the terminal. I understand that this is some other package.
If I do not have Python 2.7/3.5 installed but install the plpython package, will the user-defined function still work? Does PL/Python somehow depend on Python?
Python: Opening a program in a new terminal [Linux] | 37,494,585 | 4 | 3 | 4,772 | 0 | python,linux,raspberry-pi | I finally figured it out, but wanted to post the solution so others can find it in the future.
from subprocess import Popen, PIPE
Subprogram = Popen(['lxterminal', '-e', 'python ./Foo.py'], stdout=PIPE)
The lxterminal is the Raspberry Pi's terminal name, -e is required, python ./Foo.py launches the python file, and stdout=PIPE displays the output on the new terminal window.
Running the above launches Foo.py in a new terminal window, and allows the user to terminate the Foo.py process if desired. | 0 | 1 | 0 | 0 | 2016-05-27T22:49:00.000 | 2 | 0.379949 | false | 37,493,341 | 1 | 0 | 0 | 1 | I am writing a bootstrap program that runs several individual programs simultaneously. Thus, I require each sub-program to have its own terminal window, in a manner that gives me the ability to start/stop each sub-program individually within the bootstrap.
I was able to do this on Windows using Popen and CREATE_NEW_CONSOLE (each sub-program has its own .py file), however I am having trouble achieving this with Linux. I am using a Raspberry Pi and Python 2.7.9.
I have tried:
Subprogram = Popen([executable, 'Foo.py'], shell=True)
However this does not seem to create a new window.. and
os.system("python ./Foo.py")
Does not seem to create a new window nor allow me to terminate the process.
Other research has thus far proved unfruitful..
How can I do this? Many thanks in advance. |
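A small usage sketch building on the accepted answer: keeping the Popen handle lets the bootstrap stop the sub-program later (whether terminating lxterminal also ends the Python child can vary, so treat this as a starting point):
from subprocess import Popen, PIPE

sub = Popen(['lxterminal', '-e', 'python ./Foo.py'], stdout=PIPE)
# ... later, from the bootstrap:
sub.terminate()
sub.wait()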
Geany not running Python | 37,499,554 | 1 | 1 | 863 | 0 | python,geany | Execute: C:\Python35\python ¨%f¨
This string contains the diaeresis character (¨) (U+00A8) instead of the double quote (") (U+0022). | 0 | 1 | 0 | 0 | 2016-05-28T12:42:00.000 | 1 | 0.197375 | false | 37,499,130 | 0 | 0 | 0 | 1 | I'm a new user of Python
I installed Python35 for Windows. Hello.py runs fine in a terminal.
When trying to run the same in Geany the path is not found.
These are the settings in Geany:
Compile: C:\Python35\python -m py_compile "%f"
Execute: C:\Python35\python ¨%f¨
What am I doing wrong?
Attribute system similar to HTTP Headers for local files | 37,501,333 | 1 | 0 | 62 | 0 | python,file-io,go,attributes,metadata | If you are dealing with binary files like docx and pdf, you're best off storing the metadata in separate files or in a SQLite file.
Metadata is usually stored separate from files, in data structures called inodes (at least in Unix systems; Windows probably has something similar). But you probably don't want to go that deep into the rabbit hole.
If your goal is to query the system based on metadata, then it would be easier and more efficient to use something like SQLite. Having the metadata in the file would mean that you would need to open the file, read it into memory from disk, and then check the metadata - i.e. slower queries.
If you don't need to query based on metadata, then storing metadata in the file might make sense. It would reduce the dependencies in your application, but in order to access the contents of the file through Word or Adobe Reader, you'd need to strip the metadata before handing it off to the application. Not worth the hassle, usually | 0 | 1 | 0 | 0 | 2016-05-28T15:27:00.000 | 2 | 0.099668 | false | 37,500,810 | 0 | 0 | 0 | 1 | I am in the process of writing a program and need some guidance. Essentially, I am trying to determine if a file has some marker or flag attached to it. Sort of like the attributes for a HTTP Header.
If such a marker exists, that file will be manipulated in some way (moved to another directory).
My question is:
Where exactly should I be storing this flag/marker? Do files have a system similar to HTTP headers? I don't want to access or manipulate the contents of the file, just some kind of property of the file that can be edited without corrupting the actual file - and it must be rather universal among file types, as my potential domain of file types is unbounded. I have some experience with Web APIs, so I am familiar with HTTP headers and JSON. Does any similar system exist for local files in Windows? I am especially interested in anyone who has professional/industry knowledge of common techniques that programmers use when trying to store 'metadata' in files in order to access it later. Or if anyone knows where to point me, as I am unsure of what I should be researching.
For the record, I am going to write a program for Windows probably using Golang or Python. And the files I am going to manipulate will be potentially all common ones (.docx, .txt, .pdf, etc.) |
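For illustration, the SQLite-sidecar approach from the answer might be sketched like this (file and column names are only placeholders):
import sqlite3

db = sqlite3.connect('file_meta.db')
db.execute('CREATE TABLE IF NOT EXISTS meta (path TEXT PRIMARY KEY, flag TEXT)')
db.execute('INSERT OR REPLACE INTO meta VALUES (?, ?)', ('report.docx', 'move'))
db.commit()

row = db.execute('SELECT flag FROM meta WHERE path = ?', ('report.docx',)).fetchone()
print(row)  # ('move',)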
Route testing with Tornado | 37,504,714 | 3 | 1 | 129 | 0 | python,python-3.x,tornado,pytest | No, it is not currently possible to test this in Tornado via any public interface (as of Tornado version 4.3).
It's straightforward to avoid spinning up a server, although it requires a nontrivial amount of code: the interface between HTTPServer and Application is well-defined and documented. The trickier part is the other side: there is no supported way to determine which handler will be invoked before that handler is invoked.
I generally recommend testing routing via end-to-end tests for this reason. You could also store your URL route list before passing it into Tornado, and do your tests against that - the internal logic of "take the first regex match" is pretty easy to replicate. | 0 | 1 | 0 | 1 | 2016-05-28T23:09:00.000 | 1 | 1.2 | true | 37,504,566 | 0 | 0 | 1 | 1 | I'm new to Tornado, and working on a project that involves some rather complex routing. In most of the other frameworks I've used I've been able to isolate routing for testing, without spinning up a server or doing anything terribly complex. I'd prefer to use pytest as my testing framework, but I'm not sure it matters.
Is there a way to, say, create my project's instance of tornado.web.Application, and pass it arbitrary paths and assert which RequestHandler will be invoked based on that path? |
pip: "/Volumes/HD: bad interpreter: No such file or directory | 37,534,734 | 2 | 2 | 444 | 0 | python,pip,python-3.5 | The space in the name of your disk ("HD 2") is tripping things up. The path to the Python interpreter (which is going to be /Volumes/HD 2/Projects/PythonProjects/BV/bin/python) is getting split on the space, and the system is trying to execute /Volumes/HD.
You'd think that in 2016 your operating system ought to be able to deal with this. But it can't, so you need to work around it:
Rename "HD 2" to something that doesn't contain a space.
Re-create the virtualenv. | 0 | 1 | 0 | 0 | 2016-05-30T20:59:00.000 | 1 | 1.2 | true | 37,533,613 | 1 | 0 | 0 | 1 | After I activated my Virtualenv i received this message:
Francos-MBP:BV francoe$ source bin/activate
(BV) Francos-MBP:BV francoe$ pip freeze
-bash: /Volumes/HD 2/Projects/PythonProjects/BV/bin/pip: "/Volumes/HD: bad interpreter: No such file or directory
(BV) Francos-MBP:BV francoe$ pip install --upgrade pip -bash:
/Volumes/HD 2/Projects/PythonProjects/BV/bin/pip: "/Volumes/HD: bad
interpreter: No such file or directory
At the moment I am not able to set up any virtualenv.
[p.s. I have multiple versions of Python (3.5 and the system's version 2.7)]
Can anyone help me?
Thank you |
How to find what version of dawg I have? | 37,661,187 | 0 | 0 | 49 | 0 | python,import,python-import,dawg | Since dawg was installed with pip, the best way to learn what version I have is with pip list. (Credit to larsks' comment.) | 0 | 1 | 0 | 0 | 2016-05-31T13:11:00.000 | 2 | 1.2 | true | 37,546,726 | 1 | 0 | 0 | 0 | How can I find what version of dawg I have installed in Python? Usually packagename.__version__ does the trick, but dawg seems to lack the relevant attribute.
Python, choose logging files' directory | 37,547,070 | 9 | 15 | 34,734 | 0 | python,logging | Simply give a different filename, like filename=r"C:\User\Matias\Desktop\myLogFile.log" | 0 | 1 | 0 | 0 | 2016-05-31T13:13:00.000 | 4 | 1.2 | true | 37,546,770 | 1 | 0 | 0 | 0 | I am using the Python logging library and want to choose the folder where the log files will be written.
For the moment, I made an instance of TimedRotatingFileHandler with the entry parameter filename="myLogFile.log" . This way myLogFile.log is created on the same folder than my python script. I want to create it into another folder.
How could I create myLogFile.log into , let's say, the Desktop folder?
Thanks,
Matias |
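For illustration, the accepted answer applied to the question's TimedRotatingFileHandler might look like this sketch, using the Desktop folder via expanduser so it stays user-independent:
import logging
import os
from logging.handlers import TimedRotatingFileHandler

log_path = os.path.join(os.path.expanduser('~'), 'Desktop', 'myLogFile.log')
handler = TimedRotatingFileHandler(filename=log_path, when='midnight')
logging.getLogger().addHandler(handler)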
Should I use Popen's wait or communicate to read stdout in subprocess in Python 3? | 37,577,952 | 2 | 1 | 1,281 | 0 | python,subprocess | I think you should use communicate. The message warns you about performance issues with the default behaviour of the method. In fact, there's a buffer size parameter to the Popen constructor that can be tuned to improve performance a lot for large data sizes.
I hope it will help :) | 0 | 1 | 0 | 0 | 2016-06-01T20:05:00.000 | 1 | 0.379949 | false | 37,577,819 | 0 | 0 | 0 | 1 | I am trying to run a subprocess in Python 3 and constantly read the output.
In the documentation for subprocess in Python 3 I see the following:
Popen.wait(timeout=None)
Wait for child process to terminate. Set and return returncode attribute.
Warning This will deadlock when using stdout=PIPE and/or stderr=PIPE
and the child process generates enough output to a pipe such that it
blocks waiting for the OS pipe buffer to accept more data. Use
communicate() to avoid that.
Which makes me think I should use communicate as the amount of data from stdout is quite large. However, reading the documentation again shows this:
Popen.communicate(input=None, timeout=None)...
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached.
Note The data read is buffered in memory, so do not use this method if the data size is large or
unlimited.
So again, it seems like there are problems with reading standard out from subprocesses this way. Can someone please tell me the best / safest way to run a subprocess and read all of its (potentially large amount of) stdout?
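If the output really is too large to buffer whole, an incremental read is the usual escape hatch; a sketch (cmd and handle() are placeholders, and the deadlock warning still applies if stderr is piped but never drained):
import subprocess

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, universal_newlines=True)
for line in proc.stdout:  # reads as the child produces output
    handle(line)          # placeholder per-line processing
proc.wait()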
Is it safe to set python bin in $PATH to another python version? | 37,594,156 | 1 | 0 | 55 | 0 | python,debian,anaconda | My question is: will programs that depend on the python command and expect python2 work correctly?
Those programs should use the full path of the Python binary, something like /usr/bin/python, and so $PATH is irrelevant. As long as you don't change /usr/bin/python, nothing will break.
If you remove the stuff that Anaconda has added, it's likely that Anaconda will not work properly. | 0 | 1 | 0 | 0 | 2016-06-02T13:20:00.000 | 2 | 1.2 | true | 37,593,092 | 1 | 0 | 0 | 1 | I have just installed Anaconda3, and I noticed that now, when I run the python command from the terminal, Python 3.5.1 |Anaconda 4.0.0 (64-bit)| starts. The Anaconda installer added the path to the Anaconda dir to $PATH, and there is a symlink from python to python3.5.
My question is: will programs that depend on the python command and expect python2 work correctly, or should I remove the python symlink from the Anaconda dir?
pip is not recognized as an internal or external command | 37,618,999 | 0 | 0 | 32 | 0 | windows,python-2.7,pip | So I found my issue: I was running the command prompt from P4 directly. When running the command prompt outside of P4, I was able to run pip with no issues. | 0 | 1 | 0 | 0 | 2016-06-02T21:39:00.000 | 2 | 1.2 | true | 37,602,576 | 1 | 0 | 0 | 2 | I have tried setting my System variable Path=C:\Python27\Scripts\
I also have set User Variable Path=C:\Python27\
I can see it return correctly when I echo %PATH%
What else is needed for pip to work correctly? |
pip is not recognized as an internal or external command | 37,607,502 | 0 | 0 | 32 | 0 | windows,python-2.7,pip | You need to install pip too.
For convenience you can use some package manager to get started. Google for Python package managers (Anaconda etc.) | 0 | 1 | 0 | 0 | 2016-06-02T21:39:00.000 | 2 | 0 | false | 37,602,576 | 1 | 0 | 0 | 2 | I have tried setting my System variable Path=C:\Python27\Scripts\
I also have set User Variable Path=C:\Python27\
I can see it return correctly when I echo %PATH%
What else is needed for pip to work correctly? |
Is Google Cloud Datastore a Column Oriented NoSQL database? | 37,609,672 | 3 | 1 | 429 | 1 | python,google-app-engine,google-cloud-datastore | Strictly speaking, Google Cloud Datastore is a distributed multi-dimensional sorted map. As you mentioned, it is based on Google BigTable; however, that is only its foundation.
From high level point of view Datastore actually consists of three layers.
BigTable
This is a necessary base for Datastore. Maps row key, column key and timestamp (three-dimensional mapping) to an array of bytes. Data is stored in lexicographic order by row key.
High scalability and availability
Strong consistency for single row
Eventual consistency for multi-row level
Megastore
This layer adds transactions on top of the BigTable.
Datastore
A layer above Megastore. Enables to run queries as index scans on BigTable. Here index is not used for performance improvement but is required for queries to return results.
Furthermore, it optionally adds strong consistency at the multi-row level via ancestor queries. Such queries force the respective indexes to update before executing the actual scan. | 0 | 1 | 0 | 0 | 2016-06-02T21:41:00.000 | 1 | 0.53705 | false | 37,602,604 | 0 | 0 | 1 | 1 | From my understanding, BigTable is a Column Oriented NoSQL database. Although Google Cloud Datastore is built on top of Google's BigTable infrastructure, I have yet to see documentation that expressly says that Datastore itself is a Column Oriented database. The fact that names reserved by the Python API are enforced in the API, but not in the Datastore itself, makes me question the extent to which Datastore mirrors the internal workings of BigTable. For example, validation features in the ndb.Model class are enforced in the application code but not the datastore. An entity saved using the ndb.Model class can be retrieved someplace else in the app that doesn't use the Model class, modified, properties added, and then saved to datastore without raising an error until loaded into a new instance of the Model class. With that said, is it safe to say Google Cloud Datastore is a Column Oriented NoSQL database? If not, then what is it?
Gevent's libev, and Twisted's reactor | 71,033,066 | 0 | 2 | 318 | 0 | python,events,asynchronous,twisted,gevent | Short answer: Twisted is a network framework. Gevent tries to act as a library without requiring the programmer to change the way he programs. That's their focus, and not so much how that is achieved under the hood.
Long answer:
All asyncio libraries (Gevent, Asyncio, etc.) work pretty much the same:
Have a main loop running endlessly on a single thread.
When an event occurs, it's captured by the main loop.
The main loop decides based on different rules (scheduling) if it should continue checking for events or switch temporarily and give control to any subscriber functions to the event.
greenlet is a different library. It's very simple in that it just changes the order in which Python code is run and lets you jump back and forth between functions. Gevent uses it under the hood to implement its async features.
asyncio which comes with Python3 is like gevent. The big difference is the interface again. It requires the programmer to mark functions with async and allow him to explicitly wait for a subscribed function in the main loop with await.
Gevent is like asyncio. But instead of the keywords it patches existing code where appropriate. It uses greenlet under the hood to switch between main loop and subscribed functions and make it all work seamlessly.
Twisted as mentioned feels more like a framework than a library. It requires the programmer to follow very specific ways to achieve concurrency. Again though it has a main loop under the hood called reactor like everything else.
Back to your initial question: You can in theory replace the reactor with any loop (including gevent). But that would defeat the purpose. Probably Twisted's team decided to use their own version of a main loop for optimisation reasons. All these libraries use different scheduling in their main loops to meet their needs. | 0 | 1 | 0 | 1 | 2016-06-04T10:52:00.000 | 1 | 0 | false | 37,629,312 | 0 | 0 | 0 | 1 | I'm trying to figure out how Gevent works with respect to other asynchronous frameworks in python, like Twisted.
The key difference between Gevent and Twisted is that Gevent uses greenlets and monkey patching the standard library for an implicit behavior and a synchronous programming model whereas Twisted requires specific libraries and callbacks for an explicit behavior. The event loop in Gevent is libev/libevent, which is written in C, and the event loop in Twisted is the reactor, which is written in python.
Is there anything special about libev/libevent that allows for this implicit behavior? Why not use an event loop written in Python? Conversely, why isn't Twisted using libev/libevent? Is there any particular reason? Maybe it was simply a design choice and could have gone either way...
Theoretically, can Gevent's libev be replaced with another event loop, written in python, like Twisted's reactor? And can Twisted's reactor be replaced with libev? |
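For illustration, the implicit model the answer attributes to Gevent comes down to monkey patching before other imports, e.g.:
from gevent import monkey
monkey.patch_all()  # swaps blocking stdlib primitives for cooperative ones

import socket  # socket operations now yield to Gevent's libev loop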
Determine if a python program is running on WINE | 37,960,529 | 0 | 0 | 365 | 0 | python,pyserial,wine | First of all, and this is untested, try creating a symlink from .wine/dosdevices/COM1 to /dev/ttyS0. It should simply allow you to open the com port the Windows way.
If, however, you are determined to know whether you are running on Wine, the "official" way is to check whether the registry has the key "HKEY_LOCAL_MACHINE\Software\Wine".
Either way, if opening COM1 doesn't work on Wine, it is a bug and should be filed with the Wine bugzilla. | 0 | 1 | 0 | 0 | 2016-06-05T01:10:00.000 | 1 | 1.2 | true | 37,636,509 | 0 | 0 | 0 | 1 | I can check for Linux/Windows/cygwin/etc. with sys.platform, but on WINE it just reports 'win32'.
I am attempting to write a multi-platform application that uses pyserial, and I am using WINE to test setup of a Windows environment. On Windows serial ports are named COMxx, but on Linux they are /dev/ttyxxx. However, on WINE the serial ports have Linux names. I need to detect if it is running on WINE separate from Windows so I can handle this properly. |
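A sketch of the registry check from the answer: under Wine the key exists, while on real Windows it normally does not (the module is _winreg on Python 2 and winreg on Python 3):
try:
    import winreg  # Python 3
except ImportError:
    import _winreg as winreg  # Python 2

def running_on_wine():
    try:
        winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r'Software\Wine')
        return True
    except OSError:  # WindowsError on Python 2 is a subclass
        return False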
How to handle google api oauth in this app? | 37,647,153 | 2 | 0 | 45 | 0 | python,google-oauth | You can create a server-side script in which you use Google OAuth to upload videos to A's account.
Then you can create a client-side app which allows your clients B and C to upload their videos to the server; on completion, the server can then upload them to A's account.
Alternatively, to avoid uploading twice, if you trust the clients and would like them to be able to upload directly, you can pass them an OAuth access token to A's account. | 0 | 1 | 1 | 0 | 2016-06-05T20:53:00.000 | 1 | 1.2 | true | 37,646,652 | 0 | 0 | 0 | 0 | My client asked me to build a tool that would let him and his partners upload videos to YouTube, to his channel, automatically.
For example, let's say that my client is A and he has some business partners. A wants to be able to upload videos to his channel; that is easy to do. But the problem here is to let the other partners B and C upload their videos to his channel (the channel of person A).
In this case I would need "A" to authorize my app so he can upload videos to his own channel, but how can I handle that for the other users? How can users use the access token of person "A" to upload videos to his channel?
What have I done so far?
I've got the YouTube upload Python sample from the Google API docs and played with it a bit. I tried subprocess.Popen(cmd) where cmd is the following command: python upload.py --file "video name" --title "title of the vid".
This will lead the user to authorize my app once; that's only fine for person "A". The others won't be able to do that, since they need to upload the video to A's account.
Multiple threads inside docker container | 37,692,379 | 16 | 32 | 42,455 | 0 | python,multithreading,docker | A container as such has nothing to do with the computation you need to perform. The question you are posing is whether you should have multiple processes doing your processing or multiple threads spawned by the same process doing the processing.
A container is just a platform for running your application in the environment you want. Period. It means you would be running a process inside a container to run your business logic. Multiple containers simply means multiple processes, and as advised, you should go for multiple threads rather than multiple processes, since spawning a new process (in your case, a new container) would eat up more resources and would also require more memory, etc. So it is better to have just one container which spawns multiple threads to do the job for you.
However, it also depends upon the configuration of the underlying machine on which the container is started. If it makes sense to spawn multiple containers with multiple threads because of the multicore capabilities of the underlying hardware, you should do that as well. | 0 | 1 | 0 | 0 | 2016-06-06T12:18:00.000 | 2 | 1 | false | 37,657,280 | 1 | 0 | 0 | 1 | I need to spawn N threads inside a docker container. I am going to receive a list of elements, then divide it in chunks and each thread will process each chunk.
So I am using a docker container with one process and N threads. Is that good practice in Docker? I think so, because we have, e.g., the Apache webserver, which handles connections by spawning threads.
Or would it be better to spawn N containers, one for each chunk? If so, what is the correct way to do this?
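For illustration, the one-process/N-threads layout inside the container might be sketched like this (process_chunk is a placeholder for the real per-chunk work):
import threading

def process_chunk(chunk):
    pass  # placeholder for the real work

def run(elements, n_threads=4):
    size = (len(elements) + n_threads - 1) // n_threads
    chunks = [elements[i:i + size] for i in range(0, len(elements), size)]
    threads = [threading.Thread(target=process_chunk, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
Note that for CPU-bound work in CPython the GIL limits thread parallelism, so measure before settling on threads over processes.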
How to use virtualenv inside a folder? | 37,693,496 | 0 | 0 | 94 | 0 | python,virtualenv | This is pretty simple. Just go to the environment folder.
Try: Scripts/activate
This will activate the environment.
Try: Scripts/deactivate
This will deactivate the current environment | 0 | 1 | 0 | 0 | 2016-06-08T04:17:00.000 | 1 | 0 | false | 37,693,154 | 1 | 0 | 0 | 1 | I try to use virtualenv inside a folder using the command virtualenv . and I get the error -bash: virtualenv: command not found. However, I installed virtualenv using pip (pip install virtualenv) and also upgraded it earlier with sudo pip install virtualenv.
How do I use virtualenv properly? I'm following a tutorial and they seem to do the same and get away with it. I'm a Java developer with beginner knowledge of Python, working to improve it.
Plug-in org.python.pydev was unable to load class org.python.pydev.editor.PyEdit | 37,706,507 | 0 | 2 | 6,842 | 0 | python,eclipse,pydev | I have absolutely no idea why, but updating the Maven plugins solved the problem. | 0 | 1 | 0 | 0 | 2016-06-08T14:30:00.000 | 1 | 1.2 | true | 37,705,427 | 0 | 0 | 0 | 0 | My Eclipse stalled, so I shut it down (normally - I didn't send any kill signal or anything; the editor was bugging but the menu was still working, so I simply quit it from the menu).
When I reopened eclipse however I got the problem:
Plug-in org.python.pydev was unable to load class org.python.pydev.editor.PyEdit.
I am using Eclipse Kepler Release 2 Build id: 20140224-0627
with Java 8 and
PyDev 4.5.4.20160129223
I have tried rebuilding the workspace, cleaning the workspace, restarting it, but nothing works. I have now updated PyDev to PyDev 5 and it still gives me the same error.
Additionally, the Package Explorer can't load either and gives the error:
Plug-in org.eclipse.jdt.ui was unable to load class org.eclipse.jdt.internal.ui.packageview.PackageExplorerPart.
Any ideas?
The exact traceback is:
org.eclipse.core.runtime.CoreException: Plug-in org.python.pydev was unable to load class org.python.pydev.editor.PyEdit.
at org.eclipse.core.internal.registry.osgi.RegistryStrategyOSGI.throwException(RegistryStrategyOSGI.java:194)
at org.eclipse.core.internal.registry.osgi.RegistryStrategyOSGI.createExecutableExtension(RegistryStrategyOSGI.java:178)
at org.eclipse.core.internal.registry.ExtensionRegistry.createExecutableExtension(ExtensionRegistry.java:905)
at org.eclipse.core.internal.registry.ConfigurationElement.createExecutableExtension(ConfigurationElement.java:243)
at org.eclipse.core.internal.registry.ConfigurationElementHandle.createExecutableExtension(ConfigurationElementHandle.java:55)
at org.eclipse.ui.internal.WorkbenchPlugin.createExtension(WorkbenchPlugin.java:274)
at org.eclipse.ui.internal.registry.EditorDescriptor.createEditor(EditorDescriptor.java:235)
at org.eclipse.ui.internal.EditorReference.createPart(EditorReference.java:318)
at org.eclipse.ui.internal.e4.compatibility.CompatibilityPart.createPart(CompatibilityPart.java:266)
at org.eclipse.ui.internal.e4.compatibility.CompatibilityEditor.createPart(CompatibilityEditor.java:61)
at org.eclipse.ui.internal.e4.compatibility.CompatibilityPart.create(CompatibilityPart.java:304)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.eclipse.e4.core.internal.di.MethodRequestor.execute(MethodRequestor.java:56)
at org.eclipse.e4.core.internal.di.InjectorImpl.processAnnotated(InjectorImpl.java:877)
at org.eclipse.e4.core.internal.di.InjectorImpl.processAnnotated(InjectorImpl.java:857)
at org.eclipse.e4.core.internal.di.InjectorImpl.inject(InjectorImpl.java:119)
at org.eclipse.e4.core.internal.di.InjectorImpl.internalMake(InjectorImpl.java:333)
at org.eclipse.e4.core.internal.di.InjectorImpl.make(InjectorImpl.java:254)
at org.eclipse.e4.core.contexts.ContextInjectionFactory.make(ContextInjectionFactory.java:162)
at org.eclipse.e4.ui.internal.workbench.ReflectionContributionFactory.createFromBundle(ReflectionContributionFactory.java:102)
at org.eclipse.e4.ui.internal.workbench.ReflectionContributionFactory.doCreate(ReflectionContributionFactory.java:71)
at org.eclipse.e4.ui.internal.workbench.ReflectionContributionFactory.create(ReflectionContributionFactory.java:53)
at org.eclipse.e4.ui.workbench.renderers.swt.ContributedPartRenderer.createWidget(ContributedPartRenderer.java:129)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createWidget(PartRenderingEngine.java:949)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:633)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:735)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$2(PartRenderingEngine.java:706)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$7.run(PartRenderingEngine.java:700)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:685)
at org.eclipse.e4.ui.workbench.renderers.swt.StackRenderer.showTab(StackRenderer.java:1147)
at org.eclipse.e4.ui.workbench.renderers.swt.LazyStackRenderer.postProcess(LazyStackRenderer.java:96)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:649)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:735)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$2(PartRenderingEngine.java:706)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$7.run(PartRenderingEngine.java:700)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:685)
at org.eclipse.e4.ui.workbench.renderers.swt.SWTPartRenderer.processContents(SWTPartRenderer.java:62)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:645)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$6.run(PartRenderingEngine.java:526)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:511)
at org.eclipse.e4.ui.workbench.renderers.swt.ElementReferenceRenderer.createWidget(ElementReferenceRenderer.java:61)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createWidget(PartRenderingEngine.java:949)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:633)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:735)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$2(PartRenderingEngine.java:706)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$7.run(PartRenderingEngine.java:700)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:685)
at org.eclipse.e4.ui.workbench.renderers.swt.SWTPartRenderer.processContents(SWTPartRenderer.java:62)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:645)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:735)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$2(PartRenderingEngine.java:706)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$7.run(PartRenderingEngine.java:700)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:685)
at org.eclipse.e4.ui.workbench.renderers.swt.SWTPartRenderer.processContents(SWTPartRenderer.java:62)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:645)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:735)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$2(PartRenderingEngine.java:706)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$7.run(PartRenderingEngine.java:700)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:685)
at org.eclipse.e4.ui.workbench.renderers.swt.SWTPartRenderer.processContents(SWTPartRenderer.java:62)
at org.eclipse.e4.ui.workbench.renderers.swt.PerspectiveRenderer.processContents(PerspectiveRenderer.java:59)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:645)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:735)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$2(PartRenderingEngine.java:706)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$7.run(PartRenderingEngine.java:700)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:685)
at org.eclipse.e4.ui.workbench.renderers.swt.PerspectiveStackRenderer.showTab(PerspectiveStackRenderer.java:103)
at org.eclipse.e4.ui.workbench.renderers.swt.LazyStackRenderer.postProcess(LazyStackRenderer.java:96)
at org.eclipse.e4.ui.workbench.renderers.swt.PerspectiveStackRenderer.postProcess(PerspectiveStackRenderer.java:77)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:649)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:735)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$2(PartRenderingEngine.java:706)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$7.run(PartRenderingEngine.java:700)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:685)
at org.eclipse.e4.ui.workbench.renderers.swt.SWTPartRenderer.processContents(SWTPartRenderer.java:62)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:645)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:735)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$2(PartRenderingEngine.java:706)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$7.run(PartRenderingEngine.java:700)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:685)
at org.eclipse.e4.ui.workbench.renderers.swt.SWTPartRenderer.processContents(SWTPartRenderer.java:62)
at org.eclipse.e4.ui.workbench.renderers.swt.WBWRenderer.processContents(WBWRenderer.java:581)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:645)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:735)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$2(PartRenderingEngine.java:706)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$7.run(PartRenderingEngine.java:700)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:685)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$9.run(PartRenderingEngine.java:1042)
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.run(PartRenderingEngine.java:997)
at org.eclipse.e4.ui.internal.workbench.E4Workbench.createAndRunUI(E4Workbench.java:140)
at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:611)
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)
at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:567)
at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:150)
at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:124)
at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:354)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:181)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:636)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:591)
at org.eclipse.equinox.launcher.Main.run(Main.java:1450)
at org.eclipse.equinox.launcher.Main.main(Main.java:1426)
Caused by: java.lang.NoClassDefFoundError: org/eclipse/ui/editors/text/TextEditor
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.defineClass(DefaultClassLoader.java:188)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.defineClassHoldingLock(ClasspathManager.java:638)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.defineClass(ClasspathManager.java:613)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findClassImpl(ClasspathManager.java:574)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClassImpl(ClasspathManager.java:492)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClass(ClasspathManager.java:465)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.findLocalClass(DefaultClassLoader.java:216)
at org.eclipse.osgi.internal.loader.BundleLoader.findLocalClass(BundleLoader.java:395)
at org.eclipse.osgi.internal.loader.SingleSourcePackage.loadClass(SingleSourcePackage.java:35)
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:461)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.defineClass(DefaultClassLoader.java:188)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.defineClassHoldingLock(ClasspathManager.java:638)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.defineClass(ClasspathManager.java:613)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findClassImpl(ClasspathManager.java:574)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClassImpl(ClasspathManager.java:492)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClass(ClasspathManager.java:465)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.findLocalClass(DefaultClassLoader.java:216)
at org.eclipse.osgi.internal.loader.BundleLoader.findLocalClass(BundleLoader.java:395)
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:464)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.defineClass(DefaultClassLoader.java:188)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.defineClassHoldingLock(ClasspathManager.java:638)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.defineClass(ClasspathManager.java:613)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findClassImpl(ClasspathManager.java:574)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClassImpl(ClasspathManager.java:492)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClass(ClasspathManager.java:465)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.findLocalClass(DefaultClassLoader.java:216)
at org.eclipse.osgi.internal.loader.BundleLoader.findLocalClass(BundleLoader.java:395)
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:464)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.eclipse.osgi.internal.loader.BundleLoader.loadClass(BundleLoader.java:340)
at org.eclipse.osgi.framework.internal.core.BundleHost.loadClass(BundleHost.java:229)
at org.eclipse.osgi.framework.internal.core.AbstractBundle.loadClass(AbstractBundle.java:1212)
at org.eclipse.core.internal.registry.osgi.RegistryStrategyOSGI.createExecutableExtension(RegistryStrategyOSGI.java:174)
... 120 more
Caused by: org.eclipse.core.runtime.internal.adaptor.EclipseLazyStarter$TerminatingClassNotFoundException: An error occurred while automatically activating bundle org.eclipse.ui.editors (216).
at org.eclipse.core.runtime.internal.adaptor.EclipseLazyStarter.postFindLocalClass(EclipseLazyStarter.java:124)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClass(ClasspathManager.java:469)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.findLocalClass(DefaultClassLoader.java:216)
at org.eclipse.osgi.internal.loader.BundleLoader.findLocalClass(BundleLoader.java:395)
at org.eclipse.osgi.internal.loader.SingleSourcePackage.loadClass(SingleSourcePackage.java:35)
at org.eclipse.osgi.internal.loader.MultiSourcePackage.loadClass(MultiSourcePackage.java:31)
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:452)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at com.laboki.eclipse.plugin.smartsave.main.EditorContext.(EditorContext.java:81)
at com.laboki.eclipse.plugin.smartsave.task.AsyncTask$1.runTask(AsyncTask.java:17)
at com.laboki.eclipse.plugin.smartsave.task.TaskJob.run(TaskJob.java:28)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:53)
Caused by: org.osgi.framework.BundleException: Exception in org.eclipse.ui.internal.editors.text.EditorsPlugin.start() of bundle org.eclipse.ui.editors.
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:734)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:683)
at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:381)
at org.eclipse.osgi.framework.internal.core.AbstractBundle.start(AbstractBundle.java:300)
at org.eclipse.osgi.framework.util.SecureAction.start(SecureAction.java:478)
at org.eclipse.osgi.internal.loader.BundleLoader.setLazyTrigger(BundleLoader.java:263)
at org.eclipse.core.runtime.internal.adaptor.EclipseLazyStarter.postFindLocalClass(EclipseLazyStarter.java:109)
... 14 more
Caused by: org.eclipse.swt.SWTException: Invalid thread access
at org.eclipse.swt.SWT.error(SWT.java:4397)
at org.eclipse.swt.SWT.error(SWT.java:4312)
at org.eclipse.swt.SWT.error(SWT.java:4283)
at org.eclipse.swt.widgets.Display.error(Display.java:1204)
at org.eclipse.swt.widgets.Display.checkDevice(Display.java:759)
at org.eclipse.swt.widgets.Display.disposeExec(Display.java:1181)
at org.eclipse.jface.resource.ColorRegistry.hookDisplayDispose(ColorRegistry.java:268)
at org.eclipse.jface.resource.ColorRegistry.&lt;init&gt;(ColorRegistry.java:123)
at org.eclipse.jface.resource.ColorRegistry.&lt;init&gt;(ColorRegistry.java:106)
at org.eclipse.ui.internal.themes.WorkbenchThemeManager.&lt;init&gt;(WorkbenchThemeManager.java:98)
at org.eclipse.ui.internal.themes.WorkbenchThemeManager.getInstance(WorkbenchThemeManager.java:58)
at org.eclipse.ui.internal.Workbench.getThemeManager(Workbench.java:3232)
at org.eclipse.ui.internal.editors.text.EditorsPlugin.start(EditorsPlugin.java:214)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:711)
at java.security.AccessController.doPrivileged(Native Method)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:702)
... 20 more |
pyAudio and PJSIP in a Virtual Machine | 37,734,848 | 0 | 1 | 689 | 0 | python,audio,amazon-ec2,pjsip,pyaudio | Alright, this isn't the most reliable solution but it does seem to work.
To start with, you must verify that you have PulseAudio installed and working.
Use whatever package manager you need:
apt-get/yum/zypper install pulseaudio pulseaudio-devel alsa-lib alsa-devel alsa-plugins-pulseaudio
pulseaudio --start
pacmd load-module module-null-sink sink_name=MySink
pacmd update-sink-proplist MySink device.description=MySink
This will allow you to pass audio around in your VM so that it can be sent out using pjsip.
If you don't have your own loopback written in Python, you can use:
pacmd load-module module-loopback sink=MySink
to pass audio back out. If you do have your own loopback written, you cannot use both. | 0 | 1 | 1.2 | true | 37,712,275 | 0 | 0 | 1 | 1 | I am writing a SIP client in Python. I am able to make my script run on my computer just fine. It plays a wav file, grabs the audio, and then sends the audio out using a SIP session. However, I am having a very hard time getting this to run in the AWS EC2 VM. The VM is running SUSE 12.
There seem to be a lot of questions related to audio loopbacks and piping audio around, but I haven't found any that cover all of the issues I'm having.
I have tried figuring out how to set one up using pacmd but haven't had any luck. I have Dummy Output and Monitor of Dummy Output as defaults, but that didn't work.
When I try to open the stream, I still get a 'no default output device' error.
What I am trying to find is a way to have a virtual sound card (I guess) that I can use for the channels on the SIP call and stream the wav file into.
Any advice or direction would be very helpful.
Thanks in advance |
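For anyone reproducing this, a minimal PyAudio sketch to check what the VM actually exposes once the PulseAudio null sink above exists (a sketch only; it assumes pyaudio is installed and PulseAudio is running):
import pyaudio

pa = pyaudio.PyAudio()

# List every device PortAudio can see; the PulseAudio null sink should appear here
for i in range(pa.get_device_count()):
    info = pa.get_device_info_by_index(i)
    print(i, info["name"], "output channels:", info["maxOutputChannels"])

# Opening the default output stream raises the "no default output device"
# error if PulseAudio/ALSA is not wired up correctly
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=8000, output=True)
stream.close()
pa.terminate()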
Python Twisted framework transport.write to a C# socket BeginReceive reading length-based message framing value | 37,736,839 | 0 | 0 | 72 | 0 | c#,python,sockets,twisted,beginreceive | Sorry, the question was badly asked. I did find the solution, though.
int netmsg_size = BitConverter.ToInt32(state.buffer, 0); // raw length prefix, still in network byte order
int msg_size = IPAddress.NetworkToHostOrder(netmsg_size); // convert to host byte order
This converts the network-order integer back into a regular host-order integer. | 1 | 1 | 0 | 0 | 2016-06-09T06:19:00.000 | 1 | 0 | false | 37,718,216 | 0 | 0 | 0 | 1 | I'm using length-based message framing with the Python Twisted framework and a C# client running BeginReceive async reads, and I'm having trouble grabbing the length of the message.
This is the Twisted Python code:
self.transport.write(pack(self.structFormat, len(string)) + string)
And this is the C# code:
int bytesRead = client.EndReceive(ar);
if (bytesRead > 0)
{
int msg_size = BitConverter.ToInt32(state.buffer, 0);
The problem is that the len(string) value is not correct when I read it via BitConverter on the C# side.
The value should be 15, but it's coming across as 251658240.
Any insight would be much appreciated. |
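For reference, the byte-order mismatch can be reproduced in a few lines of Python (a sketch; it assumes structFormat is the network-order "!I" that Twisted's Int32StringReceiver uses):
import struct

# Twisted packs the 4-byte length prefix in network (big-endian) order
prefix = struct.pack("!I", 15)          # b'\x00\x00\x00\x0f'

# BitConverter.ToInt32 reads little-endian on x86, which swaps the bytes
print(struct.unpack("<I", prefix)[0])   # 251658240
print(struct.unpack("!I", prefix)[0])   # 15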
Set executable permission from Windows host on Linux filesystem | 37,723,343 | 1 | 0 | 48 | 0 | python | Short answer: No.
Slightly longer answer: It would perhaps not be impossible to write a Windows Samba driver that supports this, but you seem to be asking for an existing solution. | 0 | 1 | 1.2 | true | 37,723,041 | 0 | 0 | 0 | 1 | I have a Python script which runs on a Windows machine. On this machine I have mounted a Samba filesystem (on a Linux host).
When I now try to change file permissions on the filesystem with os.chmod(S_IXUSR), it doesn't set the executable permission; as some research I did suggests, Windows is hardcoded to ignore this.
Do I have any chance of changing Unix file permissions from a Windows host using Python?
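For reference, where chmod is honoured, you would normally OR the executable bit into the existing mode rather than pass S_IXUSR alone (a minimal sketch; the path is a placeholder):
import os
import stat

path = "script.sh"  # placeholder

# Add the owner-executable bit while preserving the existing bits;
# on Windows, os.chmod honours only the read-only flag and ignores the rest
os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)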
How to repeat the last command on the command-line in the Python debugger, PuDB | 37,736,539 | 20 | 13 | 2,124 | 0 | python,pudb | Ctrl-n / Ctrl-p: browse the command-line history. | 0 | 1 | 1 | false | 37,736,033 | 0 | 0 | 0 | 1 | I'm on Linux and expected it to work like pdb and gdb, i.e., press Enter to repeat the last command. I understand the debugger has a Variables watch window.
How to make a .py file to .exe from Ubuntu to run it on Windows? | 70,057,993 | -1 | 1 | 216 | 0 | python-3.x,ubuntu,exe | In my opinion, it's not possible to create Windows executables on Linux. | 0 | 1 | -0.197375 | false | 37,746,658 | 1 | 0 | 0 | 1 | I have made a Python script using Python 3.5; it uses many packages like tkinter, matplotlib, pylab, etc. I want to convert the .py file to a .exe so that I can give it to people to run on Windows. However, I need to do the conversion from Ubuntu only.
Python, tcpServer tcpClient, [WinError 10061] | 37,773,623 | 0 | 0 | 331 | 0 | python,sockets,tcpclient,tcpserver | There are most likely two reasons for that:
1.) Your server application is not listening on that particular ip/port
2.) A firewall is blocking that ip/port
I would recommend checking your firewall settings. You could start by turning your firewall off to determine whether it really is a firewall issue.
If so, just add an accept rule for your webservice (ip:port).
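As a quick sanity check from the client machine, a Python (3) equivalent of a telnet test (a minimal sketch; host and port are placeholders for your server's external address):
import socket

HOST = "203.0.113.10"  # placeholder: the server's external IP
PORT = 5000            # placeholder: the port your tcpServer listens on

try:
    # Succeeds only if something is actually accepting on that ip/port
    with socket.create_connection((HOST, PORT), timeout=5):
        print("reachable")
except OSError as exc:
    print("not reachable:", exc)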
edit: And check your routing configuration if you are in a more or less complex network. Make sure that both networks can reach each other (e.g. ping the hosts or try to connect via telnet). | 0 | 1 | 1 | 0 | 2016-06-12T11:12:00.000 | 1 | 1.2 | true | 37,773,568 | 0 | 0 | 0 | 1 | When I try to run tcpServer and tcpClient on the same local network, it works, but I can't run them on the external network. The OS refuses the connection.
Main builtins.ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it
I checked whether tcpServer is running or not using netstat, and it is in the listening state.
What am I supposed to do? |