Dataset columns (name: dtype, observed min to max):
Title: stringlengths, 15 to 150
A_Id: int64, 2.98k to 72.4M
Users Score: int64, -17 to 470
Q_Score: int64, 0 to 5.69k
ViewCount: int64, 18 to 4.06M
Database and SQL: int64, 0 to 1
Tags: stringlengths, 6 to 105
Answer: stringlengths, 11 to 6.38k
GUI and Desktop Applications: int64, 0 to 1
System Administration and DevOps: int64, 1 to 1
Networking and APIs: int64, 0 to 1
Other: int64, 0 to 1
CreationDate: stringlengths, 23 to 23
AnswerCount: int64, 1 to 64
Score: float64, -1 to 1.2
is_accepted: bool, 2 classes
Q_Id: int64, 1.85k to 44.1M
Python Basics and Environment: int64, 0 to 1
Data Science and Machine Learning: int64, 0 to 1
Web Development: int64, 0 to 1
Available Count: int64, 1 to 17
Question: stringlengths, 41 to 29k
Intellij Subprocess: No such file or directory
40,645,069
1
0
96
0
python,python-2.7,subprocess
The problem was that swipl is under /opt/local/bin/ and IntelliJ was running in a virtual environment. Changing the python interpreter under the run configurations seemed to solve it. (A small PATH sketch follows this entry.)
0
1
0
0
2016-11-17T00:54:00.000
1
1.2
true
40,645,002
0
0
0
1
I am trying to execute result_b = subprocess.check_output(['swipl']) where swipl is the name of a process. I constantly get the 'No such file or directory' error. However, if I execute that same statement within the python interpreter, it works. What's going on here? Both running in the same directory and both on the same version. I tried all the things that were mentioned in other stack overflow posts, but to no avail. Is this some kind of $PATH problem? result_b = subprocess.check_output(['ls']) does seem to work.
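The answer above points at a PATH difference between the IDE's run configuration and the interactive interpreter. Below is a minimal sketch of two common workarounds: calling the binary by absolute path, or extending PATH only for the child process. The /opt/local/bin path comes from the answer; the --version flag and everything else is illustrative.

```python
import os
import subprocess

# Option 1: call the binary by its absolute path (path taken from the answer above).
result_a = subprocess.check_output(['/opt/local/bin/swipl', '--version'])

# Option 2: extend PATH only for the child process.
env = os.environ.copy()
env['PATH'] = '/opt/local/bin:' + env.get('PATH', '')
result_b = subprocess.check_output(['swipl', '--version'], env=env)
print(result_a, result_b)
```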
Can we migrate processes across different multiprocessing pools in python?
40,647,910
0
1
395
0
python,process,python-multiprocessing
Once a task is begun by a process in one pool, you can't pause it and add it to another pool. What you could do, however, is have the handler in the first process return a tuple instead of just the value it's computing. The first thing in the tuple could be a boolean representing whether or not the task has finished, and the second thing could be the answer, or partial answer if it's not complete. Then, you could write some additional logic to take any returned values that are marked unfinished, and pass them to the second process, along with the data that's already been computed and returned from the first process. Unfortunately, this will require you to come up with a way to store partial work, which could be very easy or very hard depending on what you're doing.
0
1
0
0
2016-11-17T05:31:00.000
1
0
false
40,647,363
1
0
0
1
I want to do some task scheduling using the python multiprocessing module. I have two pools, p1 and p2, one with high priority and one with low priority. A task is first put into the high-priority pool. If after a certain amount of time, say 10s, the task still hasn't finished, I will migrate it to the lower-priority pool. The question is: can I migrate the task from one pool to another without wasting the work that is already done in the first pool? Basically, I want to pause a running subprocess in one pool, add it to another pool, and then resume it. If the second pool is busy, the task will wait until a free slot is available.
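A minimal sketch of the pattern the answer describes: the worker returns a (finished, partial_state) tuple, and unfinished work is resubmitted to the second pool together with the partial state. The counting task, the deadlines and the pool sizes are all illustrative, not from the original post; this assumes the task can express its partial progress as a plain value.

```python
import time
from multiprocessing import Pool

def work(args):
    state, deadline = args
    target = 1_000_000           # illustrative task: count up to a target
    while state < target:
        if time.time() > deadline:
            return (False, state)   # not finished: hand back the partial state
        state += 1
    return (True, state)            # finished

if __name__ == '__main__':
    high = Pool(4)   # high-priority pool
    low = Pool(2)    # low-priority pool

    done, state = high.apply(work, args=((0, time.time() + 10),))
    if not done:
        # migrate the partial state to the low-priority pool with a longer deadline
        done, state = low.apply(work, args=((state, time.time() + 3600),))
    print(done, state)
```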
Python subprocess.Popen - How to capture child's backtrace upon abort
40,656,500
0
1
952
0
python,subprocess,stack-trace,backtrace
What you can do is redirect the stdout and stderr of your subprocess.Popen() to a file and check them later. That way it should be possible to inspect the backtrace after the process terminates. A good logging mechanism will give you that :-) Hope this helps. (A short sketch follows this entry.)
0
1
0
0
2016-11-17T13:14:00.000
2
0
false
40,655,912
0
0
0
1
I want to run a process in a loop and if the process returns 0, I must rerun it. If it aborts, I have to capture its stack trace (backtrace). I'm using subprocess.Popen() and .communicate() to run the process. Now .returncode is 134, i.e. the child has received SIGABRT; is there any way I can capture the backtrace (stack trace) of the child? Since this is a testing tool, I have to capture all the necessary information before I forward it to the dev team.
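A minimal sketch of the redirection the answer describes: send the child's stdout/stderr to log files, then inspect them when the return code indicates SIGABRT. The command name is a placeholder.

```python
import subprocess

CMD = ['./my_tool']  # placeholder for the tested executable

with open('child_stdout.log', 'wb') as out, open('child_stderr.log', 'wb') as err:
    proc = subprocess.Popen(CMD, stdout=out, stderr=err)
    proc.communicate()

# Popen reports death by SIGABRT as -6; 134 (128 + 6) shows up when a shell sits in between.
if proc.returncode in (-6, 134):
    with open('child_stderr.log') as err:
        print(err.read())   # whatever backtrace the child printed before aborting
```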
Import Error from binary dependency file
40,664,272
1
1
317
0
python,import,undefined-symbol,dawg
I found the answer for my very specific case, for anyone that may run into this case as well: I am using Anaconda (python 3 version) and installing the package with conda install -c package package worked instead of pip install package. I hope this helps someone.
0
1
0
1
2016-11-17T19:51:00.000
1
0.197375
false
40,663,830
0
0
0
1
I am attempting to run a package after installing it, but am getting this error: ImportError: /home/brownc/anaconda3/lib/python3.5/site-packages/dawg.cpython-35m-x86_64-linux-gnu.so: undefined symbol: _ZTVNSt7__cxx1118basic_stringstreamIcSt11char_traitsIcESaIcEEE The dawg....gnu.so file is binary and so it doesn't give much information when opened in sublime. I don't know enough about binary files in order to go in and remove the line or fix it. Is there a simple fix for this that I am not aware of?
Move up in directory structure
40,666,955
0
1
930
0
python,file,directory,relative-path
Try this one: os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), "config")
0
1
0
1
2016-11-17T23:24:00.000
3
0
false
40,666,853
0
0
0
1
Say I am running a Python script in C:\temp\templates\graphics. I can get the current directory using currDir = os.getcwd(), but how can I use relative path to move up in directories and execute something in C:\temp\config (note: this folder will not always be in C:\)?
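The one-liner in the answer climbs three directory levels with nested os.path.dirname() calls. Here is the same idea written out, plus a pathlib equivalent; the "config" folder name comes from the question, and the use of __file__ assumes the script is run from a file.

```python
import os
from pathlib import Path

# Three nested dirname() calls move from .../temp/templates/graphics/script.py up to .../temp
config_dir = os.path.join(
    os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))),
    "config",
)

# pathlib equivalent: parents[2] is three levels above the script file
config_dir2 = Path(__file__).resolve().parents[2] / "config"
print(config_dir, config_dir2)
```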
How to remove Python in Cent OS?
40,677,984
0
0
210
0
python,linux,centos
Python is required by many of the Linux distributions. Many of the system utilities the distro providers ship (both GUI based and not) are programmed in Python. The version of python the system utilities are programmed in I will call the "main" python; for Ubuntu 12.04, e.g., this is 2.7.3, the version that you get when invoking python on a freshly installed system. Because of the system utilities that are written in python, it is impossible to remove the main python without breaking the system. It even takes a lot of care to update the main python with a later version in the same major.minor series, as you need to compile it with the same configuration specs as the main python. This is needed to get the correct search paths for the libraries that the main python uses, which is often not exactly what a ./configure without options would get you when you compile python from source. Installing a version different from the major.minor version the system uses (i.e. the main python) normally is not a problem: you can compile a 2.6 or 3.4 python and install it without a problem, as it is installed next to the main (2.7.X) python. Sometimes a distro provides these different major.minor packages, but they might not be the latest bug-release version in that series. The problems start when you want to use the latest in the main python series (e.g. 2.7.8 on a system whose main python version is 2.7.3). I recommend not trying to replace the main python, but instead compiling and installing the 2.7.8 in a separate location (mine is in /opt/python/2.7.8). This will keep you on the security fix schedule of your distribution and guarantees that someone else tests compatibility of the python libraries against that version (as used by the system utilities!).
0
1
0
1
2016-11-18T12:54:00.000
1
1.2
true
40,677,670
0
0
0
1
How do I remove Python 2.6 from CentOS? I tried the command yum remove python, but afterwards python --version still reports a version.
How can I run a python script on an EC2 instance?
40,711,336
1
1
3,179
0
python,amazon-ec2
If your EC2 instance is running a Linux OS, you can use the following command to install Python: sudo apt-get install python*.*, where the * represents the version you want to install, such as 2.7 or 3.4. Then use the python command with the first argument as the location of the python file to run it.
0
1
0
1
2016-11-21T00:50:00.000
1
0.197375
false
40,711,286
0
0
0
1
I've ssh'd into my EC2 instance and have the python script and .txt files I'm using on my local system. What I'm trying to figure out is how to transfer the .py and .txt files to my instance and then run them there? I've not even been able to install python on the instance yet
Which linux kernel system calls shows bytes read from disk
40,730,003
0
0
130
0
python,linux,lttng
Handling read, write, pread, pwrite, readv, writev should be enough. You just have to check whether the FD refers to the cache or disk. I think it would be easier in kernelspace, by writing a module, but...
0
1
0
0
2016-11-21T21:42:00.000
1
0
false
40,729,919
0
0
0
1
I have a python program that reads Linux kernel system calls (using LTTng), so with this program I can read all kernel calls. I run some operations that include IO work, and then I analyse the system calls with the python program. I need to know how many bytes are read from the cache and how many are read from disk. Which system calls show me the bytes read from cache and from disk?
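The answer suggests tracking the read-family syscalls and checking whether each FD was served from cache or disk. As a simpler cross-check outside LTTng, Linux also exposes per-process counters in /proc/<pid>/io: rchar counts all bytes read by read-like calls, while read_bytes counts bytes actually fetched from the storage layer. A small sketch that reads those counters; whether this level of detail is enough for the poster's analysis is an assumption, and the PID is a placeholder.

```python
def read_io_counters(pid):
    """Return the counters from /proc/<pid>/io as a dict of ints."""
    counters = {}
    with open('/proc/%d/io' % pid) as f:
        for line in f:
            key, value = line.split(':')
            counters[key.strip()] = int(value)
    return counters

counters = read_io_counters(1234)          # 1234 is a placeholder PID
print('total bytes read (cache or disk):', counters['rchar'])
print('bytes actually read from storage:', counters['read_bytes'])
```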
Install Scrapy on Mac OS X error SSL pip
40,731,300
0
0
490
0
python,macos,scrapy,pip
Temporarily (just for this module), you could manually install it. Download it from wherever you can, extract it if it is zipped then use python setup.py install
0
1
0
0
2016-11-21T21:46:00.000
2
0
false
40,729,995
0
0
1
1
Good, I am currently trying to install Scrapy in my MacOS but everything is problems, the first thing I introduce in terminal is: pip install scrapy And it returns me: You are using pip version 7.0.1, however version 9.0.1 is available. You should consider upgrading via the 'pip install --upgrade pip' command. Requirement already satisfied (use --upgrade to upgrade): scrapy in /usr/local/lib/python2.7/site-packages/Scrapy-1.2.1-py2.7.egg Collecting Twisted>=10.0.0 (from scrapy) Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ConnectTimeoutError(, 'Connection to pypi.python.org timed out. (connect timeout=15)')': /simple/twisted/ Could not find a version that satisfies the requirement Twisted>=10.0.0 (from scrapy) (from versions: ) No matching distribution found for Twisted>=10.0.0 (from scrapy) Seeing the consideration that makes of updating, I realize it ... pip install --upgrade pip And it returns me the following: You are using pip version 7.0.1, however version 9.0.1 is available. You should consider upgrading via the 'pip install --upgrade pip' command. Requirement already up-to-date: pip in /usr/local/lib/python2.7/site-packages/pip-7.0.1-py2.7.egg The truth is that yesterday I was doing a thousand tests and gave me another type of error: "SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed" But this last mistake no longer shows me.
NameError: name 'myconf' is not defined
43,085,576
0
0
845
0
python
I fixed this very same problem as follows: I removed models/menu.py, and then the problem appears to be solved. So remember to delete models/menu.py.
0
1
0
0
2016-11-22T11:43:00.000
2
0
false
40,740,979
0
0
0
1
I am working on web2py image blog. I am unable to understand these errors: Internal error Ticket issued: images/127.0.0.1.2016-11-22.16-44-39.95144250-6a2e-4648-83f9-aa55873e6ae8 Error ticket for "images" Ticket ID 127.0.0.1.2016-11-22.16-44-39.95144250-6a2e-4648-83f9-aa55873e6ae8 type 'exceptions.NameError'> name 'myconf' is not defined Version web2py™ Version 2.14.6-stable+timestamp.2016.05.10.00.21.47 Python Python 2.7.12: /usr/bin/python (prefix: /usr) Traceback (most recent call last): File "/home/sonu/Software/web2py /gluon/restricted.py", line 227, in restricted exec ccode in environment File "/home/sonu/Software/web2py /applications/images/models/menu.py", line 17, in response.meta.author = myconf.get('app.author') NameError: name 'myconf' is not defined
In what circumstances are batch files the right approach?
40,760,885
1
0
85
0
python,batch-file
I'm pretty sure that if I were still doing sysadmin on Windoze systems I would be replacing all but the simplest of BAT files with something -- anything! -- else. Because that particular command language is so awfully deficient in just about everything you need. (Exception handling? Block-structured code? Procedures? Parsing text strings and filenames?) Python is highly readable and "batteries included", so it has to be worthy of consideration as a Windows scripting language. However, Powershell may offer advantages with respect to interfacing to Windows components. It was designed with that in mind. With Python you'd be relying on extra modules which may or may not do all that you need, and might not be as well-supported as Python and its standard libraries. On the other hand, if it's currently done with BAT it's probably not that sophisticated! The main argument against change is "if it ain't broke, don't fix it". So if the BAT scripts do all you need and no changes are needed, then don't make any. Not yet. (Sooner or later a change will be needed, and that is the time to consider scrapping the BAT script and starting again.)
0
1
0
0
2016-11-23T08:17:00.000
2
1.2
true
40,759,163
1
0
0
2
I've inherited various tasks with moderately confusing batch files. I intend to rewrite them as Python scripts, if only so I can thoroughly see what they're doing, as can my successors. I honestly don't see why anyone would have more than a startlingly simple (under five lines; do a, b, c, d, e in order) batch file: proper logic deserves a more modern approach, surely? But am I missing something? Is there a real advantage in still using such an approach? If so, what is it? Clarified in response to comments: Windows .bat files that check for the existence of files in certain places, move them around, and then invoke other programs based on what was found. I guess what I'm really asking is, is it normal good practice to still create batch files for this sort of thing, or is it more usual to follow a different approach?
In what circumstances are batch files the right approach?
40,759,669
2
0
85
0
python,batch-file
I'm not sure this is the correct place for such a question, but anyway... Batch files (with very slight reservations) can be run on every Windows machine since Windows NT (where cmd.exe was introduced). This is especially valuable when you (still) have to deal with old machines running Windows XP or 2003; it is the most portable language between Windows machines. Batch files are very fast, especially compared to PowerShell, and easier to call: WSH is called by default with wscript.exe (whose output appears in a cumbersome pop-up window), PowerShell needs an execution-policy parameter, and by default double clicks won't work on PowerShell scripts. I don't even think you need Python, except if you are aiming at multi-platform scripting (i.e. Macs or Linux machines). Windows comes with other powerful enough languages like VBScript and JScript (since XP), C#, Visual Basic and JScript.NET (since Vista), and PowerShell (since Windows 7). And even when you want multi-platform scripts, the .NET based languages are well worth considering, as Microsoft already offers official support for .NET on Unix. The best choices are probably PowerShell and C#, as Visual Basic and JScript.NET are in maintenance mode, though the JScript options (JScript and JScript.NET) are based on JavaScript, which at the moment is the more popular language, so investing in it will be worth it. By the way, all languages that come packed with Windows by default can be elegantly wrapped into a batch file.
0
1
0
0
2016-11-23T08:17:00.000
2
0.197375
false
40,759,163
1
0
0
2
I've inherited various tasks with moderately confusing batch files. I intend to rewrite them as Python scripts, if only so I can thoroughly see what they're doing, as can my successors. I honestly don't see why anyone would have more than a startlingly simple (under five lines; do a, b, c, d, e in order) batch file: proper logic deserves a more modern approach, surely? But am I missing something? Is there a real advantage in still using such an approach? If so, what is it? Clarified in response to comments: Windows .bat files that check for the existence of files in certain places, move them around, and then invoke other programs based on what was found. I guess what I'm really asking is, is it normal good practice to still create batch files for this sort of thing, or is it more usual to follow a different approach?
Virtualenv: pip not installing Virtualenv in the correct directory
40,764,718
0
0
1,861
0
python,pip,virtualenv,sudo
Try adding /usr/local/share/python to your PATH in /etc/launchd.conf and ~/.bashrc. This might resolve the issue you are facing.
0
1
0
0
2016-11-23T12:06:00.000
3
0
false
40,764,134
1
0
0
1
Whenever I try running virtualenv, it returns command not found. Per recommendations in other posts, I have tried installing virtualenv with both $ pip install virtualenv and $ sudo pip install virtualenv. I have uninstalled and tried again multiple times. I think the issue is that I am using OSX, and pip is installing virtualenv in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages. As I understand, it should be installed in /usr/local/bin/. How can I install virtualenv there?
Which language is best for automating installing installers on windows and linux machines?
40,837,259
0
0
118
0
python,automation
For command-line installations that do not involve user interaction, use a shell script or Python code calling those shell commands. For command-line installations that involve user interaction, use expect scripts or Python's pexpect, which does the same thing (a short pexpect sketch follows this entry). Wizard automation can be done using the Robot class in Java or the SendKeys library in Python, which generate keyboard events for you. To make it more foolproof you can track the installation logs simultaneously, or, to keep track of errors, I would recommend taking a screenshot at each wizard screen, which could help you debug later on. Hope it helps!
0
1
0
0
2016-11-24T10:11:00.000
1
0
false
40,783,510
1
0
0
1
Our company provides installers that need to be installed on Windows and Linux machines. I have to automate this so that the installers get installed by a script, minimizing manual intervention. I have come across Python for this, as it would be a generic solution for both Windows and Linux. AutoIt is specific to Windows so I am ignoring it. Are there other languages whose libraries are strong enough to perform the above task (handle OS dialogs)?
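A minimal pexpect sketch for the interactive command-line case the answer mentions. The installer name, the prompts and the answers are all hypothetical and would need to match the real installer's output.

```python
import pexpect

# Hypothetical interactive installer and prompts; adjust to the real wizard text.
child = pexpect.spawn('./install.sh')
child.expect('Accept the license\\? \\[y/n\\]')
child.sendline('y')
child.expect('Installation directory:')
child.sendline('/opt/myproduct')
child.expect(pexpect.EOF, timeout=600)
print(child.before.decode())   # full transcript, useful for logging and debugging
```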
Integrating GAE Search API with Datatstore
40,816,106
1
0
68
0
python,google-app-engine,google-cloud-datastore,google-search-api
There is no first class support for this, your best bet is to make the document id match the datastore key and route all put/get/search requests through a single DAO/repository tier to ensure some level of consistency. You can use parallel Async writes to keep latency down, but there's not much you can do about search not participating in transactions. It also has no defined consistency, so assume it is eventual, and probably much slower than datastore index propagation.
0
1
0
0
2016-11-25T21:27:00.000
2
1.2
true
40,812,470
0
0
1
2
When a document is stored into both the Cloud datastore and a Search index, is it possible when querying from the Search index, rather than returning the Search index documents, returning each corresponding entity from the Cloud datastore instead? In other words, I essentially want my search query to return what a datastore query would return. More background: When I create an entity in the datastore, I pass the entity id, name, and description parameters. A search document is built so that its doc id is the same as the corresponding entity id. The goal is to create a front-end search implementation that will utilize the full-text search api to retrieve all relevant documents based on the text query. However, I want to return all details of that document, which is stored in the datastore entity. Would the only way to do this be to create a key for each search doc_id returned from the query, and then use get_multi(keys) to retrieve all relevant datastore entities?
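A rough sketch of the pattern described above, assuming the Search document IDs were created to match the datastore entity IDs as stated in the question. It uses the old App Engine Python APIs (search and ndb); the model class, its properties and the index name are placeholders.

```python
from google.appengine.api import search
from google.appengine.ext import ndb

class Item(ndb.Model):               # placeholder model
    name = ndb.StringProperty()
    description = ndb.TextProperty()

def search_items(query_string):
    index = search.Index(name='items')   # placeholder index name
    results = index.search(query_string)
    # doc_id was set to the datastore entity id when the document was indexed
    keys = [ndb.Key(Item, int(doc.doc_id)) for doc in results]
    return ndb.get_multi(keys)           # one batched datastore call
```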
Integrating GAE Search API with Datatstore
40,820,702
0
0
68
0
python,google-app-engine,google-cloud-datastore,google-search-api
You can store any information that you need in the Search API documents, in addition to their text content. This will allow you to retrieve all data in one call at the expense of, possibly, storing some duplicate information both in the Search API documents and in the Datastore entities. Obviously, having duplicate data is not ideal, but it may be a good option for rarely changing data (e.g. document timestamp, author ID, title, etc.) as it can offer a significant performance boost.
0
1
0
0
2016-11-25T21:27:00.000
2
0
false
40,812,470
0
0
1
2
When a document is stored into both the Cloud datastore and a Search index, is it possible when querying from the Search index, rather than returning the Search index documents, returning each corresponding entity from the Cloud datastore instead? In other words, I essentially want my search query to return what a datastore query would return. More background: When I create an entity in the datastore, I pass the entity id, name, and description parameters. A search document is built so that its doc id is the same as the corresponding entity id. The goal is to create a front-end search implementation that will utilize the full-text search api to retrieve all relevant documents based on the text query. However, I want to return all details of that document, which is stored in the datastore entity. Would the only way to do this be to create a key for each search doc_id returned from the query, and then use get_multi(keys) to retrieve all relevant datastore entities?
Running a python program in Windows?
40,829,247
0
0
1,359
0
python,cmd,interpreter
The simplest way would be to just do the following in cmd: C:\path\to\file\test.py. Windows recognizes the file extension and runs it with Python. Or you can change the directory to where the Python program/script is by using the cd command in the command prompt: cd C:\path\to\file, start Python in the terminal, and import the script using the import statement: import test. You do not have to specify the .py file extension. The script will only run once per process, so you'll need to use the reload function to run it again after its first import. You can also make Python run the script from a specific directory: python C:\path\to\file\test.py
0
1
0
0
2016-11-27T12:54:00.000
3
1.2
true
40,829,181
1
0
0
2
I have Python 3.6 and Windows 7. I am able to successfully start the python interpreter in interactive mode, which I have confirmed by going to cmd, and typing in python, so my computer knows how to find the interpreter. I am confused however, as to how to access files from the interpreter. For example, I have a file called test.py (yes, I made sure the correct file extension was used). However, I do not know how to access test.py from the interpreter. Let us say for the sake of argument that the test.py file has been stored in C:\ How then would I access test.py from the interpreter?
Running a python program in Windows?
40,829,271
0
0
1,359
0
python,cmd,interpreter
In command prompt you need to navigate to the file location. In your case it is in C:\ drive, so type: cd C:\ and then proceed to run your program: python test.py or you could do it in one line: python C:\test.py
0
1
0
0
2016-11-27T12:54:00.000
3
0
false
40,829,181
1
0
0
2
I have Python 3.6 and Windows 7. I am able to successfully start the python interpreter in interactive mode, which I have confirmed by going to cmd, and typing in python, so my computer knows how to find the interpreter. I am confused however, as to how to access files from the interpreter. For example, I have a file called test.py (yes, I made sure the correct file extension was used). However, I do not know how to access test.py from the interpreter. Let us say for the sake of argument that the test.py file has been stored in C:\ How then would I access test.py from the interpreter?
Module not found in python after installing in terminal
53,305,576
0
0
5,785
0
python,pip,splinter
I had the same issue, I uninstalled and reinstalled splinter many times but that didn't work. Then I typed source activate (name of my conda environment) and then did pip install splinter. It worked for me.
0
1
1
0
2016-11-27T13:44:00.000
3
0
false
40,829,645
1
0
0
1
This question has been asked a few times, but the remedy appears to be complicated enough that I'm still searching for a user-specific solution. I recently re-installed anaconda; now, after entering "pip install splinter" in the Terminal on my Mac I get the response: "Requirement already satisfied: splinter in /usr/local/lib/python2.7/site-packages Requirement already satisfied: selenium>=2.53.6 in /usr/local/lib/python2.7/site-packages (from splinter)" But, I get the following error in python (Anaconda) after entering import splinter Traceback (most recent call last): File "", line 1, in import splinter ImportError: No module named splinter" When I enter which python in the terminal, this is the output: "/usr/local/bin/python" I am editing the question here to add the solution: ~/anaconda2/bin/pip install splinter
Automatically add command line options to setup.py based on target
40,848,259
0
1
73
0
python,setup.py
Turned out the "correct" way to do it was pretty straight forward and I just missed it when looking in the documentation: Use setup.cfg. It's a standard config-file, where you can define a section for each build-target / kind of distribution (sdist, bdist_wheel, bdist_wininst, etc.), which contains the command line options you want to give to setup.py when building it.
0
1
0
0
2016-11-27T18:08:00.000
2
1.2
true
40,832,168
0
0
0
1
When I run setup.py, I generally want to add different command line options to the call, based on which kind of distribution I'm building. For example I want to add --user-access-control force if I build a windows installer (bdist_wininst). Another example would be omitting the call to a post-install-script when building a source distribution. My current solution would be to create small .bat and .sh scripts with the desired call to setup.py, but that feels somehow wrong. Is there a better way to do what I want, or are my instincts failing me? Edit: Found the correct way. See my answer below.
python3: how to install python3-dev locally?
40,840,607
-1
2
2,282
0
python,python-3.x,ubuntu
I think you have to build it yourself from source... You can easily find a guide for that if you google it.
0
1
0
0
2016-11-28T09:03:00.000
2
-0.099668
false
40,840,480
1
0
0
1
I would like to install my own package in my local dir (I do not have the root privilege). How to install python3-dev locally? I am using ubuntu 16.04.
Config the opencv in python 2.7 of MacOS
40,842,338
0
0
30
0
python,macos,opencv
Copy cv2.so and cv.py to /System/Library/Frameworks/Python.framework/Versions/2.7/lib/. You can find these two files in /usr/local/Cellar/opencv/../lib/python2.7.
0
1
0
0
2016-11-28T09:18:00.000
1
0
false
40,840,738
1
1
0
1
Today I was installing opencv with homebrew install opencv and then I tried to import it with: python, import cv2, and it returns: No module named cv2. However, when I try to import it with: python3, import cv2, it works well. I tried to install opencv again but homebrew said it has already been installed. I don't know what I can do now.
Python 3.6.0b4 amd64 - pywin32-220.win-amd64-py3.6 can't find python 3.6-32
44,041,940
0
1
4,192
0
python,pywin32,python-3.6
Simply rename HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\3.6-32 To: HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\3.6 This worked for Python 3.6.1 as well. Taken from the link above.
0
1
0
0
2016-11-30T01:58:00.000
2
0
false
40,879,007
1
0
0
1
I just intstalled python 3.6.0b4 (default, Nov 22 2016) amd64 on my Win 7 computer. When I try to install pywin32-220.win-amd64-py3.6 I get the error message Python version 3.6-32 required, which was not found in the registry. Python version 3.6-32 sounds like the 32bit version, which seems inappropriate. Perhaps I misunderstand. I've seen posts about a similar problem installing pywin 3.5-32, but none relating to 3.6b4 or the 64 bit version. How do I fix this?
What's Python's equivalent to C's read function?
40,895,004
3
1
1,069
0
python,c
By "Python's read" I assume you mean the read method of file objects. That method is closer in spirit to C's fread: it implements buffering and it tries to satisfy the requested amount, unless that is impossible due to an IO error or end-of-file condition. If you really need to call the read() function available in many C environments, you can call os.read() to invoke the underlying C function. The only difference is that it returns the data read as a byte string, and it raises an exception in the cases when the C function would return -1. If you call os.read(), remember to give it the file descriptor obtained using the fileno method on file objects, or returned by functions in the os module such as os.open, os.pipe, etc. Also remember not to mix calls to os.open() and file.open(), since the latter does buffering and can cause later calls to os.open() not to return the buffered data.
0
1
0
1
2016-11-30T17:47:00.000
1
1.2
true
40,894,943
0
0
0
1
C's read: The read() function shall attempt to read nbyte bytes from the file associated with the open file descriptor, fildes, into the buffer pointed to by buf. Upon successful completion, these functions shall return a non-negative integer indicating the number of bytes actually read. Otherwise, the functions shall return −1 and set errno to indicate the error. Python's read: Read at most n characters from stream. Read from underlying buffer until we have n characters or we hit EOF. If n is negative or omitted, read until EOF. Bold fonts are mine. Basically Python will insist on finding EOF if currently available data is less than buffer size... How to make it simply return whatever is available?
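A small sketch of the os.read() behaviour described in the answer: it returns whatever is currently available (up to the requested size) instead of waiting for the buffer to fill, and it returns an empty bytes object at EOF. The file name is a placeholder; the same pattern works on pipe or socket descriptors.

```python
import os

fd = os.open('data.bin', os.O_RDONLY)   # placeholder file; could be a pipe or socket fd
try:
    while True:
        chunk = os.read(fd, 65536)       # returns at most 64 KiB, possibly fewer bytes
        if not chunk:                    # empty bytes means EOF
            break
        print('got %d bytes' % len(chunk))
finally:
    os.close(fd)
```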
GCloud App needs FTP - do I need a VM or can I create an FTP app?
40,904,184
0
0
50
0
php,python,google-app-engine,ftp
App Engine projects are not based on server virtual machines. App Engine is a platform as a service, not infrastructure as a service. Your code is packaged up and served on Google App Engine in a manner that can scale easily. App Engine is not a drop-in replacement for your old school web hosting, its quite a bit different. That said, FTP is just a mechanism to move files. If your files just need to be processed by a job, you can look at providing an upload for your users where the files end up residing on Google Cloud Storage and then your cron job reads from that location and does any processing that is needed. What results from that processing might result in further considerations. Don't look at FTP being a requirement, but rather a means to moving files and you'll probably have plenty of options.
0
1
0
0
2016-12-01T03:40:00.000
1
1.2
true
40,902,238
0
0
1
1
I'm running a PHP app on GCloud (Google App Engine). This app will require users to submit files for processing via FTP. A python cron job will process them. Given that dev to prod is via the GAE deployment, I'm assuming there is no FTP access to the app folder structure. How would I go about providing simple one-way FTP to my users? Can I deploy a Python project that will be a server? Or do I need to run a VM? I've done some searching which suggests the VM option, but surely there are other options?
How to solve the ImportError: No module named oursql error
40,905,073
0
1
1,134
0
python,macos,pip
Finally got it to work after I symlinked with brew's python; it was not symlinked into /usr/local. The command is simply brew link python, and now which python points to /usr/local/bin/python.
0
1
0
0
2016-12-01T04:17:00.000
2
0
false
40,902,567
1
0
0
1
I used brew to install Python 2.7 and now my Mac has two Python versions: one in /usr/bin/python and another in /usr/local/Cellar/python/2.7.12_2/. pip installed oursql to /usr/local/lib/python2.7/site-packages. What should I do about it?
Linking python virtual environment to eclipse
40,910,909
2
1
1,333
0
python,eclipse,virtualenv,pydev
Not sure... by default, any run will get the 'default' interpreter (which is the first interpreter in Preferences > PyDev > Interpreters > Python interpreter -- you may reorder those using the up/down button in that screen). Now, that's the default, you may also configure to use a different interpreter per project (select project > alt+Enter for its properties > PyDev - Interpreter/Grammar > Interpreter). Or you can choose a different one per launch: Menu > Run > Run Configurations > Select launch > Interpreter. Also, you may want to double check to make sure that the paths in the interpreter configuration window (Preferences > PyDev > Interpreters > Python interpreter > select interpreter) actually map to the proper site-packages/external libs you expect.
0
1
0
0
2016-12-01T09:14:00.000
1
0.379949
false
40,906,584
1
0
0
1
I have Python 2.7 and Python 3.5 installed on my Windows machine, at C:\Python27 and C:\Python35-32. Both are added to the System Path environment variables and can be accessed from any directory. Now I create a virtualenv in the Python35-32 directory successfully, under a sub-directory CODING_LABS. I try to link/point my Eclipse Python interpreter to the python.exe file contained in CODING_LABS. This is done OK. However, when I run my script from Eclipse, it still points to Python27. Unable to figure out why.
Discover path to python version used to make virtualenv
40,916,124
1
0
137
0
python,virtualenv
Since virtualenv copies python completely (including the binary) there is no way to know the exact path it originated from. However, you can easily find the version by running ./python --version inside the environment's bin folder.
0
1
0
0
2016-12-01T16:33:00.000
2
0.099668
false
40,915,789
1
0
0
1
I have an old computer with dozens of old python projects installed, each with its own different virtualenv, and many of which built with different versions of python. I'd prefer not to have to download these different versions when I create new virtualenvs via virtualenv -p whatever path that version of python has My question is: within a virtualenv, is there a command I can run to find the path to the version of python which was used to create that particular environment? For example, if I created a venv with 'virtualenv -p /usr/bin/python3.4' and then ran this command with the venv activated, it would return '/usr/bin/python3.4'
Optimize Async Tornado code. Minimize the thread lock
40,922,571
2
0
148
0
python,multithreading,asynchronous,couchdb,tornado
Use AsyncHTTPClient or CurlAsyncHTTPClient. Since the "requests" library is synchronous, it blocks the Tornado event loop during execution and you can only have one request in progress at a time. To do asynchronous networking operations with Tornado requires purpose-built asynchronous network code, like CurlAsyncHTTPClient. Yes, CurlAsyncHTTPClient is a bit faster than AsyncHTTPClient, you may notice a speedup if you stream large amounts of data with it. async and await are faster than gen.coroutine and yield, so if you have yield statements that are executed very frequently in a tight loop, or if you have deeply nested coroutines that call coroutines, it will be worthwhile to port your code.
0
1
0
0
2016-12-01T20:30:00.000
1
1.2
true
40,919,809
0
0
0
1
How can I minimize the thread lock with Tornado? Actually, I have already the working code, but I suspect that it is not fully asynchronous. I have a really long task. It consists of making several requests to CouchDB to get meta-data and to construct a final link. Then I need to make the last request to CouchDB and stream a file (from 10 MB up to 100 MB). So, the result will be the streaming of a large file to a client. The problem that the server can receive 100 simultaneous requests to download large files and I need not to lock thread and keep recieving new requests (I have to minimize the thread lock). So, I am making several synchronous requests (requests library) and then stream a large file with chunks with AsyncHttpClient. The questions are as follows: 1) Should I use AsyncHTTPClient EVERYWHERE? Since I have some interface it will take quite a lot of time to replace all synchronous requests with asynchronous ones. Is it worth doing it? 2) Should I use tornado.curl_httpclient.CurlAsyncHTTPClient? Will the code run faster (file download, making requests)? 3) I see that Python 3.5 introduced async and theoretically it can be faster. Should I use async or keep using the decorator @gen.coroutine?
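A minimal sketch of the non-blocking download the answer recommends, using AsyncHTTPClient with a streaming_callback so each chunk is forwarded to the client as it arrives, written in the decorator/yield style the question mentions. The CouchDB URL, handler routing and timeout value are placeholders.

```python
from tornado import gen, web
from tornado.httpclient import AsyncHTTPClient, HTTPRequest

class FileHandler(web.RequestHandler):
    @gen.coroutine
    def get(self, doc_id):
        client = AsyncHTTPClient()
        url = 'http://couchdb:5984/files/%s/attachment' % doc_id  # placeholder URL

        def on_chunk(chunk):
            self.write(chunk)
            self.flush()   # push the chunk out without buffering the whole file in memory

        request = HTTPRequest(url, streaming_callback=on_chunk, request_timeout=3600)
        yield client.fetch(request)   # event loop stays free for other requests meanwhile
        self.finish()
```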
External executable crashes when being launched from Python script
40,985,274
0
0
146
0
python,python-2.7,subprocess,external,executable
As I've not had any response I've kind of gone down a different route with this. Rather than relying on the subprocess module to call the exe I have moved that logic out into a batch file. The xmls are still modified by the python script and most of the logic is still handled in script. It's not what ideally would have liked from the program but it will have to do. Thanks to anybody who gave this some thought and tried to at least look for an alternative. Even if nobody answered.
0
1
0
0
2016-12-02T10:50:00.000
1
1.2
true
40,930,450
0
0
0
1
I am currently getting an issue with an external executable crashing when it is launched from a Python script. So far I have tried using various subprocess calls. As well as the more redundant methods such as os.system and os.startfile. Now the exe doesn't have this issue when I call it normally from the command line or by double-clicking on it from the explorer window. I've looked around to see if other people have had a similar problem too. As far as I can tell the closest possible cause of this issue is that the child process unnecessarily hangs due to the I/O exceeding 65K. So I've tried using Popen without PIPES and I have also changed the stdout and stdin to write to temporary files to try and alleviate my problem. But unfortunately none of this has worked. What I eventually want to do is be able to autorun this executable several times with various outputs provided by xmls. Everything else is pretty much in place, including the xml modifications which the executable requires. I have also tested the xml modification portion of the code as a standalone script to make sure that this isn't the issue. Due to the nature of script I am a bit reluctant to put up any actual code up on the net as the company I work for is a bit strict when it comes to showing code. I would ask my colleagues if I could but unfortunately I'm the only one here who actually has used python. Any help would be much appreciated. Thanks.
total memory used by running python code
40,947,523
0
0
494
0
python,memory,memory-management
You can just open the task manager and look at how much RAM it takes. I use Ubuntu and it came preinstalled. (A sketch for doing the same from within Python follows this entry.)
0
1
0
0
2016-12-03T11:46:00.000
3
0
false
40,947,387
1
0
0
1
I am running a python program on a Linux operating system and I want to know how much total memory is used by this process. Is there any way to determine the total memory usage?
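Besides the task manager mentioned above, the process can report its own memory usage from within Python. A small sketch using the standard resource module; on Linux, ru_maxrss is the peak resident set size in kilobytes.

```python
import resource

usage = resource.getrusage(resource.RUSAGE_SELF)
print('peak resident set size: %.1f MB' % (usage.ru_maxrss / 1024.0))
```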
nohup: failed to run command
40,949,085
1
0
1,657
0
python,linux,debian
You gave nohup one single argument containing spaces and quotes, and it failed to find a command with that name. Split it so the command is openvpn, with two more arguments (you'll probably find the extra quotes around the last argument shouldn't be there either). Sometimes this job is left to a shell, as with the system function, but that is in general riskier (similar to SQL injection) and inefficient (running another process for a trivial task).
0
1
0
0
2016-12-03T14:45:00.000
1
1.2
true
40,948,991
0
0
0
1
For some strange reason when I run a python script with: subprocess.Popen(["nohup", "openvpn --config '/usr/local/etc/openvpn/pia_openvpn/AU Melbourne.ovpn'"]) I get nohup: failed to run command ‘openvpn --config '/usr/local/etc/openvpn/pia_openvpn/AU Melbourne.ovpn'’: No such file or directory. I can run openvpn --config "/usr/local/etc/openvpn/pia_openvpn/AU Melbourne.ovpn" with no errors. I've also tried running other commands and get the exact same error.
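A sketch of the argument splitting the answer describes: each argument gets its own list element, and the path containing a space needs no extra quoting because no shell is involved.

```python
import subprocess

# Path taken from the question; note there are no extra quotes around it.
subprocess.Popen([
    'nohup',
    'openvpn',
    '--config',
    '/usr/local/etc/openvpn/pia_openvpn/AU Melbourne.ovpn',
])
```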
Freeze_support what is needed?
40,957,775
1
2
1,868
0
python,python-multiprocessing
Yes, you understand correctly. According to the documentation, this call is only needed to keep the multiprocessing module working when the script is frozen into a Windows executable. (A minimal usage sketch follows this entry.)
0
1
0
0
2016-12-04T10:17:00.000
1
1.2
true
40,957,552
1
0
0
1
Do I understand correctly that multiprocessing.freeze_support() is only needed when compiling a .py script to an .exe on Windows? Or is it used for other things?
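A minimal sketch of where freeze_support() goes, following the documented pattern the answer refers to: it should be the first statement under the __main__ guard, and it is a no-op everywhere except in a frozen Windows executable.

```python
from multiprocessing import Process, freeze_support

def worker():
    print('hello from the child process')

if __name__ == '__main__':
    freeze_support()               # no-op unless running as a frozen Windows exe
    p = Process(target=worker)
    p.start()
    p.join()
```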
Ensuring at most a single instance of job executing on Kubernetes and writing into Postgresql
40,968,608
1
0
1,116
1
python,postgresql,mutex,kubernetes,distributed-system
A completely different approach would be to run a (web) server that executes the job functionality. At a high level, the idea is that the webserver can contact this new job server to execute functionality. In addition, this new job server will have an internal cron to trigger the same functionality every 2 hours. There could be 2 approaches to implementing this: You can put the checking mechanism inside the jobserver code to ensure that even if 2 API calls happen simultaneously to the job server, only one executes, while the other waits. You could use the language platform's locking features to achieve this, or use a message queue. You can put the checking mechanism outside the jobserver code (in the database) to ensure that only one API call executes. Similar to what you suggested. If you use a postgres transaction, you don't have to worry about your job crashing and the value of the lock remaining set. The pros/cons of both approaches are straightforward. The major difference in my mind between 1 & 2, is that if you update the job server code, then you might have a situation where 2 job servers might be running at the same time. This would destroy the isolation property you want. Hence, database might work better, or be more idiomatic in the k8s sense (all servers are stateless so all the k8s goodies work; put any shared state in a database that can handle concurrency). Addressing your ideas, here are my thoughts: Find a setting in k8s that will limit this: k8s will not start things with the same name (in the metadata of the spec). But anything else goes for a job, and k8s will start another job. a) etcd3 supports distributed locking primitives. However, I've never used this and I don't really know what to watch out for. b) postgres lock value should work. Even in case of a job crash, you don't have to worry about the value of the lock remaining set. Querying k8s API server for things that should be atomic is not a good idea like you said. I've used a system that reacts to k8s events (like an annotation change on an object spec), but I've had bugs where my 'operator' suddenly stops getting k8s events and needs to be restarted, or again, if I want to push an update to the event-handler server, then there might be 2 event handlers that exist at the same time. I would recommend sticking with what you are best familiar with. In my case that would be implementing a job-server like k8s deployment that runs as a server and listens to events/API calls.
0
1
0
0
2016-12-04T11:28:00.000
1
1.2
true
40,958,107
0
0
0
1
I have a Python program that I am running as a Job on a Kubernetes cluster every 2 hours. I also have a webserver that starts the job whenever user clicks a button on a page. I need to ensure that at most only one instance of the Job is running on the cluster at any given time. Given that I am using Kubernetes to run the job and connecting to Postgresql from within the job, the solution should somehow leverage these two. I though a bit about it and came with the following ideas: Find a setting in Kubernetes that would set this limit, attempts to start second instance would then fail. I was unable to find this setting. Create a shared lock, or mutex. Disadvantage is that if job crashes, I may not unlock before quitting. Kubernetes is running etcd, maybe I can use that Create a 'lock' table in Postgresql, when new instance connects, it checks if it is the only one running. Use transactions somehow so that one wins and proceeds, while others quit. I have not yet thought this out, but is should work. Query kubernetes API for a label I use on the job, see if there are some instances. This may not be atomic, so more than one instance may slip through. What are the usual solutions to this problem given the platform choice I made? What should I do, so that I don't reinvent the wheel and have something reliable?
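A sketch of the Postgres-based check discussed above, using a session-level advisory lock: the lock is released automatically if the job's connection dies, so a crash cannot leave it set. The connection string, lock key and job body are placeholders.

```python
import sys
import psycopg2

JOB_LOCK_KEY = 42   # arbitrary application-defined lock id (placeholder)

def run_job():
    """Placeholder for the actual 2-hourly work."""
    pass

conn = psycopg2.connect('dbname=jobs user=worker')   # placeholder DSN
cur = conn.cursor()
cur.execute('SELECT pg_try_advisory_lock(%s)', (JOB_LOCK_KEY,))
(got_lock,) = cur.fetchone()

if not got_lock:
    print('another instance is already running, exiting')
    sys.exit(0)
try:
    run_job()
finally:
    # released explicitly here, and released automatically if the session dies
    cur.execute('SELECT pg_advisory_unlock(%s)', (JOB_LOCK_KEY,))
    conn.close()
```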
Yet another confustion about sending/recieving large amount of data over (unix-) socket
40,968,779
2
0
502
0
python,c++,sockets,unix-socket
It sounds like a design flaw that you need to send this much data over the socket to begin-with and that there is this risk of the reader not keeping up with the writer. As an alternative, you may want to consider using a delta-encoding, where you alternate between "key frame"s (whole frames) and multiple frames encoded as deltas from the the prior frame. You may also want to consider writing the data to a local buffer and then, on your UNIX domain socket, implementing a custom protocol that allows reading a sequence of frames starting at a given timestamp or a single frame given a timestamp. If all reads go through such buffer rather than directly from the source, I imagine you could also add additional encoding / compression options in that protocol. Also, if the server application that exports the data to a UNIX socket is a separate application from the one that is reading in the data and writing it to a buffer, you won't need to worry about your data ingestion being blocked by slow readers.
0
1
0
0
2016-12-05T06:51:00.000
2
0.197375
false
40,968,598
0
0
0
1
I have a C++ program which reads frames from a high speed camera and write each frame to a socket (unix socket). Each write is of 4096 bytes. Each frame is roughly 5MB. ( There is no guarantee that frame size would be constant but it is always a multiple of 4096 bytes. ) There is a python script which reads from the socket : 10 * 4096 bytes at each call of recv. Often I get unexpected behavior which I think boils down to understand the following about the sockets. I believe both of my programs are write/recving in blocking mode. Can I write whole frame in one go (write call with 5MB of data)? Is it recommended? Speed is major concern here. If python client fails to read or read slowly than write, does it mean that after some time write operation on socket would not add to buffer? Or, would they overwrite the buffer? If no-one is reading the socket, I'd not mind overwriting the buffer. Ideally, I'd like my application to write to socket as fast as possibly. If no one is reading the data, then overwriting is fine. If someone is reading the data from socket but not reading fast enough, I'd like to store all data in buffer. Then how can I force my socket to increase the buffer size when reading is slow?
Keeping Python Variables between Script Calls
40,969,773
4
1
2,241
0
python,variables,memory,ipc,ram
Make it a (web) microservice: formalize all different CLI arguments as HTTP endpoints and send requests to it from main application.
0
1
0
0
2016-12-05T08:09:00.000
4
0.197375
false
40,969,733
1
0
0
1
I have a python script that needs to load a large file from disk into a variable. This takes a while. The script will be called many times from another application (still unknown), with different options, and the stdout will be used. Is there any possibility to avoid reading the large file for each single call of the script? I guess I could have one large script running in the background that holds the variable. But then, how can I call the script with different options and read the stdout from another application?
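A minimal standard-library sketch of the microservice idea in the answer: load the large file once at startup, then serve each "call" as an HTTP request whose query parameters play the role of the old CLI options. The file name, port, query parameter and processing step are all illustrative.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Loaded once when the server starts, instead of once per script invocation.
with open('big_data_file.txt') as f:          # placeholder file name
    BIG_DATA = f.read()

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        opts = parse_qs(urlparse(self.path).query)   # former CLI options arrive as query params
        keyword = opts.get('keyword', [''])[0]
        result = {'count': BIG_DATA.count(keyword)}  # stand-in for the real processing
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(body)

if __name__ == '__main__':
    HTTPServer(('127.0.0.1', 8000), Handler).serve_forever()
```

The calling application then requests, for example, http://127.0.0.1:8000/?keyword=foo and reads the JSON response instead of reading the script's stdout.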
How to import IBM Bluemix Watson speech-to-text in Choregraphe?
41,919,816
0
0
272
0
python,ibm-cloud,nao-robot
You can add any path to the PYTHONPATH environment variable from within your behavior. However, this has bad side effects, like: If you forget to remove the path from the environment right after importing your module, you won't know anymore where you are importing modules from, since there is only one Python context for the whole NAOqi and all the behaviors. For the same reason (a single Python context), you'll need to restart NAOqi if you change the module you are trying to import.
0
1
0
1
2016-12-06T07:26:00.000
1
0
false
40,989,958
0
0
0
1
Currently, I am doing a project with the Nao robot. I am having a problem with importing a python class file into Choregraphe. Does anyone know how to do this? Error message: [ERROR] behavior.box :init:8 _Behavior__lastUploadedChoregrapheBehaviorbehavior_1271833616__root__RecordSound_3__RecSoundFile_4: ALProxy::ALProxy Can't find service:
Cannot find output folder for .exe file using pyinstaller
56,047,956
2
2
2,898
0
python,pyinstaller
If you set your command directory to the .py script location and run pyinstaller yourscript.py, it will generate folders in the same location as your script. The folder named dist/ will contain the .exe file.
0
1
0
0
2016-12-06T08:52:00.000
2
0.197375
false
40,991,259
1
0
0
1
I am completely new to python and trying to create an application (or .exe) file for python using pyinstaller. I ran the command pyinstaller -[DIRECTORY].py and it saved it to an output directory "C:\Windows\System32\Dist\Foo", however, when I tried to locate the directory it did not seem to exist (Dist). NOTE: I'm trying to convert a .py file to a .exe file in Python 3.5. Thanks for any help :)
For distributing calculation task, which is better celery or spark
41,021,060
2
2
3,004
0
python,apache-spark,celery,distributed,jobs
Adding to the above answer, there are other areas also to identify. Integration with the existing big data stack if you have. Data pipeline for ingestion You mentioned "backend for web application". I assume its for read operation. The response times for any batch application might not be a good fit for any web application. Choice of streaming can help you get the data into the cluster faster. But it will not guarantee the response times needed for web app. You need to look at HBase and Solr(if you are searching). Spark is undoubtedly better and faster than other batch frameworks. In streaming there may be few other. As I mentioned above, you should consider the parameters on which your choice is made.
0
1
0
0
2016-12-07T06:09:00.000
2
0.197375
false
41,010,560
0
1
1
2
Problem: the calculation task can be parallelized easily, but a real-time response is needed. There are two possible approaches: 1. using Celery: run the job in parallel from scratch; 2. using Spark: run the job in parallel with the Spark framework. I think Spark is better from a scalability perspective. But is Spark OK as the backend of a web application?
For distributing calculation task, which is better celery or spark
41,012,633
1
2
3,004
0
python,apache-spark,celery,distributed,jobs
Celery is really a good technology for distributed streaming, and it supports the Python language, which is itself strong in computation and easy to write; streaming applications in Celery support many features as well, with little overhead on the CPU. Spark supports various programming languages (Java, Scala, Python), and it is not pure streaming but micro-batch streaming, as per the Spark documentation. If your task can only be fulfilled by streaming and you don't need SQL-like features, then Celery will be the best choice. But if you need various features along with streaming, then Spark will be better. In that case, consider how many batches of data per second your application will generate.
0
1
0
0
2016-12-07T06:09:00.000
2
1.2
true
41,010,560
0
1
1
2
Problem: the calculation task can be parallelized easily, but a real-time response is needed. There are two possible approaches: 1. using Celery: run the job in parallel from scratch; 2. using Spark: run the job in parallel with the Spark framework. I think Spark is better from a scalability perspective. But is Spark OK as the backend of a web application?
raspberry pi : Auto run GUI on boot
41,022,529
3
2
7,947
0
python,user-interface,terminal,raspberry-pi
Without knowing your Pi setup it's a bit difficult. But with the assumption you're running raspbian with its default "desktop" mode: Open a terminal on your Pi, either by sshing to it or connecting a monitor/keyboard. First we need to allow you to log in automatically, so sudo nano /etc/inittab to open the inittab for editing. Find the line 1:2345:respawn:/sbin/getty 115200 tty1 and change it to #1:2345:respawn:/sbin/getty 115200 tty1. Under that line, add 1:2345:respawn:/bin/login -f pi tty1 </dev/tty1 >/dev/tty1 2>&1. Type Ctrl+O and then Ctrl+X to save and exit. Next, we can edit the rc.local: sudo nano /etc/rc.local. Add a line su -l pi -c startx (replacing pi with the username you want to launch as) above the exit 0 line. This will launch X on startup, which allows other applications to use graphical interfaces. Add the command you'd like to run below the previous line (e.g. python /path/to/mycoolscript.py &), but still above the exit 0 line. Note the & included here. This "forks" the process, allowing other commands to run even if your script hasn't exited yet. Ctrl+O and Ctrl+X again to save and exit. Now when you power on your Pi, it'll automatically log in, start X, and then launch the python script you've written! Also, my program requires an internet connection on execution but pi connects to wifi later and my script executes first and ends with not connecting to the internet. This should be solved in the script itself. Create a simple while loop that checks for internet access, waits, and repeats until the wifi connects (a small sketch of such a loop follows this entry).
0
1
0
1
2016-12-07T15:17:00.000
2
0.291313
false
41,021,109
0
0
0
1
I want to run a python script which launches a GUI on startup (as the Pi boots up). But I don't see any GUI on screen; however, when I open a terminal my program executes automatically and the GUI appears. Also, my program requires an internet connection on execution, but the Pi connects to wifi later, so my script executes first and ends without connecting to the internet. Is there any way to have my python script execute after the Pi has booted up properly and is connected to the internet?
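For the wifi timing issue, the answer suggests having the script wait until the network is up before doing anything that needs it. A small sketch of that loop; the host and port used for the check are an arbitrary public DNS server and can be replaced with whatever service the script actually talks to.

```python
import socket
import time

def wait_for_network(host='8.8.8.8', port=53, timeout=3.0, retry_delay=5.0):
    """Block until a TCP connection to host:port succeeds."""
    while True:
        try:
            socket.create_connection((host, port), timeout=timeout).close()
            return
        except OSError:
            time.sleep(retry_delay)

wait_for_network()
# ... now start the GUI / network-dependent part of the program ...
```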
Add more Python libraries
41,116,628
2
2
360
0
python,azure-data-lake,u-sql
Assuming the libs work with the deployed Python runtime, try to upload the libraries into a location in ADLS and then use DEPLOY RESOURCE "path to lib"; in your script. I haven't tried it, but it should work.
0
1
0
0
2016-12-08T17:35:00.000
2
0.197375
false
41,045,491
0
1
0
1
Is it or will it be possible to add more Python libraries than pandas, numpy and numexpr to Azure Data Lake Analytics? Specifically, we need to use xarray, matplotlib, Basemap, pyresample and SciPy when processing NetCDF files using U-SQL.
Google App Engine SDK path in linux for pycharm?
41,059,077
3
2
1,232
0
python,google-app-engine,pycharm,google-cloud-platform
The correct path to use is the platform/google_appengine/ directory within the google-cloud-sdk installation directory.
0
1
0
0
2016-12-09T11:09:00.000
2
1.2
true
41,059,076
0
0
1
1
While configuring pycharm (professional) for Google App engine it asks for App Engine SDK path, while google now gives everything bundled in a Google cloud SDK. On choosing the cloud SDK directory, pycharm is saying it's invalid. What is the correct path for Google App engine SDK?
Make message sending to RabbitMQ atomic
41,069,086
0
0
134
0
python,python-3.x,rabbitmq
Yes, there is: wrap all the messages sent in response to a request in a transaction. (A sketch follows this entry.)
0
1
0
1
2016-12-09T15:59:00.000
1
0
false
41,064,321
0
0
0
1
I am using a python script (3.5.2) and a RabbitMQ worker queue to process data. There is a queue that is filled with user requests of an external system. These user requests will be processed by my python script, each user request results in several output messages. I use the acknoledge functionality to ensure that the incoming message will be deleted only after processing it. This ensures that the message will be reassigned if the worker occasionally dies. But if the worker dies during sending out messages it could be possible that some messages of this user request are already sent to the queue and others wont be sent. Is there a way to send several messages atomically, i. e. sent all messages or none?
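A sketch of the transaction wrapping the answer describes, using pika's channel transactions: either all messages produced for one incoming request become visible, or none do. The queue name and message list are placeholders.

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='results')        # placeholder output queue

def publish_results_atomically(messages):
    channel.tx_select()                       # put the channel into transactional mode
    try:
        for body in messages:
            channel.basic_publish(exchange='', routing_key='results', body=body)
        channel.tx_commit()                   # all messages become visible together
    except Exception:
        channel.tx_rollback()                 # none of them are delivered
        raise

publish_results_atomically([b'part 1', b'part 2', b'part 3'])
```

In the worker described in the question, the incoming request message would then be acknowledged only after tx_commit() succeeds.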
Avoid typing "python" in a terminal to open a .py script?
41,082,125
1
2
279
0
python,linux,terminal
You will need to chmod 0755 script.py and, as the first line in the script, have something like #!/usr/bin/python (a complete example follows this entry).
0
1
0
0
2016-12-11T01:26:00.000
3
0.066568
false
41,082,106
0
0
0
1
Whilst running python scripts from my linux terminal, I find myself typing in python myfile.py way too much. Is there a way on linux (or windows) to execute a python script by just entering in the name of the script, as is possible with bash/sh? like ./script.py?
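A minimal example of the answer above: a script whose first line is the shebang, which can then be launched as ./script.py once it has been marked executable. The file name and the print line are illustrative assumptions.

#!/usr/bin/python
# Save as script.py, then make it executable once:   chmod 0755 script.py
# After that it can be run directly:                 ./script.py
print("running without typing 'python' first")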
UniCurses pdcurses.dll Error
49,979,004
1
3
898
0
python,pdcurses,unicurses
To allow import, pdcurses.dll needs to be located in the python folder, for example C:\python36. To run a python script which imports and executes unicurses modules, the pdcurses.dll needs to be located in the same folder as the python script you are executing, so it needs to be located in 2 places.
0
1
0
0
2016-12-12T21:25:00.000
1
0.197375
false
41,109,851
1
0
0
1
When trying to import the UniCurses package, I receive the error "UniCurses initialization error - pdcurses.dll not found." I have downloaded the pdcurses distributions (specifically pdc34dllw.zip) and extracted the files to: *\Python\Lib\site-packages (Where unicurses.py is located) *\Python\Lib\site-packages\unicurses *\Python None of these have solved the problem.
Writing data to remote VPS database
41,165,424
0
0
107
1
python,postgresql,pandas,vps
After investigating countless possible solutions: Creating a tunnel to forward a port from my local machine to the server so it can access the 3rd party app. modifying all my python code to manually insert the data from my local machine to the server using psycopg2 instead of pandas to_sql Creating a docker container for the 3rd party app that can be run on the server and several other dead ends or convoluted less than ideal solutions In the end, the solution was to simply install the 3rd party app on the server using wine but then ssh into it using the -X flag. I can therefore access the gui on my local machine while it is running on the server.
0
1
0
0
2016-12-13T09:05:00.000
1
1.2
true
41,117,150
0
0
0
1
I have an issue which may have two possible approaches to getting a solution, im open to either. I use a 3rd party application to download data daily into pandas dataframes, which I then write into a local postgres database. The dataframes are large, but since the database is local I simply use df.to_sql and it completes in a matter of seconds. The problem is that now I have moved the database to a remote linux server (VPS). The same to_sql now takes over an hour. I have tried various values for chunksize but that doesn't help much. This wouldn't be an issue if I could simply install the 3rd party app on that remote server, but the server OS does not use a GUI. Is there a way to run that 3rd party app on the server even though it requires a GUI? (note: it is a Windows app so I use wine to run it on my local linux machine and would presumably need to do that on the server as well). If there is no way to run that app which requires a GUI on the VPS, then how should I go about writing these dataframes to the VPS from my local machine in a way that doesn't take over an hour? Im hoping there's some way to write the dataframes in smaller pieces or using something other than to_sql more suited to this. A really clunky, inelegant solution would be to write the dataframes to csv files, upload them to the server using ftp, then run a separate python script on the server to save the data to the db. I guess that would work but it's certainly not ideal.
Linux - Open chromium from a script that runs on startup
41,139,324
1
0
227
0
python,linux,bash,chromium
In your script that runs on startup try DISPLAY=:0 <command> & To clarify DISPLAY=:0 simply sets which monitor your window opens on with 0 representing the first monitor of the local machine.
0
1
0
0
2016-12-14T09:33:00.000
1
1.2
true
41,139,136
0
0
0
1
I have a bash script that I've defined to run in startup, which runs a python script that waits for a command from another process, and when it gets it, it should open a chromium window with a certain URL. When I run this script manually it works fine, but when the script runs from startup, I get an error (displayed in syslog): Gtk: Can't open display I guess that's because it's running in a startup mode so it doesn't actually have a display to "lean" on... I was wondering if there's any way to get this work, anyway? Thanks in advance
How to display list of running processes Python?
47,796,696
12
49
121,162
0
python,linux,centos
You could also set up a "watch" in a separate window to constantly monitor Python processes as you run a script: watch -n 1 "ps u -C python3". Particularly useful when developing with multiprocessing. (For doing the same from inside Python, see the hedged sketch after this entry.)
0
1
0
0
2016-12-14T19:48:00.000
4
1
false
41,150,975
0
0
0
1
How to display list of running processes Python with full name and active status? I tried this command: pgrep -lf python
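A hedged Python alternative to pgrep/watch, assuming the third-party psutil package is installed (pip install psutil); it is not part of the standard library, and the attribute names follow psutil's documented process_iter() API.

import psutil

# Print pid, status and command line of every process whose name contains "python"
for proc in psutil.process_iter(["pid", "name", "status", "cmdline"]):
    info = proc.info
    if info["name"] and "python" in info["name"].lower():
        print(info["pid"], info["status"], " ".join(info["cmdline"] or []))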
Removing old python packages on macos
41,184,494
0
0
107
0
python-2.7,homebrew,macos-sierra
Go to this folder and delete the packages that you want to remove: C:\Python27\Scripts. Thank you.
0
1
0
0
2016-12-15T21:17:00.000
1
0
false
41,173,467
0
0
0
1
I have discovered that I have some old python packages installed in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python and want to remove these. Seems impossible to remove them however, tried "sudo rm -rf" etc but get permission errors. I general I have a working "homebrew" installation and need to get rid of the packages. How to go about it?
How do I copy files from windows system to any other remote server from python script?
41,180,235
1
2
5,431
0
python,windows
Server: python -m http.server - this will create an HTTP server on port 8000. Client: python -c "import urllib; urllib.urlretrieve('http://x.x.x.x:8000/filename', 'filename')" where x.x.x.x is your server IP and filename is what you want to download. (A hedged Python 3 version of the client side follows this entry.)
0
1
0
0
2016-12-16T05:03:00.000
3
0.066568
false
41,177,567
0
0
0
1
I don't want to use external modules like paramiko or fabric. Is there any python built in module through which we can transfer files from windows. I know for linux scp command is there like this is there any command for windows ?
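A hedged sketch of the client side of the answer above, written for Python 3, where urlretrieve lives in urllib.request rather than urllib; the host address, port and file name are illustrative assumptions.

from urllib.request import urlretrieve

# On the remote Windows machine the server was started with:  python -m http.server
remote_url = "http://192.168.1.10:8000/report.csv"   # hypothetical server and file
local_path = "report.csv"

urlretrieve(remote_url, local_path)   # downloads the file over plain HTTP
print("saved", local_path)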
Google Cloud Dataflow consume external source
41,229,251
4
0
614
0
python,etl,google-cloud-dataflow
Yes, this can absolutely be done. Right now, it's a little klutzy at the beginning, but upcoming work on a new primitive called SplittableDoFn should make this pattern much easier in the future. Start by using Create to make a dummy PCollection with a single element. Process that PCollection with a DoFn that downloads the file, reads out the subfiles, and emits those. [Optional] At this point, you'll likely want work to proceed in parallel. To allow the system to easily parallelize, you'll want to do a semantically unnecessary GroupByKey followed by a ParDo to 'undo' the grouping. This materializes these filenames into temporary storage, allowing the system to have different workers process each element. Process each subfile by reading its contents and emit into PCollections. If you want different file contents to be processed differently, use Partition to sort them into different PCollections. Do the relevant processing. (A hedged Beam/Dataflow Python sketch of these steps follows this entry.)
0
1
1
0
2016-12-18T19:55:00.000
1
1.2
true
41,212,272
0
0
0
1
So I am having a bit of a issue with the concepts behind Dataflow. Especially regarding the way the pipelines are supposed to be structured. I am trying to consume an external API that delivers an index XML file with links to separate XML files. Once I have the contents of all the XML files I need to split those up into separate PCollections so additional PTransforms can be done. It is hard to wrap my head around the fact that the first xml file needs to be downloaded and read, before the product XML's can be downloaded and read. As the documentation states that a pipeline starts with a Source and ends with a Sink. So my questions are: Is Dataflow even the right tool for this kind of task? Is a custom Source meant to incorporate this whole process, or is it supposed to be done in separate steps/pipelines? Is it ok to handle this in a pipeline and let another pipeline read the files? How would a high-level overview of this process look like? Things to note: I am using the Python SDK for this, but that probably isn't really relevant as this is more a architectural problem.
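A hedged sketch of the steps described above, using the current apache_beam package naming for the Python SDK (the 0.4.3 SDK mentioned in the question used different module names). parse_index is a hypothetical helper standing in for whatever XML parsing the index file needs; the index URL is also an assumption.

import urllib.request

import apache_beam as beam


def list_subfiles(index_url):
    # Download the index XML and yield the URLs of the product XML files.
    xml = urllib.request.urlopen(index_url).read()
    for url in parse_index(xml):          # parse_index: hypothetical, application-specific
        yield url


def read_subfile(url):
    yield urllib.request.urlopen(url).read()


with beam.Pipeline() as p:
    subfiles = (
        p
        | "Seed" >> beam.Create(["http://example.com/index.xml"])   # dummy single element
        | "ListSubfiles" >> beam.FlatMap(list_subfiles)
        # Semantically unnecessary group/ungroup so the system can parallelize per file
        | "AddKey" >> beam.Map(lambda url: (url, None))
        | "Group" >> beam.GroupByKey()
        | "DropKey" >> beam.Map(lambda kv: kv[0])
        | "ReadSubfiles" >> beam.FlatMap(read_subfile)
    )
    # further PTransforms / beam.Partition on `subfiles` go here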
I am having issues adding Python folder to my path
41,230,657
0
0
106
0
macos,python-2.7,amazon-elastic-beanstalk
The tilde character wasn't being expanded within the double-quoted string. If you had tried to execute "~/Library/Python/2.7/bin/eb" --version in your second example it wouldn't have worked either. You could have set your path using something like export PATH="/Users/peter/Library/Python/2.7/bin:$PATH", or potentially export PATH=~/"Library/Python/2.7/bin:$PATH" (notice the tilde is outside the double-quotes.) I'd prefer the former, however.
0
1
0
1
2016-12-19T17:31:00.000
1
1.2
true
41,227,982
0
0
0
1
I am trying to add the Python 2.7 bin folder to my path in order to run elastic beanstalk. Here is some output from my Terminal: ➜ ~ echo $PATH ~/Library/Python/2.7/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin ➜ ~ ~/Library/Python/2.7/bin/eb --version EB CLI 3.8.9 (Python 2.7.1) ➜ ~ eb --version zsh: command not found: eb And here is my export statement in .zshrc: export PATH="/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin" export PATH="~/Library/Python/2.7/bin:$PATH" Can anyone tell me what's wrong? EB seems to be installed fine, and the path variable seems to point to it.
Accessing Google App Engine Python App code in production
41,268,471
1
5
1,805
0
google-app-engine,google-app-engine-python
If you are using the standard environment, the answer is no, you can't really inspect or see the code directly. You've mentioned looking at it via Stackdriver Debugger, which is one way to see a representation of it. It sounds like if you have a reason to be looking at the code, then someone in your organization should grant you the appropriate level of access to your source code management system. I'd imagine if your deployment practices are mature, then they'd likely branch the code to map to your deployed versions and you could inspect it in detail locally.
0
1
0
0
2016-12-21T00:05:00.000
2
0.099668
false
41,253,187
0
0
1
1
(Background: I am new to Google App Engine, familiar with other cloud providers' services) I am looking for access/view similar to shell access to production node. With a Python/Django based Google App Engine App, I would like to view the code in production. One view I could find is the StackDriver 'Debug' view. However, apparently the code shown in the Debug view doesn't reflect the updated production code (based on what is showing on the production site, for example, the text on the home page different). Does Google App Engine allow me to ssh into the VM where the application/code is running? If not, how can check the code that's running in production? Thanks.
How to add passenv to tox.ini without editing the file but by running tox in virtualenv shell nature script in Jenkins behind proxy (python)
41,262,440
2
2
1,397
0
python,unit-testing,jenkins,environment-variables,tox
I thought about a workaround: Create a build step in Jenkins job, that will execute bash script, that will open the tox.ini find line [testenv] and input one line below passenv = HTTP_PROXY HTTPS_PROXY. That would solve the problem. I am working on this right now but anyway if You guys know a better solution please let me know. cheers Ok so this is the solution: Add a build step Execute shell Input this: sed -i.bak '/\[testenv\]/a passenv = HTTP_PROXY HTTPS_PROXY' tox.ini This will update the tox.ini file (input the desired passenv line under [testenv] and save changes). And create a tox.ini.bak backup file with the original data before sed's change.
0
1
0
1
2016-12-21T09:30:00.000
1
1.2
true
41,259,308
0
0
0
1
I am trying to run python unit tests in jenkins using tox's virtualenv. I am behind a proxy so I need to pass HTTP_PROXY and HTTPS_PROXY to tox, else it has problems with downloading stuff. I found out that I can edit tox.ini and add passenv=HTTP_PROXY HTTPS_PROXY under [testenv], and than using the Create/Update Text File Plugin I can override the tox.ini(as a build step) whenever Jenkins job fetches the original file from repository. This way I can manually copy content of tox.ini from workspace, add the passenv= line below [testenv] and update the file with the plugin mentioned above. But this is not the proper solution. I don't want to edit the tox.ini file this way, because the file is constantly updated. Using this solution would force me to update the tox.ini content inside the plugin everytime it is changed on the git repository and I want the process of running unit tests to be fully automated. And no, I can't edit the original file on git repository. So is there a way that I can pass the passenv = HTTP_PROXY HTTPS_PROXY in the Shell nature command? This is how my command in Virtualenv Builder looks like: pip install -r requirements.txt -r test-requirements.txt pip install tox tox --skip-missing-interpreter module/tests/ I want to do something like this: tox --skip-missing-interpreter --[testenv]passenv=HTTP_PROXY HTTPS_PROXY module/tests How to solve this? NOTE: I think there might be a solution with using the {posargs}, but I see that there is a line in the original tox.ini containing that posargs already: python setup.py testr --testr-args='{posargs}' help...
"Python-Eggs is writable by group/others" Error in Windows 7
41,288,887
0
0
400
0
python,trac,bitnami
I found out that there was another service running on the 8080 port that I had setup trac on and that was causing the trouble. The error in the logs was not pointing to that as being the issue however.
0
1
0
1
2016-12-22T16:08:00.000
1
1.2
true
41,287,312
0
0
0
1
I installed trac using BitNami the other day and after restarting my computer I'm not able to get it running as a service today. I see in the error log this error [Fri Dec 02 08:52:40.565865 2016] [:error] [pid 4052:tid 968] C:\Bitnami\trac-1.0.13-0\python\lib\site-packages\setuptools-7.0-py2.7.egg\pkg_resources.py:1045: UserWarning: C:\WINDOWS\system32\config\systemprofile\AppData\Roaming\Python-Eggs is writable by group/others and vulnerable to attack when used with get_resource_filename. Consider a more secure location (set with .set_extraction_path or the PYTHON_EGG_CACHE environment variable). Everyone's suggestion is to move the folder path PYTHON_EGG_CACHE to the C:\egg folder or to suppress the warning at the command line. I've already set the PYTHON_EGG_CACHE for the system, I set it in trac's setenv.bat file, and in the trac.wsgi file but it's not picking up on the changes when I try to start the service. Alternately I can't change the permissions on the folder in Roaming using chmod like in Linux, and I can't remove any more permissions on the folder in Roaming (myself, Administrators, System) as Corporate IT doesn't allow for Administrators to be removed and this isn't an unreasonable policy.
Job hangs forever with no logs
43,789,865
0
0
88
0
python,google-cloud-dataflow
As determined by thylong and jkff: The extra_package was binary-incompatible with Dataflow's packages. The requirements.txt in the root directory and the one in the extra_package were different, causing exec.go in the Dataflow container to fail again and again. To fix this, we recreated the venv with the same frozen dependencies.
0
1
0
0
2016-12-22T17:45:00.000
1
1.2
true
41,289,031
0
0
0
1
With the Python SDK, the job seems to hang forever (I have to kill it manually at some point) if I use the extra_package option to use a custom ParDo. Here is a job id for example : 2016-12-22_09_26_08-4077318648651073003 No explicit logs or errors are thrown... I noticed that It seems related to the extra_package option because if I use this option without actually triggering the ParDo (code commented), it doesn't work either. The initial Bq query with a simple output schema and no transform steps works. Did it happen to someone ? P.S : I'm using the DataFlow 0.4.3 version. I tested inside a venv and it seems to work with a DirectPipelineRunner
How to run python script in windows backround?
41,376,619
-1
2
4,219
0
python,windows,python-2.7,python-3.x,background
try to spin up an AWS instance and run it on a more reliable server. Or you can look into hadoop to process the code across multiple fail-safe servers
0
1
0
0
2016-12-29T07:42:00.000
3
-0.066568
false
41,375,247
0
0
0
1
I have a python script running on Windows Server 2008 from the command line. I don't need any interaction while the script is running. The script runs for about a week, so if the server disconnects my session for some reason, my script stops and I have to start over and over again. It is a huge problem for me and I don't know how to solve it. Here is my question: how do I run a python script in the background on a Windows server, even when the user disconnects from the server? Thanks in advance for your help.
How to solve [Errno 11] Resource temporarily unavailable using uwsgi + nginx
44,000,160
0
0
1,657
0
python,django,uwsgi
You can increase 'listen' value in uwsgi configure file. The default value is 100 which is too small.
0
1
0
0
2016-12-29T09:46:00.000
2
0
false
41,377,059
0
0
1
1
I am using uwsgi with this configuration : net.core.somaxconn = 1024 net.core.netdev_max_backlog=1000 I got resource temporarily unavailable issue. How to resolve this issue? df -h Filesystem Size Used Avail Use% Mounted on /dev/xvda1 7.8G 2.1G 5.6G 28% / devtmpfs 1.9G 12K 1.9G 1% /dev tmpfs 1.9G 16K 1.9G 1% /dev/shm
Flask with mod_wsgi - Cannot call my modules
41,394,185
0
0
1,008
0
python,flask,undefined,global,mod-wsgi
Not calling a main function with mod_wsgi was the right answer. I did not import my required modules in the wsgi file, but at the top of the Flask app.
0
1
0
0
2016-12-29T14:32:00.000
1
0
false
41,381,705
0
0
1
1
I have changed my application running with flask and python2.7 from a standalone solution to flask with apache and mod_wsgi. My Flask app (app.py) includes some classes which are in the directory below my app dir (../). Here is my app.wsgi: #!/usr/bin/python import sys import logging logging.basicConfig(stream=sys.stderr) sys.stdout = sys.stderr project_home = '/opt/appdir/Application/myapp' project_web = '/opt/appdir/Application/myapp/web' if project_home not in sys.path: sys.path = [project_home] + sys.path if project_web not in sys.path: sys.path = [project_web] + sys.path from app import app application = app Before my configuration to mod_wsgi my main call in the app.py looks like that: # Main if __name__ == '__main__' : from os import sys, path sys.path.append(path.dirname(path.dirname(path.abspath(__file__)))) from logger import Logger from main import Main from configReader import ConfigReader print "Calling flask" from threadhandler import ThreadHandler ca = ConfigReader() app.run(host="0.0.0.0", threaded=True) I was perfectly able to load my classes in the directory below. After running the app with mod_wsgi I get the following error: global name \'Main\' is not defined So how do I have to change my app that this here would work: @app.route("/") def test(): main = Main("test") return main.responseMessage()
Eric IDE: How do I change the shell from python3 to python2?
41,398,603
0
1
1,575
0
python,python-2.7,shell,python-3.x,ide
Usually python2 interpreter is opened with the python command, and the python3 interpreter is opened with the python3 command. On linux, you may want to put #!/usr/bin/env python at the top of your code.
0
1
0
0
2016-12-30T14:53:00.000
2
0
false
41,398,166
1
0
0
1
Hey I'm just starting out with Eric6. Is it possible to change the shell to use python 2.* instead of python3? can't find anything related to that in the preferences? thanks
Python: How to write to a file while it is open in the OS
41,494,452
1
3
1,049
0
python,file,python-3.x,io
The best solution I've found is to make a copy of the file and then open the copy if you wish to view the contents of the file while it is being written to. It is easy to make a copy of a file programmatically if you wish to automate the process. If one wishes to implement a feature where the user can see the file as it is updated in real time, it is best to communicate the data to a receiver through a separate channel, possibly by sockets or simply to stdout. (A hedged sketch of the copy-then-open approach follows this entry.)
0
1
0
0
2016-12-31T03:04:00.000
1
1.2
true
41,405,062
0
0
0
1
I am writing a utility in Python that may require a user to open a file while the Python program is writing to it. The file is open as so in the Python program: CSV = open(test.csv, "a") When the Python program is running and the user double clicks test.csv in the OS, the Python program halts with the following error: PermissionError: [Errno 13] Permission denied: 'test.csv' I understand why this happens. I am wondering if there is something I can do so that the Python program can still write to the file while the file (a read-only copy perhaps) is open. I've noticed in the past that Perl programs, for example, still write to a file while it is open, so I know that this is programmatically possible. Can this be done in Python?
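A hedged sketch of the copy-then-open approach described in the answer above; the file names are illustrative assumptions.

import shutil

LIVE_FILE = "test.csv"            # file the long-running writer keeps open in append mode
SNAPSHOT = "test_snapshot.csv"    # read-only copy the user may open freely

# Take a point-in-time copy; opening the copy (e.g. by double-clicking it) does not
# hold a lock on the file that the writer is still appending to.
shutil.copyfile(LIVE_FILE, SNAPSHOT)

with open(SNAPSHOT, newline="") as f:
    for line in f:
        print(line.rstrip())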
MBCS to UTF-8: How to encode in Python
61,595,144
1
0
6,389
0
windows,python-3.x,python-3.5,mbcs
Just change the encoding to 'latin-1' (encoding='latin-1'). Using pure Python: open(..., encoding='latin-1'). Using Pandas: pd.read_csv(..., encoding='latin-1'). (A hedged sketch of re-encoding the file to UTF-8 follows this entry.)
0
1
0
0
2017-01-02T07:03:00.000
2
0.099668
false
41,422,606
0
0
0
1
I am trying to create a duplicate file finder for Windows. My program works well in Linux. But it writes NUL characters to the log file in Windows. This is due to the MBCS default file system encoding of Windows, while the file system encoding in Linux is UTF-8. How can I convert MBCS to UTF-8 to avoid this error?
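A hedged sketch of converting the log file from the Windows 'mbcs' codec to UTF-8, which is closer to what the question asks than the latin-1 workaround above; the file names are illustrative assumptions and the 'mbcs' codec only exists on Windows.

# Re-encode a file written with the Windows ANSI code page as UTF-8.
src = "duplicates_log.txt"        # hypothetical log originally written with encoding="mbcs"
dst = "duplicates_log_utf8.txt"

with open(src, encoding="mbcs", errors="replace") as fin, \
     open(dst, "w", encoding="utf-8") as fout:
    fout.write(fin.read())

# Better still: pass encoding="utf-8" to open() when the log is first created,
# so no conversion is needed afterwards.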
Jenkins showing fopen: No such file or directory, but file exists
41,439,647
0
0
697
0
python,file,jenkins,path
maybe you can use the $WORKSPACE var to have a full path for the fopen command.
0
1
0
0
2017-01-02T12:25:00.000
1
0
false
41,426,805
0
0
0
1
I'm trying to open a file from a python script which am running from jenkins. Both the files (which am trying to open and the python script) are in same location. But when am running the script, am getting error, fopen: No such file or directory am running export PATH="/file path:$PATH" in jenkins before running my script. But still am getting the fopen error. Am able to run the script from terminal
Freezing Python 3.6
41,429,495
2
1
636
0
pyinstaller,cx-freeze,python-3.6
The bytecode format changed for Python 3.6 but I just pushed a change to cx_Freeze that adds support for it. You can compile it yourself or wait for the next release -- which should be sometime this week.
0
1
0
0
2017-01-02T14:07:00.000
1
1.2
true
41,428,357
1
0
0
1
To utilize the inherent UTF-8 support for windows console, I wanted to freeze my script in python 3.6, but I'm unable to find any. Am I missing something, or none of the freezing modules updated for 3.6 yet? Otherwise I'll just keep a 3.5.2 frozen version and a 3.6 script version for computers with English consoles. Thanks.
Install pip on Mac for Python3 with Python2 already installed
41,472,744
1
0
984
0
python,python-2.7,python-3.x
You'll have to specify the Python 3 version of easy_install. The easiest way to do this is to give its full path on the command line. It should be in the executable directory of the Python 3 installation you did (i.e. the same directory as the Python 3 interpreter itself). You should not remove the system-installed Python 2 in an attempt to get easy_install to refer to Python 3, because the operating system relies on that version of Python being installed.
0
1
0
0
2017-01-04T20:29:00.000
1
0.197375
false
41,472,689
1
0
0
1
I have installed python3 on Mac and I am trying to install pip. While installing pip with command sudo easy_install pip it installs the pip for python 2.x which by default comes with Mac. Is there any way I can install pip for python3? Also, is it necessary to keep the older version of python installed as well?
Troubleshooting API timeout from Django+Celery in Docker Container
41,668,121
0
1
472
0
python,django,docker,containers,celery
You can shell into the running container and check things out. Is the celery process still running, etc... docker exec -ti my-container-name /bin/bash If you are using django, for example, you could go to your django directory and do manage.py shell and start poking around there. I have a similar setup where I run multiple web services using django/celery/celerybeat/nginx/... However, as a rule I run one process per container (kind of exception is django and gunicorn run in same container). I then share things by using --volumes-from. For example, the gunicorn app writes to a .sock file, and the container has its own nginx config; the nginx container does a --volumes-from the django container to get this info. That way, I can use a stock nginx container for all of my web services. Another handy thing for debugging is to log to stdout and use docker's log driver (splunk, logstash, etc.) for production, but have it log to the container when debugging. That way you can get a lot of information from 'docker logs' when you've got it under test. One of the great things about docker is you can take the exact code that is failing in production and run it under the microscope to debug it.
0
1
0
0
2017-01-05T12:33:00.000
1
0
false
41,485,251
0
0
1
1
I have a micro-services architecture of let say 9 services, each one running in its own container. The services use a mix of technologies, but mainly Django, Celery (with a Redis Queue), a shared PostgreSQL database (in its own container), and some more specific services/libraries. The micro-services talk to each other through REST API. The problem is that, sometimes in a random way, some containers API doesn't respond anymore and get stuck. When I issue a curl request on their interface I get a timeout. At that moment, all the other containers answer well. There is two stucking containers. What I noticed is that both of the blocking containers use: Django django-rest-framework Celery django-celery An embedded Redis as a Celery broker An access to a PostgreSQL DB that stands in another container I can't figure out how to troubleshoot the problem since no relevant information is visible in the Services or Docker logs. The problem is that these API's are stuck only at random moments. To make it work again, I need to stop the blocking container, and start it again. I was wondering if it could be a python GIL problem, but I don't know how to check this hypothesis... Any idea about how to troubleshot this?
Temporary failure in name resolution -wget in linux
41,500,665
0
1
5,012
0
python,linux,api,wget
This is tricky without knowing which options you are calling wget with, and with no log output; but since it seems to be a DNS issue I would explicitly pass --dns-servers=your.most.reliable.server to wget. If it persists I would also pass --append-output=logfile and examine logfile for further clues. (A hedged Python-side retry sketch follows this entry.)
0
1
1
0
2017-01-06T06:49:00.000
2
0
false
41,500,455
0
0
1
2
I'm running a python automation solution on Linux. As part of the test I'm calling different APIs (REST) and connecting to my SQL db. I'm running the solution 24/7. The solution does: call an API with wget; every 1 min sample the db with a query, for 60 min max; call the API again with wget; every 1 min sample the db, for 10 mins max. This scenario runs 24/7. The problem is that after 1 hr/2 hr (inconsistently - it can happen after 45 mins for instance) the solution exits with the error "Temporary failure in name resolution". It can happen even after 2 perfect cycles as described above. After this failure I try to call with wget tens of times and it ends with the same error. After some time it recovers by itself. I want to mention that when it fails with wget on Linux, I'm able to call the API via Postman on Windows with no problem. The API calls are to our system (located in AWS) and I'm using the DNS name of our ELB. What could be the cause of this inconsistency? Thanks
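Since the automation is driven from Python anyway, a hedged sketch of retrying the API call when name resolution fails transiently, using the standard library instead of shelling out to wget; the URL, the 30-second delay and the 5-attempt limit are illustrative assumptions.

import time
import urllib.error
import urllib.request

def call_api(url, attempts=5, delay=30):
    """Retry a GET request when DNS resolution fails transiently."""
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=60) as resp:
                return resp.read()
        except urllib.error.URLError:      # wraps socket.gaierror on DNS failures
            if attempt == attempts:
                raise
            time.sleep(delay)              # wait for name resolution to recover

data = call_api("http://my-elb.example.com/api/status")   # hypothetical endpoint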
Temporary failure in name resolution -wget in linux
62,781,058
0
1
5,012
0
python,linux,api,wget
You can ignore the failure: wget http://host/download 2>/dev/null
0
1
1
0
2017-01-06T06:49:00.000
2
0
false
41,500,455
0
0
1
2
I'm running a python automation solution on Linux. As part of the test I'm calling different APIs (REST) and connecting to my SQL db. I'm running the solution 24/7. The solution does: call an API with wget; every 1 min sample the db with a query, for 60 min max; call the API again with wget; every 1 min sample the db, for 10 mins max. This scenario runs 24/7. The problem is that after 1 hr/2 hr (inconsistently - it can happen after 45 mins for instance) the solution exits with the error "Temporary failure in name resolution". It can happen even after 2 perfect cycles as described above. After this failure I try to call with wget tens of times and it ends with the same error. After some time it recovers by itself. I want to mention that when it fails with wget on Linux, I'm able to call the API via Postman on Windows with no problem. The API calls are to our system (located in AWS) and I'm using the DNS name of our ELB. What could be the cause of this inconsistency? Thanks
Virtualenv uses wrong python, even though it is first in $PATH
57,632,537
0
30
18,833
0
python,linux,virtualenv,virtualenvwrapper
I'm currently having the same problem. Virtualenv was created in Windows, now I'm trying to run it from WSL. In virtualenv I renamed python.exe to python3.exe(as I have only python3 command in WSL). In $PATH my virtualenv folder is first, there is no alias for python. I receive which python3 /usr/bin/python3. In /usr/bin/python3 there is symlink `python3 -> python3.6. I suppose it doesn't matter for order resolution.
0
1
0
0
2017-01-07T17:25:00.000
6
0
false
41,524,320
1
0
0
2
I had a problem where python was not finding modules installed by pip while in the virtualenv. I have narrowed it down, and found that when I call python when my virtualenv in activated, it still reaches out to /usr/bin/python instead of /home/liam/dev/.virtualenvs/noots/bin/python. When I use which python in the virtualenv I get: /home/liam/dev/.virtualenvs/noots/bin/python When I look up my $PATH variable in the virtualenv I get: bash: /home/liam/dev/.virtualenvs/noots/bin:/home/liam/bin:/home/liam/.local/bin:/home/liam/bin:/home/liam/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin: No such file or directory and yet when I actually run python it goes to /usr/bin/python To make things more confusing to me, if I run python3.5 it grabs python3.5 from the correct directory (i.e. /home/liam/dev/.virtualenvs/noots/bin/python3.5) I have not touched /home/liam/dev/.virtualenvs/noots/bin/ in anyway. python and python3.5 are still both linked to python3 in that directory. Traversing to /home/liam/dev/.virtualenvs/noots/bin/ and running ./python, ./python3 or ./python3.5 all work normally. I am using virtualenvwrapper if that makes a difference, however the problem seemed to occur recently, long after install virtualenv and virtualenvwrapper
Virtualenv uses wrong python, even though it is first in $PATH
54,101,050
1
30
18,833
0
python,linux,virtualenv,virtualenvwrapper
On Cygwin, I still have a problem even after I created symlink to point /usr/bin/python to F:\Python27\python.exe. Here, after source env/Scripts/activate, which python is still /usr/bin/python. After a long time, I figured out a solution. Instead of using virtualenv env, you have to use virtualenv -p F:\Python27\python.exe env even though you have created a symlink.
0
1
0
0
2017-01-07T17:25:00.000
6
0.033321
false
41,524,320
1
0
0
2
I had a problem where python was not finding modules installed by pip while in the virtualenv. I have narrowed it down, and found that when I call python when my virtualenv in activated, it still reaches out to /usr/bin/python instead of /home/liam/dev/.virtualenvs/noots/bin/python. When I use which python in the virtualenv I get: /home/liam/dev/.virtualenvs/noots/bin/python When I look up my $PATH variable in the virtualenv I get: bash: /home/liam/dev/.virtualenvs/noots/bin:/home/liam/bin:/home/liam/.local/bin:/home/liam/bin:/home/liam/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin: No such file or directory and yet when I actually run python it goes to /usr/bin/python To make things more confusing to me, if I run python3.5 it grabs python3.5 from the correct directory (i.e. /home/liam/dev/.virtualenvs/noots/bin/python3.5) I have not touched /home/liam/dev/.virtualenvs/noots/bin/ in anyway. python and python3.5 are still both linked to python3 in that directory. Traversing to /home/liam/dev/.virtualenvs/noots/bin/ and running ./python, ./python3 or ./python3.5 all work normally. I am using virtualenvwrapper if that makes a difference, however the problem seemed to occur recently, long after install virtualenv and virtualenvwrapper
Accessing Mainframe datasets using FTP form Python
59,443,599
0
0
1,617
0
python,mainframe,ftplib
The ftplib command ftp.cwd("'CY01'") works fine. I have been using it for over a year now. (A hedged sketch follows this entry.)
0
1
0
0
2017-01-08T18:29:00.000
1
0
false
41,536,257
0
0
0
1
I want to download a dataset from mainframe using Python ftplib. I am able to login to mainframe and I get default working directory as "CY$$." I want to change the working directory to "CY01." I tried using ftp.cwd('CY01.') but it changes the directory to "CY$$.CY01." instead of just "CY01." While using command prompt I use below command to successfully change working directory: CD 'CY01.' (a '.' at end of directory name is IBM command to change default working directory and not append it to defualt directory) I also tried ftp.sendcmd("CD 'CY01.'") but it gives error "500 unknown command CD" Can someone please help with changing the defualt working directory? Thanks in advance.
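A hedged sketch of the quoting shown in the answer above; the host and credentials are illustrative assumptions. Note the inner single quotes, which the mainframe FTP server interprets as an absolute dataset qualifier rather than appending it to the current one.

from ftplib import FTP

ftp = FTP("mainframe.example.com")      # hypothetical host
ftp.login("user", "password")           # hypothetical credentials

ftp.cwd("'CY01.'")                      # quoted, so it replaces the default qualifier
print(ftp.pwd())                        # confirm the working prefix

ftp.retrlines("LIST")                   # list datasets under CY01
ftp.quit()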
python3 won't run from mac shell script
41,555,296
2
5
7,111
0
python,macos,shell,automator
Since you have the shebang line, you can do ./my_script.py and it should run with Python 3.
0
1
0
0
2017-01-09T19:12:00.000
4
0.099668
false
41,555,224
0
0
0
3
I am trying to use Automator on macOS 10.12 to launch a Python 3 script. The script works just fine when I run it from the terminal with the command: python3 my_script.py. Automator has a "Run Shell Script" function that uses the /bin/bash shell. The shell will run scripts with the command: python my_script.py, but this only seems to work for scripts written in Python 2.7. My script starts with #!/usr/bin/env python3, which I thought would direct the shell to the correct python interpreter, but that doesn't seem to be the case. As a workaround, I can get the script to run if I insert the full path to the python interpreter: /Library/Frameworks/Python.framework/Versions/3.5/bin/python3, but I see this as suboptimal because the commands might not work if/when I update to Python 3.6. Is there a better way to direct the /bin/bash shell to run Python3 scripts?
python3 won't run from mac shell script
54,734,617
1
5
7,111
0
python,macos,shell,automator
You can install Python 3 via Homebrew with brew install python3 and use #!/usr/local/bin/python3 as your shebang. Not a perfect solution but still better than using the full path of the interpreter.
0
1
0
0
2017-01-09T19:12:00.000
4
0.049958
false
41,555,224
0
0
0
3
I am trying to use Automator on macOS 10.12 to launch a Python 3 script. The script works just fine when I run it from the terminal with the command: python3 my_script.py. Automator has a "Run Shell Script" function that uses the /bin/bash shell. The shell will run scripts with the command: python my_script.py, but this only seems to work for scripts written in Python 2.7. My script starts with #!/usr/bin/env python3, which I thought would direct the shell to the correct python interpreter, but that doesn't seem to be the case. As a workaround, I can get the script to run if I insert the full path to the python interpreter: /Library/Frameworks/Python.framework/Versions/3.5/bin/python3, but I see this as suboptimal because the commands might not work if/when I update to Python 3.6. Is there a better way to direct the /bin/bash shell to run Python3 scripts?
python3 won't run from mac shell script
54,741,833
-1
5
7,111
0
python,macos,shell,automator
If python refers to Python 2 then that's what you should expect. Use python3 in the command line, or defer to the script itself to define its interpreter. In some more detail, make sure the file's first line contains a valid shebang (you seem to have this sorted); but the shebang doesn't affect what interpreter will be used if you explicitly say python script.py. Instead, make the file executable, and run it with ./script.py. Actually, you can use env on the command line, too: env python3 script.py should work at the prompt as well.
0
1
0
0
2017-01-09T19:12:00.000
4
-0.049958
false
41,555,224
0
0
0
3
I am trying to use Automator on macOS 10.12 to launch a Python 3 script. The script works just fine when I run it from the terminal with the command: python3 my_script.py. Automator has a "Run Shell Script" function that uses the /bin/bash shell. The shell will run scripts with the command: python my_script.py, but this only seems to work for scripts written in Python 2.7. My script starts with #!/usr/bin/env python3, which I thought would direct the shell to the correct python interpreter, but that doesn't seem to be the case. As a workaround, I can get the script to run if I insert the full path to the python interpreter: /Library/Frameworks/Python.framework/Versions/3.5/bin/python3, but I see this as suboptimal because the commands might not work if/when I update to Python 3.6. Is there a better way to direct the /bin/bash shell to run Python3 scripts?
Airflow "This DAG isnt available in the webserver DagBag object "
64,042,837
3
51
25,022
0
python,airflow,workflow
This error can be misleading. If hitting refresh button or restarting airflow webserver doesn't fix this issue, check the DAG (python script) for errors. Running airflow list_dags can display the DAG errors (in addition to listing out the dags) or even try running/testing your dag as a normal python script. After fixing the error, this indicator should go away.
0
1
0
0
2017-01-10T03:21:00.000
5
0.119427
false
41,560,614
0
0
0
2
when I put a new DAG python script in the dags folder, I can view a new entry of DAG in the DAG UI but it was not enabled automatically. On top of that, it seems does not loaded properly as well. I can only click on the Refresh button few times on the right side of the list and toggle the on/off button on the left side of the list to be able to schedule the DAG. These are manual process as I need to trigger something even though the DAG Script was put inside the dag folder. Anyone can help me on this ? Did I missed something ? Or this is a correct behavior in airflow ? By the way, as mentioned in the post title, there is an indicator with this message "This DAG isn't available in the webserver DagBag object. It shows up in this list because the scheduler marked it as active in the metdata database" tagged with the DAG title before i trigger all this manual process.
Airflow "This DAG isnt available in the webserver DagBag object "
51,391,238
16
51
25,022
0
python,airflow,workflow
Restarting the airflow webserver solved my issue.
0
1
0
0
2017-01-10T03:21:00.000
5
1
false
41,560,614
0
0
0
2
when I put a new DAG python script in the dags folder, I can view a new entry of DAG in the DAG UI but it was not enabled automatically. On top of that, it seems does not loaded properly as well. I can only click on the Refresh button few times on the right side of the list and toggle the on/off button on the left side of the list to be able to schedule the DAG. These are manual process as I need to trigger something even though the DAG Script was put inside the dag folder. Anyone can help me on this ? Did I missed something ? Or this is a correct behavior in airflow ? By the way, as mentioned in the post title, there is an indicator with this message "This DAG isn't available in the webserver DagBag object. It shows up in this list because the scheduler marked it as active in the metdata database" tagged with the DAG title before i trigger all this manual process.
Manipulating the Terminal Using a Python Script
41,615,003
1
0
379
0
macos,python-3.x,terminal,subprocess
Thanks for the comments guys, but I managed to figure it out. In the end I used a combination of subprocess.Popen() and os.chdir() and it seems to work using Jupyter Notebook. (A hedged sketch of that combination follows this entry.)
0
1
0
0
2017-01-10T12:01:00.000
1
0.197375
false
41,568,395
0
0
0
1
I have recently started using a program which has command line interfaces accessed through the Mac Terminal. I am trying to automate the process whereby a series of commands are passed through the terminal using Python. So far I have found a way to open the Terminal using the subprocess.Popen command but how do I then "write" in the terminal once it's open ? For example what I am looking to do is; 1. Open the Terminal App. 2. Select a directory in the App. 3. Run a command. In this instance the file I wish to run is called "RunUX" and what I want to type is "./RunUX ..." followed by command line arguments. I'm fairly new to Python and programming and appreciate all help !! Thanks
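A hedged sketch of the subprocess.Popen()/os.chdir() combination mentioned above; the directory, the RunUX arguments and the output handling are illustrative assumptions (passing cwd= to Popen would be an alternative to os.chdir).

import os
import subprocess

os.chdir("/Users/me/tools/runux")       # hypothetical directory containing RunUX

# Equivalent of typing "./RunUX --input data.txt" in Terminal
proc = subprocess.Popen(["./RunUX", "--input", "data.txt"],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
out, err = proc.communicate()
print(out.decode())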
Running Virtualenv with a custom distlib?
41,748,115
0
0
42
0
python,dependencies,virtualenv
Surely the easiest way is simply to modify your Python environment to search another directory where it will find your modified distlib before it picks it up from the stdlib? The classic way to do this is by setting your PYTHONPATH environment variable. No changes required to your Python installation!
0
1
0
0
2017-01-12T00:54:00.000
2
0
false
41,603,416
1
0
0
1
I want to do some development on Python's distlib, and in the process run the code via virtualenv which has distlib as a dependency. That is, not run the process inside a virtualenv, but run virtualenv's code using a custom dependency. What are the steps I need to go through to achieve this? It seems to me that normal package management (pip) is not possible here.
Run mode not there (IDLE Python 3.6)
41,644,807
0
0
3,646
0
osx-mountain-lion,python-idle,python-3.6
I am not exactly sure what you are asking, and whether it has anything to do with OSX, but I can explain IDLE. IDLE has two types of main window: a single Shell and multiple Editor windows. Shell simulates python running in the interactive REPL mode that you get when you enter 'python' (or 'python3') in a console or terminal window. (The latter depends on the OS.) You enter statements at the >>> prompt. A single-line statement is run when you hit Enter (or Return). A multi-line statement is run when you hit Enter twice. This is the same as in interactive Python. Editor windows let you enter a multi-statement program. You run the programs by selecting Run and Run module from the menu or by hitting the shortcut key, which by default is F5 (at least on Windows and Linux). This runs the program much the same as if you enter python -i myprogram.py in a console. Program output and input goes to and is received from the Shell window. When the program ends, Python enters interactive mode and prints an interactive prompt (>>>). One can then interact with the objects created by the program. You are correct that Run does not appear on the menu bar of the Shell. It is not needed as one runs a statement with the Enter key.
0
1
0
0
2017-01-13T12:27:00.000
1
0
false
41,634,658
1
0
0
1
Probably a very simple question. I just thought, after someone suggested it here, of trying (and installing) Python 3.6 on a Mac - I've been happily using 2.7 since now. I've never used the IDLE before having done everything via the command line + ATOM to write the program. I see that 'normally' you should be able to write your program in the shell and then run it in the RUN window. However, I don't see a RUN mode in window, just the possibility of using, which you are anyhow, the shell window. I hope that makes sense! Is this normal, or have I missed something? p.s. I'm using OS X 10.8, if that's of any importance.
How to run python script at startup
41,662,958
0
2
6,891
0
python,linux,centos
There is no intrinsic reason why Python should be different from any other scripting language here. Here is someone else using python in init.d: blog.scphillips.com/posts/2013/07/… In fact, that deals with a lot that I don't deal with here, so I recommend just following that post.
0
1
0
1
2017-01-15T15:27:00.000
3
0
false
41,662,821
1
0
0
1
I'm trying to make a Python script run as a service. It need to work and run automatically after a reboot. I have tried to copy it inside the init.d folder, But without any luck. Can anyone help?(if it demands a cronjob, i haven't configured one before, so i would be glad if you could write how to do it) (Running Centos)
When using qsub to submit jobs, how can I include my locally installed python packages?
41,753,582
1
3
2,181
0
python,cluster-computing,pbs,qsub,supercomputers
If you are using PBS Professional then try exporting PYTHONPATH in your environment and then submit the job using the "-V" option with qsub. This will make qsub take all of your environment variables and export them for the job. Otherwise, try setting it using the "-v" option (notice the small v) and put your environment variable key/value pair with that option, like qsub -v HOME=/home/user job.sh. (A hedged Python-side fallback follows this entry.)
0
1
0
0
2017-01-17T04:52:00.000
2
0.099668
false
41,689,297
1
1
0
1
I have an account on a supercomputing cluster where I've installed some packages using e.g. "pip install --user keras". When using qsub to submit jobs to the queue, I try to make sure the system can see my local packages by setting "export PYTHONPATH=$PYTHONPATH:[$HOME]/.local/lib/python2.7/site-packages/keras" in the script. However, the resulting log file still complains that there is no package called keras. How can I make sure the system finds my packages?
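If passing PYTHONPATH through qsub stays awkward, a hedged fallback is to extend sys.path at the top of the job's Python script itself; the path below is the user-site location from the question (note it should point at site-packages, not at the keras subdirectory) and is an assumption about the cluster layout.

import os
import sys

# Make pip --user installs visible even if the scheduler strips the environment.
user_site = os.path.expanduser("~/.local/lib/python2.7/site-packages")
if user_site not in sys.path:
    sys.path.insert(0, user_site)

import keras   # should now resolve from the user site-packages directory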
allure command line from python
41,712,644
1
1
824
0
python,allure
Since the Allure CLI script calls a Java application, this makes it a Python-to-Java problem. There are a few solutions like Py4J that can help you with that. Keep in mind that most solutions rely on the Java app already running inside the secondary application before being called from Python.
0
1
0
1
2017-01-17T05:52:00.000
1
0.197375
false
41,689,872
0
0
0
1
Is there a way to call allureCLI from Python? I would like to use python instead than shell scripting to run multiple reports. I could use Popen but I am having so many issues with it, that I would rather avoid it unless there is no other way around
Can I pipeline HTTP 1.1 requests with tornado?
41,704,300
1
2
301
0
python,http,tornado
No, Tornado does not support HTTP/1.1 pipelining. It won't start serving the second request until the response to the first request has been written.
0
1
0
0
2017-01-17T15:40:00.000
1
1.2
true
41,701,274
0
0
0
1
As you may know HTTP/1.1 can allow you leave the socket open between HTTP requests leveraging the famous Keep-Alive connection. But, what less people exploit is the feature of just launch a burst of multiple sequential HTTP/1.1 requests without wait for the response in the middle time, Then the responses should return to you the same order paying the latency time just one time. (This consumption pattern is encouraged in Redis clients for example). I know this pattern has been improved in HTTP/2 with the multiplexing feature but my concern right now is if I can use that pipelining pattern with the tornado library exploiting its async features, or may be other library capable?
Equivalent property `num.consumer.fetchers` for the new kafka consumer
41,732,601
2
1
822
0
apache-kafka,kafka-consumer-api,kafka-python
The new consumer is single-threaded (excluding the background heartbeat thread), so no equivalent config is offered. By the way, 'num.consumer.fetchers' does not specify the number of fetcher threads as the doc says. It actually controls the possible maximum number of fetcher threads that Kafka can create.
0
1
0
0
2017-01-19T01:24:00.000
1
0.379949
false
41,732,242
0
0
0
1
In old consumer configs of Kafka, there is a property num.consumer.fetchers in order to configure the number fetcher threads used to fetch data. In the new consumer configs of Kafka, is there any property with this same function? And if not, how is the new consumer working on that?
How to create exe files from Python 3.6 script?
42,111,487
0
1
2,057
0
exe,python-3.6
I have successfully used cx_Freeze 5.0.1 with Python 3.6. Did you try with an older version or a specific setup that failed?
0
1
0
0
2017-01-19T21:02:00.000
1
0
false
41,751,647
1
0
0
1
I want to learn if is there any available wheel for Python 3.6 to create executable files. I know pyinstall cx_freeze and py2exe options. However, they are available for Python3.4 or 3.5 for the most uptaded. Is there any way to create .exe from Python 3.6 script?
Could not locate a valid MSVC version
41,793,169
1
0
521
0
python,c++,visual-studio,visual-c++
cl.exe and similar visual studio commands are not in PATH. This means that you cannot execute them in the familiar manner (except if you add them to PATH) using CMD. You'll have to open the Visual Studio 2015 Command Prompt to be able to access cl.exe and similar commands. Then, inside the VS 2015 command prompt, you can execute the get-deps.cmd script.
0
1
0
0
2017-01-22T15:59:00.000
1
1.2
true
41,793,059
1
0
0
1
I am trying to compile a project from command prompt to open from visual studio. The project needed CMake, Python and Visual Studio 2015 to run, i have downloaded and installed all of those. I am trying to run a .cmd file "get-deps.cmd" file but it is unable to locate the valid MSVC version. Can someone help. Below is the screen sample. D:\pT1\polarisdeps>get-deps.cmd c:\opt\polarisdeps_vs2015 BASEDIR=c:\opt\polarisdeps_vs2015 1 file(s) copied. Could not locate a valid MSVC version.
Amazon device farm - wheel file from macosx platform not supported
42,273,559
0
0
290
0
python-2.7,opencv,numpy,aws-device-farm,python-appium
(numpy-1.12.0-cp27-cp27m-manylinux1_x86_64.whl) is the numpy wheel for Ubuntu, but Amazon Device Farm still throws an error while configuring tests with this wheel. Basically, Device Farm validates that the .whl file name ends with -none-any.whl. Just renaming the file to numpy-1.12.0-cp27-none-any.whl works in Device Farm. Note: the renamed file is a non-universal python wheel. There might be a few things which are not implemented in a non-universal python wheel, and this may cause some things to break. So test to ensure all your dependencies are working fine before using this.
0
1
0
0
2017-01-24T01:05:00.000
2
1.2
true
41,818,382
0
1
0
1
I am facing the following error on configuring Appium python test in AWS device farm: There was a problem processing your file. We found at least one wheel file wheelhouse/numpy-1.12.0-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl specified a platform that we do not support. Please unzip your test package and then open the wheelhouse directory, verify that names of wheel files end with -any.whl or -linux_x86_64.whl, and try again I require numpy and opencv-python packages to run my tests. How to get this issue fixed?
How do I run python script everyday at the same time with scheduler?
41,819,729
2
0
1,339
0
python,scheduler
You can use cron in Linux. I also use cron to run my python script on my shared hosting server. And if you need to install python modules on your server, you may also need to create a virtual environment using virtualenv. From my experience, if your script has a clean exit then your python script will be killed or terminated properly, so you don't have to worry about the python script not being killed and consuming your server resources :D (If you would rather stay inside Python, a hedged sketch with the third-party schedule package follows this entry.)
0
1
0
0
2017-01-24T01:56:00.000
2
1.2
true
41,818,773
1
0
0
1
This would be quite a general question though, what I want to know is: when scheduling a python script(ex. everyday 1:00 PM), I wonder if we have to let the script(or editor such as spyder) always 'open'. This means, do I have to let python always running? I have avoided to use scheduler library because people say that the python script is not killed, pending and waiting for the next task. What I have been doing as far was just using Windows Scheduler to run my scripts(crawlers) automatically everyday(people say this is called the 'batch process'..). But now I have to do these jobs on the server side, not in my local any more. Therefore, how can I run my python scripts just the same as the Windows Scheduler, with using the python scheduler library?
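If a pure-Python scheduler is preferred over cron, a hedged sketch using the third-party schedule package (pip install schedule); the 13:00 run time and the job body are illustrative assumptions. Note the process has to stay alive for this to work, which is exactly the trade-off the question worries about.

import time

import schedule   # third-party: pip install schedule

def run_crawler():
    print("crawling...")        # call the real crawler here

schedule.every().day.at("13:00").do(run_crawler)

while True:                     # unlike a cron entry, the script must keep running
    schedule.run_pending()
    time.sleep(60)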
AWS python Lambda script that can access Oracle: Driver too big for 50MB limit
41,837,986
3
1
846
1
python,oracle,amazon-web-services,lambda,cx-oracle
If you can limit yourself to English error messages and a restricted set of character sets (which does include Unicode), then you can use the "Basic Lite" version of the instant client. For Linux x64 that is only 31 MB as a zip file.
0
1
0
1
2017-01-24T16:48:00.000
1
1.2
true
41,833,790
0
0
0
1
I must load the Oracle "instant client" libraries as part of my AWS lambda python deployment zip file. Problem is, many of the essential libraries (libclntsh.so.12.1 is 57MB libociei.so is 105MB) and Amazon only allows deployment zip files under 50MB. I tried: my script cannot connect to Oracle using cx_Oracle without that library in my local ORACLE_HOME and LD_LIBRARY_PATH. How can I get that library into Lambda considering their zip file size limitation? Linux zip just doesn't compress them enough.
Delete ALL Python 2.7 modules and Python from Mac
41,857,222
1
1
840
0
python,homebrew,uninstallation,removeall
To remove it there are 2 changes: remove the /Users/user/anaconda2 directory, and change your PATH so it no longer includes any /Users/user/anaconda2 directories. However, I suggest you download Anaconda again and use environments rather than your root folder for everything. Use conda to install packages when possible (most of the time really) and use conda environments on a per-project basis to install packages (instead of cluttering up your main environment). This way, if you have this problem again you can delete the conda environment and all will be well.
0
1
0
0
2017-01-25T16:28:00.000
1
1.2
true
41,856,693
1
0
0
1
I had Python 2.7 for few months on my Mac and after installing one module - every other module got corrupted. Tried several hours of different ways to repair but did not work. Virtual-env also does now work now. I would like to remove ALL Python modules from my Mac along with Python and reinstall it with Brew (or other recommended tool). Packages are here: /Users/user/anaconda2/lib/python2.7/site-packages/ How do I do that? Should I remove this whole folder above or what is the proper way? (after reinstalling Python with just brew - it did not remove this folder and therefore same problem show up).
How can I run a simple python script hosted in the cloud on a specific schedule?
50,078,814
0
0
549
0
python,amazon-web-services,heroku,cron
Update: AWS does now support Python 3.6. Just select Python 3.6 from the runtime environments when configuring.
0
1
0
0
2017-01-25T16:49:00.000
3
0
false
41,857,126
0
0
1
1
Say I have a file "main.py" and I just want it to run at 10 minute intervals, but not on my computer. The only external libraries the file uses are mysql.connector and pip requests. Things I've tried: PythonAnywhere - free tier is too limiting (need to connect to external DB) AWS Lambda - Only supports up to Python 2.7, converted my code but still had issues Google Cloud Platform + Heroku - can only find tutorials covering deploying applications, I think these could do what I'm looking for but I can't figure out how. Thanks!
Running python script written in windows on mac
41,870,988
0
0
2,469
0
python,windows,macos,compatibility
You should use #!/usr/bin/env python as your first line in the script. It will be applied when you make the script executable and run it like ./script.py
0
1
0
0
2017-01-26T09:55:00.000
2
0
false
41,870,827
1
0
0
2
I've written a simple python script on Windows, written in python 2.7 and code-compatible with 3.4; it runs as a script with #!/usr/bin/python. Will it run as-is on Mac? I would like to know this before I distribute it to mac users, and I don't have a mac machine to test it on.
Running python script written in windows on mac
41,871,000
0
0
2,469
0
python,windows,macos,compatibility
Short answer: It might run. Long answer: OS compatibility is a tricky issue. When writing code, make sure that it is as portable as possible. Most of the basic operations in python are portable between OSes. When it comes to file reading, writing, encoding handling etc., things might go horribly wrong. Use the provided packages (e.g. import os) to do platform-dependent stuff (a hedged sketch follows this entry). In general, there is no way around a test. In many cases, code that runs on one system might not run on another depending on hardware configuration etc. (I think of multithreading, pyopenCL and the like).
0
1
0
0
2017-01-26T09:55:00.000
2
0
false
41,870,827
1
0
0
2
I've written a simple Python script on Windows. It is written in Python 2.7, the code is compatible with 3.4, and it runs as a script with #!/usr/bin/python. Will it run as-is on a Mac? I would like to know this before I distribute it to Mac users, and I don't have a Mac machine to test it on.
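A small sketch of the "use the standard library for platform-dependent details" advice given above; the paths and file names here are illustrative only.

```python
# Let the standard library resolve per-OS details (path separators, temp
# directories, text encodings) instead of hard-coding Windows conventions.
import io
import os
import sys
import tempfile


def data_file_path(filename):
    # os.path.join uses the right separator ('\\' on Windows, '/' on macOS/Linux).
    return os.path.join(os.path.expanduser("~"), "myapp", filename)


def read_text(path):
    # An explicit encoding avoids relying on each OS's default codec.
    with io.open(path, "r", encoding="utf-8") as handle:
        return handle.read()


if __name__ == "__main__":
    print("Platform: %s" % sys.platform)            # 'win32', 'darwin', 'linux2', ...
    print("Temp dir: %s" % tempfile.gettempdir())   # resolved per-OS by the stdlib
    print("Data file would live at: %s" % data_file_path("settings.txt"))
```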
Give specific input files to mapper and reducer hadoop
41,911,541
0
0
63
0
python,hadoop
The only way you can do it is if files B and C are very small, so that you can put them into the distributed cache and fetch them in all your jobs. There is no separate partitioner job in Hadoop: partitioners run as part of map tasks, so every mapper has to read all three files A, B, and C. The same applies to the reducer side. If files B and C are very large, then you have to examine your data flow and combine A, B, and C in separate jobs. I can't explain how to do that unless you share more details about your processing. A hypothetical streaming-mapper sketch follows this record.
0
1
0
0
2017-01-28T11:19:00.000
1
1.2
true
41,909,158
0
0
0
1
Say I have 3 input files A, B, C. I want the mapper to only get records from A; the partitioner to get input from both the mapper and files B and C; and the reducer to get input from the mapper (as directed by the partitioner) and file C. Is this possible to do in Hadoop? P.S. - I am using Python and Hadoop Streaming
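A hypothetical Hadoop Streaming mapper sketch along the lines of the distributed-cache suggestion above. The file name B.txt, the tab-separated layout, and the join-by-key logic are all assumptions about the data; records from A arrive on stdin, while the small side file is assumed to have been shipped to every mapper (e.g. via -files B.txt).

```python
#!/usr/bin/env python
# Streaming mapper: enrich records from A with a small lookup table B
# that every mapper reads locally from the distributed cache.
import sys


def load_lookup(path="B.txt"):
    lookup = {}
    with open(path) as handle:
        for line in handle:
            key, value = line.rstrip("\n").split("\t", 1)  # assumed tab-separated
            lookup[key] = value
    return lookup


def main():
    lookup = load_lookup()
    for line in sys.stdin:
        key, rest = line.rstrip("\n").split("\t", 1)  # assumed record layout for A
        enriched = lookup.get(key, "MISSING")
        # Emit key<TAB>value so the default streaming partitioner groups by key.
        print("%s\t%s\t%s" % (key, rest, enriched))


if __name__ == "__main__":
    main()
```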
Will Python programs created on a Raspberry Pi running Raspbian work on a Windows 8.1 machine?
41,926,309
0
0
34
0
python,windows,raspberry-pi3
Yes! Python code is mostly platform independent. Only some specific libraries must be compiled on the machine; these should be installed using pip (if needed). More info can be found via Google.
0
1
0
1
2017-01-29T21:43:00.000
3
0
false
41,926,293
1
0
0
2
I would like to use my Raspberry Pi for some programming (I've never done it before, and I want to get into Python). If I can transfer my programs to my Windows 8.1 computer and run them there also, that would be perfect. Can I do that? Thanks!
Will Python programs created on a Raspberry Pi running Raspbian work on a Windows 8.1 machine?
41,927,152
0
0
34
0
python,windows,raspberry-pi3
Short answer: mostly yes, but it depends. Obviously, the Raspberry Pi-specific libraries for controlling its peripherals won't work on ms-windows. Your Pi is probably running a Linux distribution that has package management and comes with a functioning toolchain. That means that installing (python) packages and libraries will be a breeze. Tools like pip and setup.py scripts will mostly Just Work. That is not necessarily the case on ms-windows. Installing python libraries that contain extensions (compiled code) or require external shared libraries is a frustrating experience, for technical reasons pertaining to the microsoft toolchain. On that OS it is generally easier to use a python distribution like Anaconda that has its own package manager and comes with packages for most popular libraries. Furthermore, if you look into the documentation for Python's standard library you will see that sometimes a function is only available on UNIX or only on ms-windows. And due to the nature of how ms-windows creates new processes, there are some gotchas when you are using the multiprocessing module; a short sketch of that guard and a platform check follows this record. It would be a good idea to use the same Python version on both platforms; currently that would preferably be 3.6 or 3.5.
0
1
0
1
2017-01-29T21:43:00.000
3
0
false
41,926,293
1
0
0
2
I would like to use my Raspberry Pi for some programming (I've never done it before, and I want to get into Python). If I can transfer my programs to my Windows 8.1 computer and run them there also, that would be perfect. Can I do that? Thanks!
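A sketch of the two portability points raised in the answer above: gate Pi-only code behind a platform check, and always use the __main__ guard so multiprocessing works with Windows' spawn-based process creation. The GPIO part is only a placeholder for whatever Pi-specific library the program might use.

```python
# Portable script skeleton: runs on both Raspbian and Windows 8.1.
import multiprocessing
import sys


def square(n):
    return n * n


def blink_led():
    if sys.platform.startswith("linux"):
        # A Pi-only code path (e.g. RPi.GPIO) would go here; keeping it behind
        # the platform check lets the same file run unchanged on Windows.
        print("Would toggle a GPIO pin here (Pi-only code path).")
    else:
        print("GPIO not available on %s; skipping." % sys.platform)


if __name__ == "__main__":
    # Without this guard, Windows re-imports the module in each child
    # process and ends up spawning workers recursively.
    pool = multiprocessing.Pool(2)
    print(pool.map(square, range(5)))
    pool.close()
    pool.join()
    blink_led()
```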
Repeated task execution using the distributed Dask scheduler
41,965,766
3
5
864
0
python,dask
Correct: if a task is allocated to one worker and another worker becomes free, it may choose to steal excess tasks from its peers. There is a chance that it will steal a task that has just started to run, in which case the task will run twice. The clean way to handle this problem is to ensure that your tasks are idempotent, i.e. that they produce the same result even if run twice. This might mean handling your database error within your task; a hedged sketch follows this record. This is one of those policies that is great for data-intensive computing workloads but terrible for data-engineering workloads. It's tricky to design a system that satisfies both needs simultaneously.
0
1
0
0
2017-01-31T18:48:00.000
1
1.2
true
41,965,253
0
1
0
1
I'm using the Dask distributed scheduler, running a scheduler and 5 workers locally. I submit a list of delayed() tasks to compute(). When the number of tasks is say 20 (a number >> than the number of workers) and each task takes say at least 15 secs, the scheduler starts rerunning some of the tasks (or executes them in parallel more than once). This is a problem since the tasks modify a SQL db and if they run again they end up raising an Exception (due to DB uniqueness constraints). I'm not setting pure=True anywhere (and I believe the default is False). Other than that, the Dask graph is trivial (no dependencies between the tasks). Still not sure if this is a feature or a bug in Dask. I have a gut feeling that this might be related to worker stealing...
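A hedged sketch of the "make tasks idempotent" advice above: if a stolen task re-runs, the duplicate insert is swallowed instead of raising. sqlite3 is only a local stand-in for whatever SQL database is actually used, and the table name, schema, and workload are assumptions; with a real cluster you would point this at your own database and driver's integrity error.

```python
import sqlite3

import dask
from dask import delayed


def process_item(item_id, db_path="results.db"):
    value = item_id * 2  # stand-in for the real 15s+ workload
    conn = sqlite3.connect(db_path)
    try:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS results (item_id INTEGER PRIMARY KEY, value INTEGER)"
        )
        conn.execute("INSERT INTO results (item_id, value) VALUES (?, ?)", (item_id, value))
        conn.commit()
    except sqlite3.IntegrityError:
        # Another run of this task already committed this row; treat the
        # rerun as a no-op so worker stealing cannot break the job.
        pass
    finally:
        conn.close()
    return item_id


tasks = [delayed(process_item)(i) for i in range(20)]
results = dask.compute(*tasks)  # run with your distributed Client attached
```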
Google Stackdriver does not show trace
45,255,813
0
2
264
0
python,google-app-engine,google-cloud-logging
Google has in the meantime updated the Cloud Console and debugger, which now do contain full stack traces for Python.
0
1
0
0
2017-01-31T21:24:00.000
1
1.2
true
41,967,742
0
0
1
1
Previously, when an error occurred in my application I could find a trace through the code to where it happened (file, line number) in the Google Cloud Console. Right now I only receive a request ID and a timestamp, with no indication of a trace or line number in the code, in the 'Logging' window of the Google Cloud Console. Selecting a 'log event' only shows some sort of JSON structure of the request, but nothing about the code or any helpful information on what went wrong with the application. Which option should be selected in the Google Cloud Console to show a stack trace for Python App Engine applications?
Nginx non-responsive while celery is running
42,014,392
0
0
233
0
python,django,nginx,redis,wsgi
Figured this out after a few days. We were using a Django app called django-health-check. It has a component called health_check_celery3 that was in INSTALLED_APPS. This was having trouble loading while celery was running, which caused the whole app to stall. After removing it, celery runs as it should; an illustrative settings excerpt follows this record.
0
1
0
0
2017-01-31T23:47:00.000
2
0
false
41,969,597
0
0
1
1
I have a django app configured to run behind nginx using uWSGI. On a separate machine I am running celery, and pushing long running tasks from the webserver to the task machine. The majority of the task I/O is outbound http requests, which go on for an hour or more. The task broker is redis. When the tasks run for more than a minute or two, the webserver becomes unresponsive (503 errors). There are no errors raised anywhere within the python app. The tasks complete normally, after which the webserver continues handling requests. Has anyone experienced this before, and if so, how did you deal with it? Thanks
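A minimal illustration of the fix described in the answer above; every app listed besides health_check_celery3 (and health_check) is a placeholder for whatever the project actually installs.

```python
# settings.py (excerpt) -- illustrative app list; only the removal of
# health_check_celery3 reflects the fix described above.
INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    "health_check",            # keep the base health checks if still wanted
    # "health_check_celery3",  # removed: it blocked while celery tasks ran,
    #                          # which made uWSGI workers hang and nginx return 503s
]
```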