Dataset columns (name, dtype, observed range):

Title: string, length 15 to 150
A_Id: int64, 2.98k to 72.4M
Users Score: int64, -17 to 470
Q_Score: int64, 0 to 5.69k
ViewCount: int64, 18 to 4.06M
Database and SQL: int64, 0 to 1
Tags: string, length 6 to 105
Answer: string, length 11 to 6.38k
GUI and Desktop Applications: int64, 0 to 1
System Administration and DevOps: int64, 1 to 1
Networking and APIs: int64, 0 to 1
Other: int64, 0 to 1
CreationDate: string, length 23 to 23
AnswerCount: int64, 1 to 64
Score: float64, -1 to 1.2
is_accepted: bool, 2 classes
Q_Id: int64, 1.85k to 44.1M
Python Basics and Environment: int64, 0 to 1
Data Science and Machine Learning: int64, 0 to 1
Web Development: int64, 0 to 1
Available Count: int64, 1 to 17
Question: string, length 41 to 29k
Title: Anaconda 4.3, 64-bit (python 3.6), leaves incorrect truncated paths in windows Start menu
[Q_Id 41,970,630 | A_Id 42,047,557 | created 2017-02-01T01:46 | tags: python-3.x,anaconda,jupyter-notebook | question score 3, 1,261 views, 1 answer | answer votes 2, score 1.2, accepted: true | available count 1 | categories: System Administration and DevOps; Python Basics and Environment]

Question: After installing Anaconda 4.3 64-bit (python 3.6) on Windows, choosing "install for current user only" and "add to path", I noticed that the Anaconda program shortcuts don't work in my Start menu: they are cut off at the end. Does anyone know how the correct entries should read? (Or instead, how to repair the links?) Thanks.
UPDATE: I reproduced the problem on two other machines, Windows 10 (x64) and Windows 8.1 (x64), that were "clean" (neither had a prior installation of Python). This is what the shortcuts are after a fresh install (under "Target" in "Properties" on the "Shortcut" tab for each shortcut item):
JUPYTER NOTEBOOK:
C:\Users\user_name\AppData\Local\Continuum\Anaconda3\python.exe C:\Users\user_name\AppData\Local\Continuum\Anaconda3\cwp.py C:\Users\user_name\AppData\Local\Continuum\Anaconda3 "C:/Users/user_name/AppData/Local/Continuum/Anaconda3/python.exe" "C:/Users/user_name/AppData/Loc
JUPYTER QTCONSOLE:
C:\Users\user_name\AppData\Local\Continuum\Anaconda3\pythonw.exe C:\Users\user_name\AppData\Local\Continuum\Anaconda3\cwp.py C:\Users\user_name\AppData\Local\Continuum\Anaconda3 "C:/Users/user_name/AppData/Local/Continuum/Anaconda3/pythonw.exe" "C:/Users/user_name/AppData/L
SPYDER:
C:\Users\user_name\AppData\Local\Continuum\Anaconda3\pythonw.exe C:\Users\user_name\AppData\Local\Continuum\Anaconda3\cwp.py C:\Users\user_name\AppData\Local\Continuum\Anaconda3 "C:/Users/user_name/AppData/Local/Continuum/Anaconda3/pythonw.exe" "C:/Users/user_name/AppData/L
RESET SPYDER:
C:\Users\user_name\AppData\Local\Continuum\Anaconda3\python.exe C:\Users\user_name\AppData\Local\Continuum\Anaconda3\cwp.py C:\Users\user_name\AppData\Local\Continuum\Anaconda3 "C:/Users/user_name/AppData/Local/Continuum/Anaconda3/python.exe" "C:/Users/user_name/AppData/Loc
NAVIGATOR:
C:\Users\user_name\AppData\Local\Continuum\Anaconda3\pythonw.exe C:\Users\user_name\AppData\Local\Continuum\Anaconda3\cwp.py C:\Users\user_name\AppData\Local\Continuum\Anaconda3 "C:/Users/user_name/AppData/Local/Continuum/Anaconda3/pythonw.exe" "C:/Users/user_name/AppData/L
IPYTHON:
C:\Users\user_name\AppData\Local\Continuum\Anaconda3\python.exe C:\Users\user_name\AppData\Local\Continuum\Anaconda3\cwp.py C:\Users\user_name\AppData\Local\Continuum\Anaconda3 "C:/Users/user_name/AppData/Local/Continuum/Anaconda3/python.exe" "C:/Users/user_name/AppData/Loc

Answer: Looks like this was fixed in the newest build of Anaconda (4.3.0.1). Unfortunately it looks like it requires an uninstall and reinstall, as the locations seem to have changed drastically (from a deeply nested folder under AppData to something higher up, under the user directory). (But that might be an effect of testing 4.3.0.1 on a different machine.) For example, ipython is now:
C:\Users\user_name\Anaconda3\python.exe C:\Users\user_name\Anaconda3\cwp.py C:\Users\user_name\Anaconda3 "C:/Users/user_name/Anaconda3/python.exe" "C:/Users/user_name/Anaconda3/Scripts/ipython-script.py"
Here is the changelog for 4.3.0.1: "In this 'micro' patch release, we fixed a problem with the Windows installers which was causing problems with Qt applications when the install prefix exceeds 30 characters. No new Anaconda meta-packages correspond to this release (only new Windows installers)."
Title: why os.system('cmd.exe') in pycharm does not open a new console
[Q_Id 41,975,993 | A_Id 41,976,142 | created 2017-02-01T09:20 | tags: windows,pycharm,python-idle | question score 0, 658 views, 1 answer | answer votes 0, score 0, accepted: false | available count 1 | categories: System Administration and DevOps; Python Basics and Environment]

Question: In my Python script there is os.system('cmd.exe'). The same script opens a new cmd console when executed with Python IDLE, but not when executed in PyCharm. Any help on this?

Answer: Check os.environ['PATH'] and os.system("echo $PATH"); they should be the same.
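The comparison the answer suggests can be scripted. A child process spawned by os.system inherits the parent's environment, so the two PATH values should normally match; if they differ, the IDE started the interpreter with a modified environment. A minimal POSIX sketch (on Windows, where the question lives, you would compare against echo %PATH% in cmd.exe instead):

```python
import os
import subprocess

# PATH as seen by this Python process.
parent_path = os.environ["PATH"]

# PATH as seen by a child shell, the way os.system would spawn it.
child_path = subprocess.run(
    ["/bin/sh", "-c", "echo $PATH"],
    capture_output=True, text=True, check=True,
).stdout.strip()

# A child inherits its parent's environment, so these match unless
# the IDE (or the script itself) altered os.environ before spawning.
print(parent_path == child_path)
```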
Title: newbie installing aerospike client to both my python versions
[Q_Id 41,989,455 | A_Id 41,991,031 | created 2017-02-01T20:47 | tags: aerospike,python-3.6 | question score 1, 154 views, 3 answers | answer votes 1, score 0.066568, accepted: false | available count 1 | categories: System Administration and DevOps; Other]

Question: I just followed the instructions on the site and installed aerospike (on Linux Mint). I'm able to import the aerospike python client module from Python 2.7 but not from 3.6 (newly installed). I'm thinking that I need to add the directory to my "python path", perhaps, but I'm having difficulty understanding how this works. I want to be able to run aerospike and matplotlib in 3.6.

Answer: I figured it out. I just needed to use pip3 instead of pip to install it to the correct version of Python (though I was only able to get it onto 3.5, not 3.6, for some reason).
Title: Running python3 using a Run/Debug config with Vagrant: "Error running x.py Can't run remote python interpreter"
[Q_Id 41,991,575 | A_Id 41,991,612 | created 2017-02-01T23:13 | tags: python,vagrant,pycharm | question score 0, 691 views, 1 answer | answer votes 0, score 1.2, accepted: true | available count 1 | categories: System Administration and DevOps]

Question: I get the following PyCharm error when running my python3 interpreter on Vagrant:
"Error running x.py Can't run remote python interpreter: The provider for this Vagrant-managed machine is reporting that it is not yet ready for SSH. Depending on your provider this can carry different meanings. Make sure your machine is created and running and try again. Additionally, check the output of vagrant status to verify that the machine is in the state that you expect. If you continue to get this error message, please view the documentation for the provider you're using."
I have no problem running the code from the terminal; I only have a problem when running through my Run/Debug Configuration. Using PyCharm 2016.3.1 on Windows 10. How do I run from my Run/Debug Configuration?

Answer: It turns out I was only changing the project Python interpreter configuration to point to my running Vagrant machine; the Run/Debug Configuration wasn't set to use this project interpreter, but rather a different Vagrant machine which was currently down. Fixed by editing the Run/Debug Configuration and changing "Python interpreter" to "Project Default".
Title: /usr/bin/python vs /usr/local/bin/python
[Q_Id 41,992,104 | A_Id 41,992,148 | created 2017-02-02T00:00 | tags: python,linux | question score 7, 6,843 views, 2 answers | answer votes 0, score 0, accepted: false | available count 1 | categories: System Administration and DevOps; Python Basics and Environment]

Question: On Linux, specifically Debian Jessie, should I use /usr/bin/python or should I install another copy in /usr/local/bin? I understand that the former is the system version and that it can change when the operating system is updated. This means I could update the version in the latter independently of the OS. As I am already using Python 3, I don't see what significant practical difference that would make. Are there other reasons to use a local version? (I know there are ~42 SO questions about how to change between versions, but I can't find any about why.)

Answer:
1) You should not modify the system's binaries directly yourself.
2) If your $PATH variable doesn't contain /usr/local/bin, the naming of that secondary directory isn't really important. You can install and upgrade independently wherever you have installed your extra binaries.
3) For Python specifically, you could also just use conda / virtualenv invoked by your system's Python to manage your versions and projects.
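In practice the choice comes down to $PATH order: the shell runs whichever interpreter it finds first, so a copy in /usr/local/bin shadows /usr/bin only when /usr/local/bin appears earlier in the search path. A quick way to inspect the order and see which interpreters are shadowed (assumes a `which` that supports -a, as on most Linux systems):

```shell
# Print PATH entries in search order; the first directory containing
# an executable named `python` is the one that actually runs.
printf '%s\n' "$PATH" | tr ':' '\n'

# List every matching interpreter on PATH, highest priority first.
which -a python3 || true
```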
Title: Screen rotation in windows with python
[Q_Id 42,007,272 | A_Id 50,964,423 | created 2017-02-02T16:23 | tags: python,windows,python-3.x,winapi,pywin32 | question score 1, 4,207 views, 4 answers | answer votes 0, score 0, accepted: false | available count 1 | categories: System Administration and DevOps]

Question: I am trying to write a Python script to rotate the screen in Windows. I have clues about doing it with Win32api. What are the other possibilities or commands to achieve this (Win32api included)?

Answer: If you have the rotate shortcut active in Windows (CTRL+ALT+ARROW KEY), you can use the pyautogui.hotkey function.
Title: Ubuntu Server Run Script at Login for Specific User
[Q_Id 42,009,558 | A_Id 42,010,021 | created 2017-02-02T18:20 | tags: python,ubuntu | question score 0, 65 views, 1 answer | answer votes 0, score 0, accepted: false | available count 1 | categories: System Administration and DevOps; Other]

Question: I have a Python program I want to run when a specific user logs into my Ubuntu server. Previously, I tried to do this via the command useradd -m -s /var/jumpbox/jumpbox.py jumpbox. This ran the program, but it didn't work the same way it did when I call it via ./jumpbox.py from the /var/jumpbox directory. The problem is, this is a curses menu, and when an option is selected, another .py file is called to run. Using the useradd method to run jumpbox.py, the menu part worked, but it never called my other .py files when an option was selected. What is the best way to ensure my /var/jumpbox/jumpbox.py file is run when the jumpbox user (and only this user) logs into the server?

Answer: It was the path of the .py file being called inside jumpbox.py. I was referencing it only by filename, without the full path, since it was in the same directory. os.system("python <full path>.py") made it work perfectly. Thanks @Hannu.
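The root cause generalizes: at login, the process's working directory is not the script's directory, so bare filenames passed to os.system resolve against the wrong place. One way to make the launch location-independent is to build absolute paths from the running file. A sketch, where menu_item.py is a hypothetical script name:

```python
import os

# Directory containing this script, regardless of the caller's cwd
# (falls back to the cwd in interactive sessions with no __file__).
here = (os.path.dirname(os.path.abspath(__file__))
        if "__file__" in globals() else os.getcwd())

# Absolute path to the script the menu should launch.
target = os.path.join(here, "menu_item.py")  # hypothetical file name
print(os.path.isabs(target))

# os.system('python "%s"' % target) would now work from any login shell.
```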
Title: How to make a package executable from the command line?
[Q_Id 42,033,360 | A_Id 42,035,060 | created 2017-02-03T21:35 | tags: python,linux,chmod | question score 5, 4,192 views, 2 answers | answer votes 1, score 1.2, accepted: true | available count 1 | categories: System Administration and DevOps; Python Basics and Environment]

Question: I want to make a Python package executable from the command line. I know you can do chmod +x myfile.py, where myfile.py starts with #!/usr/bin/env python, to make a single file executable using ./myfile.py. I also know you can do python -m mypackage to run a package that includes a __main__.py. However, if I add the shebang line to the __main__.py of a package, run chmod +x mypackage, and try ./mypackage, I get the error -bash: ./mypackage: Is a directory. Is it possible to run a package like this? (To be clear, I'm not looking for something like py2exe to make a standalone executable. I'm still expecting it to be interpreted; I just want to make the launch simpler.)

Answer: The short answer is no. When you run chmod +x mypackage you are doing nothing, because mypackage is a directory and directories already have the execute flag (or you would be unable to list their files); if you type ls -l you will see. Your options for running the whole package directly without installing it are the ways you already mention: python -m mypackage, or a shell script that does that for you. I see that your intention is to execute just ./something and have your application start without putting python in front, and without installing it globally. The easiest way is to write a shell script that launches your package.
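The accepted answer's shell-wrapper approach works; a related option the thread doesn't mention is the standard library's zipapp module (Python 3.5+), which bundles a package containing a __main__.py into a single file with a shebang line, so after chmod +x it launches as ./mypackage.pyz. A self-contained sketch:

```python
import os
import subprocess
import sys
import tempfile
import zipapp

workdir = tempfile.mkdtemp()
pkg = os.path.join(workdir, "mypackage")
os.makedirs(pkg)

# A package is runnable if it contains a __main__.py.
with open(os.path.join(pkg, "__main__.py"), "w") as f:
    f.write('print("hello from mypackage")\n')

# Bundle the package into one executable archive with a shebang line.
target = os.path.join(workdir, "mypackage.pyz")
zipapp.create_archive(pkg, target, interpreter="/usr/bin/env python3")

# The archive runs like any script; chmod +x enables ./mypackage.pyz.
out = subprocess.run([sys.executable, target],
                     capture_output=True, text=True, check=True)
print(out.stdout.strip())
```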
Title: Ctrl-C for quitting Python in Powershell now not working
[Q_Id 42,039,231 | A_Id 58,658,397 | created 2017-02-04T10:24 | tags: python,powershell,exit | question score 14, 30,551 views, 8 answers | answer votes 0, score 0, accepted: false | available count 2 | categories: System Administration and DevOps; Python Basics and Environment]

Question: Python fails to quit when using Ctrl-C in PowerShell/Command Prompt, and instead prints a "KeyboardInterrupt" string. I recently reinstalled Windows 10. Before the reinstall, Ctrl-C quit Python (3.5/2.7) fine, with no output. Does anyone know why this has started happening? Is it just a simple setting? The only difference I can think of is that I'm now on Python 3.6. Ctrl-D works in Bash on Ubuntu on Windows, and Ctrl-C works fine for quitting Python in an activated Anaconda Python 2 environment.

Answer: In my case, I found that right Ctrl + C does the trick in the Anaconda3 PowerShell (so no remapping necessary); I'm on Windows 10.
Another answer to the same question (A_Id 54,466,340, votes 0, score 0): Hitting the Esc button in the upper corner of the keyboard seems to work for me on Windows 7, inside Spyder with numpy running, for Python 3.+. It broke the infinite ...: prompt caused by erroneous syntax in the interactive script.
Title: FTP Jar file from share path on windows to IFS location in AS400?
[Q_Id 42,044,560 | A_Id 42,054,160 | created 2017-02-04T19:19 | tags: shell,python-3.x,ftp,ibm-midrange | question score 0, 500 views, 3 answers | answer votes 0, score 0, accepted: false | available count 1 | categories: System Administration and DevOps; Other]

Question: I am looking for an approach/design through which I can automate the process of FTP from a Windows location to an IFS location on an AS400 environment whenever a new file is added to the Windows path. Below is the approach I thought of; please refine it if needed.
1. We have the option WRKJOBSCDE, through which we can run a CL program on a scheduled threshold of 1 hour.
2. Write a CL program which invokes a script (Python/shell) to talk to the Windows location (say X: drive, having its IP as xx.xxx.xx.xx).
3. The shell script has to search for the latest file in the X: drive location and FTP that jar (of size 5 MB max) to the IFS location (say /usr/dta/ydrive) on the AS400 machine.
4. The CL program invoked in step 2 then has to mail me, using SNDDST, the list of all the jars FTP'd by the scheduler job that runs every hour in step 1.
I am new to CL programming/RPGLE. Please help me with some learning material and with the design of such concepts.

Answer: The CL command RUNRMTCMD can be used to invoke a command on a PC running a rexec() client. iSeries Access for Windows offers such a client, and there are others available. With the iSeries client, the output of the PC command is placed in a spool file on the AS/400, which should contain the results of the FTP session. You can copy the spool file to a file using the CPYSPLF command and SNDDST it to yourself, but I am not sure the contents will be converted from EBCDIC to ASCII. Check out Easy400.net for the MMAIL programs developed by Giovanni Perotti. This package includes an EMAILSPL command to email a spool file; I believe you will need to pay $50 for the download. I think you are on the right track, but there are a lot of details.
Title: How to run a local python script from django project hosted on aws?
[Q_Id 42,053,855 | A_Id 42,053,909 | created 2017-02-05T15:35 | tags: python,django | question score 0, 59 views, 1 answer | answer votes 0, score 1.2, accepted: true | available count 1 | categories: System Administration and DevOps; Web Development]

Question: I have a requirement to run a local Python script, which takes arguments, on a local Windows computer from Python code hosted on AWS.

Answer: You simply need to find a way for the two of them to communicate without opening huge security holes. My suggestion would be a message queue (RabbitMQ, Amazon SQS). The AWS application writes jobs to the message queue, and a local script runs the worker, which waits for messages to be written to the queue for it to pick up.
Title: How to access several ports of a Docker container inside the same container?
[Q_Id 42,101,552 | A_Id 42,114,253 | created 2017-02-07T22:54 | tags: python-3.x,networking,nginx,docker | question score 0, 42 views, 2 answers | answer votes 0, score 0, accepted: false | available count 1 | categories: System Administration and DevOps; Web Development]

Question: I am trying to put an application that listens on several ports inside a Docker image. At the moment, I have one Docker image with an Nginx server serving the front end and a Python app: Nginx runs on port 27019 and the app runs on 5984. The index.html file talks to localhost:5984, but it seems like that only works outside the container (on the localhost of my computer). The only way I can make it work at the moment is by using the -p option twice in docker run: docker run -p 27019:27019 -p 5984:5984 app-test. Doing so, I publish two localhost ports on my computer; if I don't add -p 5984:5984 it doesn't work. I plan on using more ports for the application, so I'd like to avoid adding -p xxx:xxx for each new port. How can I make an application inside the container (in this case the index.html at 27019) reach another port inside the same container, without having to publish both of them? Can it be generalized to more than two ports? The final objective would be to have a complete application running on a single port on a server/computer, while listening on several ports inside the Docker container(s).

Answer: It's not a good idea to put a lot of applications into one container; normally you should split them up, one container per app. That is the way Docker is meant to be used. But if you absolutely want many apps in one container, you can use a proxy, or write a Dockerfile that exposes the required ports itself.
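If the goal is only to avoid repeating -p for every new port, Docker can publish all of an image's declared ports at once: EXPOSE in the Dockerfile marks them, and docker run -P maps every exposed port to a host port without listing each one. A sketch of the relevant Dockerfile fragment (the port numbers are the ones from the question):

```dockerfile
# Fragment of the image's Dockerfile: declare every port the app
# listens on inside the container.
EXPOSE 27019
EXPOSE 5984
```

Then docker run -P app-test publishes all exposed ports to automatically chosen host ports (inspect them with docker port <container>), while explicit -p host:container pairs remain available when fixed host ports are needed. Note that processes inside a container can always reach the container's own ports without any publishing; -p/-P only matters for clients outside the container, such as a browser on the host loading index.html.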
Title: How to change Default Python for Salt in CentOS 7?
[Q_Id 42,103,374 | A_Id 42,263,855 | created 2017-02-08T01:58 | tags: python,python-2.7,salt,salt-stack,salt-cloud | question score 1, 1,084 views, 1 answer | answer votes 1, score 0.197375, accepted: false | available count 1 | categories: System Administration and DevOps; Other]

Question: I am trying to set up a salt-master/salt-cloud on CentOS 7. The issue is that I need Python 2.7.13 to use salt-cloud to clone a VM in VMware vCenter (it uses pyvmomi). CentOS comes with Python 2.7.5, which Salt has a known issue with (SSL doesn't work). I have tried to find a configuration file on the machine to change which Python version it should use, with no luck. I see two possible fixes here: somehow overwrite Python 2.7.5 with 2.7.13 so that it is the only Python available, OR, if possible, change the Python path Salt uses. Any ideas on how to do either of these would be appreciated (or another solution that I haven't mentioned above).

Answer: The Salt packages are built using the system Python and the system site-packages directory. If something doesn't work right, file a bug with Salt. You should avoid overwriting the stock Python, as that will break the system in many ways.
Title: How to run Python Scripts on Mac Terminal using Docker with Tensorflow?
[Q_Id 42,116,597 | A_Id 44,015,407 | created 2017-02-08T15:01 | tags: python,macos,docker,machine-learning,tensorflow | question score 1, 2,693 views, 3 answers | answer votes 2, score 0.132549, accepted: false | available count 1 | categories: System Administration and DevOps]

Question: I just got a Mac, having shifted from Windows, and installed TensorFlow using Docker; everything is working fine, but I want to run a Python script that I have from before. Is there any way to run a Python script in Docker on a Mac, using the terminal?

Answer: Let's assume you have a script my_script.py located at /Users/awesome_user/python_scripts/ on your Mac. By default the TensorFlow image's bash will place you at /notebooks. Run this command in your terminal:
docker run --rm -it -v /Users/awesome_user/python_scripts/:/notebooks gcr.io/tensorflow/tensorflow bash
This will map your local Mac folder /Users/awesome_user/python_scripts/ to the container's folder /notebooks. Then just run python my_script.py from bash; running ls should also reveal your folder's content.
Title: PyCharm PRO for Mac GAE upload not working
[Q_Id 42,120,541 | A_Id 42,124,236 | created 2017-02-08T18:09 | tags: python,macos,google-app-engine,pycharm | question score 0, 40 views, 1 answer | answer votes 0, score 0, accepted: false | available count 1 | categories: System Administration and DevOps; Web Development]

Question: My colleague and I both have Macs, and we both have PyCharm Professional, same version (2016.3.2) and build (December 28, 2016). We use a repository to keep our project directories in sync, and they are currently identical. Under Preferences, we both have "Enable Google App Engine support" checked, and we both have the same directory shown as "SDK directory", with the same files in that directory. When I choose menu option Tools > Google App Engine > Upload App Engine app..., the App Config Tool panel appears at the bottom of my PyCharm window. The first line is:
/usr/bin/python /Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/appcfg.py update .
and the last line is:
appcfg.py >
I can also run that update command from a Terminal window, and so can my colleague. But when he runs menu option Tools > Google App Engine > Upload App Engine app..., the App Config Tool panel only shows:
appcfg.py >
We've researched this extensively and made many attempts at repair, with no luck so far. Any help will be most appreciated.

Answer: I did a clean install recently and am not using AppEngineLauncher anymore; I'm not sure it even ships with the newer SDK. My GAE is located here:
/usr/local/google-cloud-sdk/platform/google_appengine
Looks like you might be using an older version of the App Engine SDK.
Title: How do I use python with sublime text 3?
[Q_Id 42,132,797 | A_Id 42,132,998 | created 2017-02-09T09:28 | tags: python,path,sublimetext3 | question score 0, 414 views, 2 answers | answer votes 0, score 0, accepted: false | available count 1 | categories: System Administration and DevOps; Python Basics and Environment]

Question: I have installed Anaconda on Sublime Text 3, and this error pops up each time I try to build a file:
'python' is not recognized as an internal or external command, operable program or batch file.
If it involves adding Python to the PATH (on Windows), could you please explain exactly what I need to add? I use Windows 7 Ultimate, Python 3.6.0.

Answer: You have to add Python to the environment variables.
Title: Exporting data from local standard environment and importing it in Datastore Emulator
[Q_Id 42,139,307 | A_Id 42,145,088 | created 2017-02-09T14:29 | tags: google-app-engine,google-cloud-datastore,google-app-engine-python | question score 0, 344 views, 1 answer | answer votes 1, score 0.197375, accepted: false | available count 1 | categories: System Administration and DevOps; Web Development]

Question: We have two App Engine apps which read/save to the same datastore (that is, the same project). The datastore is actually the way they "transfer data" to each other. One of the apps runs in the standard environment, and the other runs in the flexible environment. For the flexible environment, to run local tests on my machine without using Google's datastore servers, I have to use the Datastore Emulator, which is configured already. What I would like now is a simple way to export data saved by the standard environment app (created using dev_appserver.py) and import it into the Datastore Emulator. I would NOT like to push the data to Google's servers and export it from there, if that can be avoided; I want to export from the database that ran on my local machine. Is there a feature/library which might help me with this task?

Answer: In my tests, I found that the database files created by the App Engine dev server and the Datastore Emulator are compatible. I was able to copy the local_db.bin from the App Engine database to replace the same file in the Datastore Emulator's data directory, and I was able to access the data.
Title: Installing Pyomo on a Mac with Anaconda installed
[Q_Id 42,150,448 | A_Id 42,165,867 | created 2017-02-10T02:21 | tags: macos,python-3.x,anaconda,pyomo | question score 0, 726 views, 1 answer | answer votes 0, score 1.2, accepted: true | available count 1 | categories: System Administration and DevOps; Python Basics and Environment]

Question: I am new to Mac, having been well-versed in PCs for over 20 years. Unfortunately, the ease with which I can get "under the hood" on a PC has been nigh on impossible for me to sort out intuitively on a Mac (ironic, isn't it?). In any case, here is my situation: I am looking to install a number of open-source analyst-centric tools on my new Mac, including Python, R, and Pyomo. I am doing some home testing to explore the viability of these tools for an enterprise solution on a work network. As such, I am looking at Anaconda Navigator as a potential one-stop shop for managing a variety of tools. I have successfully installed Anaconda 4.3 with a Python 3.6 environment on the Mac, but I am running into trouble installing (or rather finding) Pyomo. I attempted a "conda" install of Pyomo via the terminal shell but got an error; I then attempted a "pip" install, which apparently worked. Unfortunately, I have no idea how to invoke Pyomo, either from the OS X interface or from Anaconda. This is partially due to my inexperience with the OS X system and how to navigate the file and/or PATH structure. As I am attempting to evaluate Anaconda, how can I set up Pyomo through the Anaconda Navigator shell? I have attempted importing a new environment, but cannot find a specification file, again due to my inability to navigate the OS X file system. All installations were completed using default settings.

Answer: Pyomo is a Python package, so the way you use it is by importing it in Python scripts and executing those scripts with the Python interpreter that you installed Pyomo into. If you want to use the pyomo command to solve a model file (rather than creating a Pyomo solver object in your Python script and running it directly), you will have to add the bin location of your Anaconda installation to your PATH. I do this on my Mac by adding a line like the following to ~/.bash_profile:
export PATH=/Users/gabe/<Anaconda-installation-directory>/bin:$PATH
This will add the location to the beginning of your PATH, causing the Anaconda Python to be executed by default from your terminal (rather than the default system Python). This is also the location that pip will install Pyomo-related executables into (assuming you used the pip installed with Anaconda and not the pip associated with some other Python installation).
How to update python functions in airflow without the need to restart airflow webserver | 42,405,900 | -1 | 5 | 6,293 | 0 | python,airflow | This has been an issue with the current version. What I usually do is to duplicate the DAG and change its name so it reflects in the web server. As soon as I finish developing I keep the last renamed and delete the old ones. | 0 | 1 | 0 | 0 | 2017-02-10T14:36:00.000 | 3 | -0.066568 | false | 42,161,928 | 1 | 0 | 0 | 1 | I'm learning to use airflow to schedule some python ETL processes. Each time I update my python code I have to restart the webserver and also rename the DAG before code changes are picked up by airflow. Is there anyway around this, especially so I dont have to be renaming my DAG each time I make changes? |
Purpose of opening and closing files python | 42,170,966 | 0 | 1 | 70 | 0 | python | Bash command is actually a small script or program, it's supposed to provide end to end operations.
For programming, you'll use more basic unit operations (open, close, read, write, rewind, seek) than a set of operations (copy, delete). It's true both for Python, C, etc.
If you go deeper into operating system coding, you will probably need operations to manipulate file representation details such as file handler and storage system.
It's different level of abstraction for handle problems of different granularity. | 0 | 1 | 0 | 0 | 2017-02-11T00:28:00.000 | 1 | 0 | false | 42,170,796 | 1 | 0 | 0 | 1 | What is the logic behind python requiring that a file be opened for write or append before being written to? Is there an advantage of doing this compared with something like bash which can just directly write to the file with something like: print 'Hello World' >> output.txt? |
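The distinction can be seen in a few lines: bash's >> performs the whole open-write-close cycle on every redirection, while Python exposes the individual steps so one file object can serve many writes and be closed deterministically. A minimal sketch:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "output.txt")

# Explicit unit operations: open once, write many times, close once.
f = open(path, "a")          # "a" = append, like bash's >>
f.write("Hello World\n")
f.write("Hello again\n")     # no reopen needed between writes
f.close()                    # flushes buffers and releases the handle

# The idiomatic form: a context manager closes even on exceptions.
with open(path) as f:
    print(f.read(), end="")
```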
Title: How to uninstall Anaconda completely from macOS
[Q_Id 42,182,706 | A_Id 49,156,868 | created 2017-02-11T23:59 | tags: python,macos,anaconda,uninstallation | question score 188, 380,553 views, 12 answers | answer votes 2, score 0.033321, accepted: false | available count 4 | categories: System Administration and DevOps; Python Basics and Environment]

Question: How can I completely uninstall Anaconda from macOS Sierra and revert back to the original Python? I have tried using conda-clean -yes, but that doesn't work. I also removed the stuff in ~/.bash_profile, but it still uses the Anaconda Python and I can still run the conda command.

Answer: This is one more place where Anaconda had an entry that was breaking my Python install after removing Anaconda; hoping this helps someone else. If you are using yarn, I found this entry in my .yarnrc file in ~/"username":
python "/Users/someone/anaconda3/bin/python3"
Removing this line fixed the last place that needed changing for complete removal. I am not sure how that entry was added, but it helped.
Another answer (A_Id 71,133,391, votes 1, score 0.016665): None of these solutions worked for me. It turned out I had to remove all the hidden files, which you can reveal with ls -a. My .zshrc file had some Anaconda references in it that needed to be deleted.
Another answer (A_Id 50,745,801, votes 0, score 0): Adding export PATH="/Users/<username>/anaconda/bin:$PATH" (or export PATH="/Users/<username>/anaconda3/bin:$PATH" if you have Anaconda 3) to my ~/.bash_profile file fixed this issue for me.
Another answer (A_Id 51,944,028, votes 2, score 0.033321): After performing the very helpful suggestions from both spicyramen and jkysam without immediate success, a simple restart of my Mac was needed to make the system recognize the changes. Hope this helps someone!
Title: Pip does not work after upgrade to ubuntu-16.10
[Q_Id 42,184,792 | A_Id 42,346,675 | created 2017-02-12T06:18 | tags: python-2.7,python-3.x,ubuntu,pip | question score 2, 1,721 views, 3 answers | answer votes 1, score 0.066568, accepted: false | available count 2 | categories: System Administration and DevOps; Python Basics and Environment]

Question: Running any command along with pip gives the following error; even the command pip -V produces it. I read that the error is due to setuptools version 31.0.0 and that the version should be lower than 28.0.0, but my setuptools version is 26.1.1 and it still gives the same error.

Traceback (most recent call last):
  File "/usr/local/bin/pip", line 7, in <module>
    from pip import main
  File "/usr/local/lib/python3.5/dist-packages/pip/__init__.py", line 26, in <module>
    from pip.utils import get_installed_distributions, get_prog
  File "/usr/local/lib/python3.5/dist-packages/pip/utils/__init__.py", line 27, in <module>
    from pip._vendor import pkg_resources
  File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 3018, in <module>
    @_call_aside
  File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 3004, in _call_aside
    f(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 3046, in _initialize_master_working_set
    dist.activate(replace=False)
  File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2578, in activate
    declare_namespace(pkg)
  File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2152, in declare_namespace
    _handle_ns(packageName, path_item)
  File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2092, in _handle_ns
    _rebuild_mod_path(path, packageName, module)
  File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2121, in _rebuild_mod_path
    orig_path.sort(key=position_in_sys_path)
AttributeError: '_NamespacePath' object has no attribute 'sort'

Answer: Upgrade your setuptools:
wget https://bootstrap.pypa.io/ez_setup.py -O - | sudo python3
Generally, sudo combined with pip is considered harmful; avoid it when your system is not already broken.
Pip does not work after upgrade to ubuntu-16.10 | 43,832,299 | 2 | 2 | 1,721 | 0 | python-2.7,python-3.x,ubuntu,pip | The only solution I could find is reinstalling pip. Run these commands on your terminal
wget https://bootstrap.pypa.io/get-pip.py
sudo -H python get-pip.py --prefix=/usr/local/
However, this works only for pip, not pip3! | 0 | 1 | 0 | 0 | 2017-02-12T06:18:00.000 | 3 | 1.2 | true | 42,184,792 | 1 | 0 | 0 | 2 | Running a command along with pip gives the following error. Even the command pip -V produces the following error.
I read that the error is due to setuptools version 31.0.0 and it should be lower than 28.0.0. But the version of my setuptools is 26.1.1 and it still gives the same error.
Traceback (most recent call last):
File "/usr/local/bin/pip", line 7, in <module>
from pip import main
File "/usr/local/lib/python3.5/dist-packages/pip/__init__.py", line 26, in <module>
from pip.utils import get_installed_distributions, get_prog
File "/usr/local/lib/python3.5/dist-packages/pip/utils/__init__.py", line 27, in <module>
from pip._vendor import pkg_resources
File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 3018, in <module>
@_call_aside
File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 3004, in _call_aside
f(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 3046, in _initialize_master_working_set
dist.activate(replace=False)
File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2578, in activate
declare_namespace(pkg)
File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2152, in declare_namespace
_handle_ns(packageName, path_item)
File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2092, in _handle_ns
_rebuild_mod_path(path, packageName, module)
File "/usr/local/lib/python3.5/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2121, in _rebuild_mod_path
orig_path.sort(key=position_in_sys_path)
AttributeError: '_NamespacePath' object has no attribute 'sort' |
How to set a 'system level' environment variable? | 42,213,924 | 1 | 1 | 834 | 0 | python,linux,bash,export,subprocess | This isn't a thing you can do. Your subprocess call creates a subshell and sets the env var there, but doesn't affect the current process, let alone the calling shell. | 0 | 1 | 0 | 0 | 2017-02-13T21:23:00.000 | 3 | 0.066568 | false | 42,213,820 | 0 | 0 | 0 | 2 | I'm trying to generate an encryption key for a file and then save it for use next time the script runs. I know that's not very secure, but it's just an interim solution for keeping a password out of a git repo.
subprocess.call('export KEY="password"', shell=True) returns 0 and does nothing.
Running export KEY="password" manually in my bash prompt works fine on Ubuntu. |
How to set a 'system level' environment variable? | 42,213,930 | 2 | 1 | 834 | 0 | python,linux,bash,export,subprocess | subprocess.call('export KEY="password"', shell=True)
creates a shell, sets your KEY and exits: accomplishes nothing.
Environment variables do not propagate to parent process, only to child processes. When you set the variable in your bash prompt, it is effective for all the subprocesses (but not outside the bash prompt, for a quick parallel)
The only way to make it using python would be to set the password using a master python script (using os.putenv("KEY","password") or os.environ["KEY"]="password") which calls sub-modules or processes. | 0 | 1 | 0 | 0 | 2017-02-13T21:23:00.000 | 3 | 0.132549 | false | 42,213,820 | 0 | 0 | 0 | 2 | I'm trying to generate an encryption key for a file and then save it for use next time the script runs. I know that's not very secure, but it's just an interim solution for keeping a password out of a git repo.
subprocess.call('export KEY="password"', shell=True) returns 0 and does nothing.
Running export KEY="password" manually in my bash prompt works fine on Ubuntu. |
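The os.environ suggestion from the answer above can be sketched concretely — a minimal example (standard library only) showing that a variable set via os.environ is inherited by child processes, while the parent shell never sees it:

```python
import os
import subprocess
import sys

# Set the variable in the *current* Python process; every process
# spawned afterwards inherits a copy of this environment.
os.environ["KEY"] = "password"

# A child process can read the variable...
out = subprocess.check_output(
    [sys.executable, "-c", "import os; print(os.environ['KEY'])"]
)
print(out.decode().strip())  # -> password
# ...but the shell that launched this script is never affected.
```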
jupyter does not show all folders in my home directory | 63,427,616 | -2 | 4 | 6,607 | 0 | python,jupyter-notebook | Open command promt,
pip
If you use pip, you can install it with:
pip install notebook
Congratulations, you have installed Jupyter Notebook! To run the notebook, run the following command at the Terminal (Mac/Linux) or Command Prompt (Windows):
jupyter notebook | 0 | 1 | 0 | 0 | 2017-02-13T22:43:00.000 | 2 | -0.197375 | false | 42,214,900 | 1 | 0 | 0 | 1 | When I run my jupyter notebook, the home folder displayed in jupyter is always my home directory even if I start my notebook from a different directory. Also not all folders in my home directory are displayed. I tried to change the access of the unshown folders by using chmod -R 0755 foldername, however the folders do not show when I run jupyter.
I want all the folders in my home directory to show.
I am using ubuntu. |
Installing python modules in production meteor app hosted with galaxy | 42,284,125 | 0 | 1 | 192 | 0 | python,node.js,meteor,meteor-galaxy | It really depends on how horrible you want to be :)
No matter what, you'll need a well-specified requirements.txt or setup.py. Once you can confirm your scripts can run on something other than a development machine, perhaps by using a virtualenv, you have a few options:
I would recommend hosting your Python scripts as their own independent app. This sounds horrible, but in reality, with Flask, you can basically make them executable over the Internet with very, very little IT. Indeed, Flask is supported as a first-class citizen in Google App Engine.
Alternatively, you can poke at what version of Linux the Meteor containers are running and ship a binary built with PyInstaller in your private directory. | 0 | 1 | 0 | 1 | 2017-02-14T02:02:00.000 | 1 | 0 | false | 42,216,640 | 0 | 0 | 1 | 1 | I have a meteor project that includes python scripts in our private folder of our project. We can easily run them from meteor using exec, we just don't know how to install python modules on our galaxy server that is hosting our app. It works fine running the scripts on our localhost since the modules are installed on our computers, but it appears galaxy doesn't offer a command line or anything to install these modules. We tried creating our own command line by calling exec commands on the meteor server, but it was unable to find any modules. For example when we tried to install pip, the server logged "Unable to find pip".
Basically we can run the python scripts, but since they rely on modules, galaxy throws errors and we aren't sure how to install those modules. Any ideas?
Thanks! |
Undefined symbol after Running get-pip on fresh Python source installation | 42,234,395 | 1 | 1 | 465 | 0 | python,c,linux,pip,centos6 | After digging a bit more, I found the problem.
The symbol was undefined in _io.so. I ldd this library and learned that it was pointing to an older libpython2.7.so (which is the library that happens to define the symbol in its new version). This was because I had the old /opt/python/lib in my LDD_LIBRARY_PATH:
linux-vdso.so.1 => (0x00007fffb68d5000)
libpython2.7.so.1.0 => /opt/python/lib/libpython2.7.so.1.0 (0x00007f4240492000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f424025f000)
libc.so.6 => /lib64/libc.so.6 (0x00007f423fecb000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f423fcc7000)
libutil.so.1 => /lib64/libutil.so.1 (0x00007f423fac3000)
libm.so.6 => /lib64/libm.so.6 (0x00007f423f83f000)
/lib64/ld-linux-x86-64.so.2 (0x000000337b000000)
I fixed this and it solved the problem. | 0 | 1 | 0 | 1 | 2017-02-14T18:44:00.000 | 1 | 0.197375 | false | 42,233,903 | 1 | 0 | 0 | 1 | I've installed python 2.7.13 from sources according to their readme file on CentOS 6.6. (just following the configure/make procedure). I run these python from the command line and seems to work fine. However, as it doesn't come with pip and setuptools, I downloaded get-pip.py and tried to run it this way:
/share/apps/Python-2.7.13/bin/python2.7 get-pip.py
Then I get the following error:
Traceback (most recent call last):
File "get-pip.py", line 28, in <module>
import tempfile
File "/share/apps/Python-2.7.13/lib/python2.7/tempfile.py", line 32, in <module>
import io as _io
File "/share/apps/Python-2.7.13/lib/python2.7/io.py", line 51, in <module>
import _io
ImportError: /share/apps/Python-2.7.13/lib/python2.7/lib-dynload/_io.so: undefined symbol: _PyCodec_LookupTextEncoding
I tried the same with Python 2.7.12 with identical results.
However, if I run get-pip.py with a prebuilt python 2.7.12 release, it works fine.
EDIT: I checked the library /share/apps/Python-2.7.13/lib/python2.7/lib-dynload/_io.so with nm -g and the symbol seems to be there (I found U _PyCodec_LookupTextEncoding)
Any help will be greatly appreciated,
thanks in advance,
Bernabé |
How do I use protobuf on a Mac with Python? | 42,259,034 | 0 | 2 | 1,350 | 0 | python,macos,protocol-buffers | I was/am using PyCharm. The protobuf library doesn't automatically get linked to the PyCharm interpreter. If you run python script.py from the command line, there are no issues with missing modules. | 0 | 1 | 0 | 0 | 2017-02-14T21:27:00.000 | 1 | 1.2 | true | 42,236,545 | 0 | 0 | 0 | 1 | I installed the package and ran all of the correct commands. I did this for 2.6.1, 2.7, and 3.2. Between each I subsequently uninstalled the previous version. Within each version I went into the python folder and ran the python installation commands.
I ran brew install protobuf (and subsequently uninstalled it).
I ran sudo pip install protobuf (and subsequently uninstalled it).
The issue I am constantly getting is that the generated .py protobuf file calls imports from google.protobuf, but I am returned an error: ImportError: No module named google.protobuf
I then copy in the google folder (which you shouldn't have to do) and it stops returning that error, but the file and examples won't work. |
how to debug python fabric using pycharm | 42,261,986 | 1 | 2 | 1,096 | 0 | python,pycharm,fabric | I haven't used this setup on Windows, but on Linux/Mac, it's fairly straightforward:
Create a new Run Configuration in PyCharm for a Python script (when you click the "+" button, select the one labelled "Python")
The "Configuration" tab should be open.
For the "Script" field, enter the full path to fab.exe, like C:\Python27\.....\fab.exe or whatever it is.
For Script parameters, just try -l, to list the available commands. You'll tweak this later, and fill it in with whatever tasks you'd run from the command line, like "fab etc..."
For the "Working directory" field, you'll want to set that to the directory that contains your fabfile.
And it's about as easy as that, at least on *nix. Sorry that I don't have a Windows setup, but do let us know if you do have any issues with the setup described above. | 0 | 1 | 0 | 0 | 2017-02-15T11:51:00.000 | 3 | 0.066568 | false | 42,248,607 | 1 | 0 | 0 | 1 | There are some posts on SO and tell me to use fab-script.py as startup script for pycharm. It's exactly what I used before. Now when I upgrade fabric to latest version, fab-script disappeared, and only fab.exe left there. I tried a lot of other ways, but still failed to launch debugger in pycharm. |
Apple Swift is overriding Openstack swift package | 56,529,866 | 1 | 0 | 416 | 0 | swift,python-2.7 | For me, Apple Swift is under /usr/bin/swift and python-swiftclient is under /usr/local/bin/swift. Explicitly invoking it as /usr/local/bin/swift works.
Requirement already satisfied: python-swiftclient in /Library/Python/2.7/site-packages
Requirement already satisfied: requests>=1.1 in /Library/Python/2.7/site-packages (from python-swiftclient)
Requirement already satisfied: six>=1.5.2 in /Library/Python/2.7/site-packages/six-1.10.0-py2.7.egg (from python-swiftclient)
Requirement already satisfied: futures>=3.0; python_version == "2.7" or python_version == "2.6" in /Library/Python/2.7/site-packages (from python-swiftclient)
However, I am unable to find python swift anywhere. Please let me know how to resolve this.
Many Thanks
Chen |
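To see which of two same-named executables wins on a given PATH (the shadowing described above), the standard library's shutil.which performs the same lookup the shell does:

```python
import shutil

# shutil.which walks the directories in PATH in order and returns the
# first matching executable, exactly like the shell's lookup -- so if
# /usr/bin precedes /usr/local/bin, Apple's swift shadows OpenStack's.
print(shutil.which("swift"))                 # path of the winning binary, or None
print(shutil.which("no-such-tool-xyz-123"))  # None: nothing on PATH matches
```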
Running cronjob every 5 minutes but stopped after first execution? | 42,271,741 | 0 | 0 | 227 | 0 | python,cron,crontab | A simple solution is to set some Bash env variable MONITORING=true and let your python script check that variable using os.environ["MONITORING"]. If that variable is "true", check whether the server is up or down; otherwise don't check anything. Once the server is found down, set that variable to false from the script, like os.environ["MONITORING"] = "false". So it won't send emails until you set that env variable to true again. | 0 | 1 | 0 | 1 | 2017-02-16T10:32:00.000 | 2 | 1.2 | true | 42,271,330 | 0 | 0 | 0 | 2 | I have python script that checks if the server is up or down, and if it's down it sends out an email along with few system logs.
What I want is to keep checking for the server every 5 minutes, so I put the cronjob as follows:
*/5 * * * * /python/uptime.sh
So whenever the server's down, it sends an email. But I want the script to stop executing (sending more emails) after the first one.
Can anyone help me out with how to do this?
Thanks. |
Running cronjob every 5 minutes but stopped after first execution? | 42,295,869 | 0 | 0 | 227 | 0 | python,cron,crontab | Write a trivial script that runs forever, e.g. "mailtrigger.py" containing an empty while True loop
start it in the background with nohup python mailtrigger.py & from a shell
once the server is found down, check whether mailtrigger.py is running; if it is, send the mail and then terminate mailtrigger.py (kill its process id)
your next iterations will not send mails, since mailtrigger.py is no longer running. | 0 | 1 | 0 | 1 | 2017-02-16T10:32:00.000 | 2 | 0 | false | 42,271,330 | 0 | 0 | 0 | 2 | I have python script that checks if the server is up or down, and if it's down it sends out an email along with few system logs.
What I want is to keep checking for the server every 5 minutes, so I put the cronjob as follows:
*/5 * * * * /python/uptime.sh
So whenever the server's down, it sends an email. But I want the script to stop executing (sending more emails) after the first one.
Can anyone help me out with how to do this?
Thanks. |
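Note that each cron run starts a fresh process, so a variable set inside the script does not survive into the next run; a small state file on disk does. A sketch of the "alert only once per outage" logic, with a hypothetical flag-file path:

```python
import os
import tempfile

# Hypothetical flag file remembering that an alert was already sent for
# the current outage; the path is an assumption, not from the question.
FLAG = os.path.join(tempfile.gettempdir(), f"server_down_{os.getpid()}.flag")

def check_and_alert(server_up):
    """Return True exactly once per outage (i.e. when the e-mail should go out)."""
    if server_up:
        if os.path.exists(FLAG):
            os.remove(FLAG)   # recovered: re-arm the alert for the next outage
        return False
    if os.path.exists(FLAG):
        return False          # already alerted for this outage
    open(FLAG, "w").close()   # remember that we alerted
    return True

first = check_and_alert(False)     # first failing run -> send the e-mail
second = check_and_alert(False)    # subsequent runs -> stay quiet
recovered = check_and_alert(True)  # recovery clears the flag
print(first, second, recovered)    # -> True False False
```

A cron entry would simply call this script every 5 minutes; the flag file carries the state between runs.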
How to install tkinter in python 3.6 in CentOS release 6.4 | 42,347,600 | 0 | 3 | 2,402 | 0 | tkinter,python-3.6 | If you want to install tkinter in order to use matplotlib you may try
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
It worked for me | 1 | 1 | 0 | 0 | 2017-02-17T10:12:00.000 | 1 | 0 | false | 42,295,171 | 0 | 0 | 0 | 1 | I have started to play with Python and I went directly to Python 3.6.
I have two Python environments now in my system: Python 2..6.6 and Python 3.6
Python 2.6.6 is under:
which python
/usr/bin/python
And Python 3.6 is under /opt/python3/bin
My problem is that if I try to import tkinter in Python 3.6 it does not work:
./python3.6
Python 3.6.0 (default, Feb 16 2017, 17:37:36)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-3)] on linux
Type "help", "copyright", "credits" or "license" for more information.
import tkinter
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/python3/lib/python3.6/tkinter/__init__.py", line 36, in <module>
import _tkinter # If this fails your Python may not be configured for Tk
ModuleNotFoundError: No module named '_tkinter'
If I do in Python 2.6 it works:
python
Python 2.6.6 (r266:84292, Aug 18 2016, 15:13:37)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-17)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
import Tkinter
PLEASE NOTE, I know that the module is lower case t in Python 3 so instead of import Tkinter, I am typing import tkinter.
My question is: How do I install tkinter in Python 3 in CentOS.
This I what have tried so far:
yum install python3-tk
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
* base: mirror.us.leaseweb.net
* extras: mirror.us.leaseweb.net
* updates: mirror.us.leaseweb.net
Setting up Install Process
No package python3-tk available.
Error: Nothing to do
How do I install in CentOS 6 the module tkinter and make Python 3 able to use it?
Thanks for any feedback. |
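On CentOS, a Python built from source silently skips the _tkinter extension when the Tk headers were absent at build time; installing tcl-devel and tk-devel with yum and re-running ./configure && make fixes it. A quick way to test whether a given interpreter was built with Tk support:

```python
import importlib.util

# The compiled _tkinter extension only exists if the Tk headers were
# present when this interpreter was built.
spec = importlib.util.find_spec("_tkinter")
print("Tk support built in" if spec else "built without Tk headers")
```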
Kafka optimal retention and deletion policy | 42,328,553 | 17 | 11 | 19,180 | 0 | apache-kafka,kafka-consumer-api,kafka-producer-api,kafka-python | Apache Kafka uses Log data structure to manage its messages. Log data structure is basically an ordered set of Segments whereas a Segment is a collection of messages. Apache Kafka provides retention at Segment level instead of at Message level. Hence, Kafka keeps on removing Segments from its end as these violate retention policies.
Apache Kafka provides us with the following retention policies -
Time Based Retention
Under this policy, we configure the maximum time a Segment (hence messages) can live for. Once a Segment has spanned configured retention time, it is marked for deletion or compaction depending on configured cleanup policy. Default retention time for Segments is 7 days.
Here are the parameters (in decreasing order of priority) that you can set in your Kafka broker properties file:
Configures retention time in milliseconds
log.retention.ms=1680000
Used if log.retention.ms is not set
log.retention.minutes=1680
Used if log.retention.minutes is not set
log.retention.hours=168
Size based Retention
In this policy, we configure the maximum size of a Log data structure for a Topic partition. Once Log size reaches this size, it starts removing Segments from its end. This policy is not popular as this does not provide good visibility about message expiry. However it can come handy in a scenario where we need to control the size of a Log due to limited disk space.
Here are the parameters that you can set in your Kafka broker properties file:
Configures maximum size of a Log
log.retention.bytes=104857600
So according to your use case you should configure log.retention.bytes so that your disk should not get full. | 0 | 1 | 0 | 0 | 2017-02-18T04:35:00.000 | 1 | 1.2 | true | 42,311,100 | 0 | 0 | 0 | 1 | I am fairly new to kafka so forgive me if this question is trivial. I have a very simple setup for purposes of timing tests as follows:
Machine A -> writes to topic 1 (Broker) -> Machine B reads from topic 1
Machine B -> writes message just read to topic 2 (Broker) -> Machine A reads from topic 2
Now I am sending messages of roughly 1400 bytes in an infinite loop filling up the space on my small broker very quickly. I'm experimenting with setting different values for log.retention.ms, log.retention.bytes, log.segment.bytes and log.segment.delete.delay.ms. First I set all of the values to the minimum allowed, but it seemed this degraded performance, then I set them to the maximum my broker could take before being completely full, but again the performance degrades when a deletion occurs. Is there a best practice for setting these values to get the absolute minimum delay?
Thanks for the help! |
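When tuning log.retention.bytes for a small broker like the one in the question, a quick estimate of how fast the log grows helps. A sketch with assumed numbers — the rate and retention below are placeholders; only the ~1400-byte message size comes from the question:

```python
# Back-of-the-envelope sizing for log.retention.bytes: how much time's
# worth of data does one partition hold before size-based retention
# starts dropping segments?
msg_size_bytes = 1400          # ~size of each message in the question
msgs_per_sec = 5000            # assumed producer rate (placeholder)
retention_bytes = 104_857_600  # log.retention.bytes = 100 MiB (placeholder)

bytes_per_sec = msg_size_bytes * msgs_per_sec
seconds_until_full = retention_bytes / bytes_per_sec
print(f"{seconds_until_full:.0f} s of data retained per partition")
```

Multiplying retention_bytes by the partition count gives the worst-case disk footprint per topic, which is the number to keep below the broker's free space.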
nltk for python 3.6 in windows64 | 43,109,101 | 5 | 3 | 21,144 | 0 | python-3.x,nltk | I had the same problem as you, but I accidentally found pip.exe in my python directory, so I navigated to said directory with CMD and ran the command pip install -U nltk and it worked. | 0 | 1 | 0 | 0 | 2017-02-18T10:09:00.000 | 6 | 0.16514 | false | 42,313,776 | 1 | 0 | 0 | 4 | I'm new to python, I'm using Windows 10 and have python36 and I basically have to use nltk for my project and i basically have two questions.
1. I heard pip is installed automatically for versions 3+, but when I type pip install nltk in the command prompt I get the following error, even though I added its path "C:\Users\dheeraj\AppData\Local\Programs\Python\Python36\Scripts\pip36" in advanced settings. In the above path I tried both pip36 and pip; in both cases the result is the same.
'pip' is not recognized as an internal or external command,"
2. On www.nltk.org I found nltk for Mac, Unix and 32-bit Windows but not for 64-bit Windows. Does that mean it doesn't support 64-bit, or is there any way for me to install nltk?
nltk for python 3.6 in windows64 | 43,700,474 | 3 | 3 | 21,144 | 0 | python-3.x,nltk | Run the Python interpreter and type the commands:
>>> import nltk
>>> nltk.download()
A new window should open, showing the NLTK Downloader. Click on the File menu and select Change Download Directory. For central installation, set this to C:\nltk_data (Windows), /usr/local/share/nltk_data (Mac), or /usr/share/nltk_data (Unix). Next, select the packages or collections you want to download.
If you did not install the data to one of the above central locations, you will need to set the NLTK_DATA environment variable to specify the location of the data. (On a Windows machine, right click on “My Computer” then select Properties > Advanced > Environment Variables > User Variables > New...)
Test that the data has been installed as follows. (This assumes you downloaded the Brown Corpus):
>>> from nltk.corpus import brown
>>> brown.words()
['The', 'Fulton', 'County', 'Grand', 'Jury', 'said', ...]
Installing via a proxy web server
If your web connection uses a proxy server, you should specify the proxy address as follows. In the case of an authenticating proxy, specify a username and password. If the proxy is set to None then this function will attempt to detect the system proxy.
>>> nltk.set_proxy('http://proxy.example.com:3128', ('USERNAME', 'PASSWORD'))
>>> nltk.download()
Command line installation
The downloader will search for an existing nltk_data directory to install NLTK data. If one does not exist it will attempt to create one in a central location (when using an administrator account) or otherwise in the user’s filespace. If necessary, run the download command from an administrator account, or using sudo. The recommended system location is C:\nltk_data (Windows); /usr/local/share/nltk_data (Mac); and /usr/share/nltk_data (Unix). You can use the -d flag to specify a different location (but if you do this, be sure to set the NLTK_DATA environment variable accordingly).
Run the command python -m nltk.downloader all. To ensure central installation, run the command sudo python -m nltk.downloader -d /usr/local/share/nltk_data all.
Windows: Use the “Run...” option on the Start menu. Windows Vista users need to first turn on this option, using Start -> Properties -> Customize to check the box to activate the “Run...” option. | 0 | 1 | 0 | 0 | 2017-02-18T10:09:00.000 | 6 | 0.099668 | false | 42,313,776 | 1 | 0 | 0 | 4 | I'm new to python, I'm using Windows 10 and have python36 and I basically have to use nltk for my project and i basically have two questions.
1. I heard pip is installed automatically for versions 3+, but when I type pip install nltk in the command prompt I get the following error, even though I added its path "C:\Users\dheeraj\AppData\Local\Programs\Python\Python36\Scripts\pip36" in advanced settings. In the above path I tried both pip36 and pip; in both cases the result is the same.
'pip' is not recognized as an internal or external command,"
2. On www.nltk.org I found nltk for Mac, Unix and 32-bit Windows but not for 64-bit Windows. Does that mean it doesn't support 64-bit, or is there any way for me to install nltk?
nltk for python 3.6 in windows64 | 48,652,887 | 1 | 3 | 21,144 | 0 | python-3.x,nltk | I recommend using Anaconda on Windows. Anaconda ships an nltk build for 64-bit Python. I'm currently using 64-bit Python 3.6.4 with nltk.
Under python shell run:
import nltk
nltk.download()
then the downloader will open in a new window and you can download what you want. | 0 | 1 | 0 | 0 | 2017-02-18T10:09:00.000 | 6 | 0.033321 | false | 42,313,776 | 1 | 0 | 0 | 4 | I'm new to python, I'm using Windows 10 and have python36 and I basically have to use nltk for my project and i basically have two questions.
1. I heard pip is installed automatically for versions 3+, but when I type pip install nltk in the command prompt I get the following error, even though I added its path "C:\Users\dheeraj\AppData\Local\Programs\Python\Python36\Scripts\pip36" in advanced settings. In the above path I tried both pip36 and pip; in both cases the result is the same.
'pip' is not recognized as an internal or external command,"
2. On www.nltk.org I found nltk for Mac, Unix and 32-bit Windows but not for 64-bit Windows. Does that mean it doesn't support 64-bit, or is there any way for me to install nltk?
nltk for python 3.6 in windows64 | 46,702,612 | 3 | 3 | 21,144 | 0 | python-3.x,nltk | Directly Search for pip folder and navigate throught that path example:
C:\Users\PAVAN\Environments\my_env\Lib\site-packages\pip>
Run cmd
and then run the command
pip install -U nltk | 0 | 1 | 0 | 0 | 2017-02-18T10:09:00.000 | 6 | 0.099668 | false | 42,313,776 | 1 | 0 | 0 | 4 | I'm new to python, I'm using Windows 10 and have python36 and I basically have to use nltk for my project and i basically have two questions.
1. I heard pip is installed automatically for versions 3+, but when I type pip install nltk in the command prompt I get the following error, even though I added its path "C:\Users\dheeraj\AppData\Local\Programs\Python\Python36\Scripts\pip36" in advanced settings. In the above path I tried both pip36 and pip; in both cases the result is the same.
'pip' is not recognized as an internal or external command,"
2. On www.nltk.org I found nltk for Mac, Unix and 32-bit Windows but not for 64-bit Windows. Does that mean it doesn't support 64-bit, or is there any way for me to install nltk?
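A way to sidestep the "'pip' is not recognized" PATH problem entirely — instead of hunting for pip.exe — is to run pip through the interpreter itself with python -m pip, which uses the pip belonging to that exact Python install regardless of PATH. A minimal check:

```python
import subprocess
import sys

# Running pip as "python -m pip" invokes the pip module of *this*
# interpreter, so it works even when pip.exe is not on PATH.
result = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True, text=True,
)
print(result.stdout.strip())  # e.g. "pip 21.x from ... (python 3.x)"
```

The same trick then installs nltk: python -m pip install -U nltk.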
Not able to install scrapy in my windows 10 x64 machine | 45,332,173 | 1 | 1 | 1,285 | 0 | python,windows,scrapy,pypi | Use pip3 instead of pip since you are using python3 | 0 | 1 | 0 | 0 | 2017-02-18T20:15:00.000 | 3 | 1.2 | true | 42,320,197 | 1 | 0 | 1 | 1 | I ran pip install scrapy in cmd,
it said Collecting scrapy and after a few seconds I got the following error:
Command "c:\python35\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\DELL\\AppData\\Local\\Temp\\pip-build-2nfj5t60\\Twisted\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\DELL\AppData\Local\Temp\pip-0bjk1w93-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\DELL\AppData\Local\Temp\pip-build-2nfj5t60\Twisted\
I am not able to understand this error.
Cant enable fail2ban jail sshd | 49,656,117 | 0 | 2 | 4,535 | 0 | python,centos6,systemd,fail2ban | I was able to fix this by editing the paths-common.conf file from:
default_backend = %(default/backend)s
to:
default_backend = pynotify or default_backend = auto | 0 | 1 | 0 | 0 | 2017-02-18T21:41:00.000 | 3 | 0 | false | 42,320,994 | 0 | 0 | 0 | 1 | When enabled sshd jail i see Starting fail2ban: ERROR NOK: ("Failed to initialize any backend for Jail 'sshd'",)
ERROR NOK: ('sshd',)
In logs :
ERROR Backend 'systemd' failed to initialize due to No module named systemd
ERROR Failed to initialize any backend for Jail 'sshd'
CentOS 6.7 does not have the systemd module.
CentOS 6.7, python 2.6 |
How to detect backup drive in windows using python? | 42,328,184 | 1 | 0 | 130 | 0 | python | Expanding on my comment
Is there a way to do that in python?
I think the short answer is: No. Not in Python, not in another language.
I want to write a python script that detects the backup drive
I don't think there's a way to do this. There's nothing inherent to a drive that could be used to detect whether a connected drive is intended by the user to be a "backup" drive or for something else. In other words, whether a drive is a "backup" drive or not is determined by the user's behavior, not by the properties of the drive itself.
There're flags that can be set when a drive gets formatted (e.g. whether it's a bootable drive or not, etc), but that's about it.
If we're talking about a method that's intended for your personal use only, then something that might work is the following:
Create a naming convention for your drives (i.e. their labels when formatting), such as making sure your backup drives have the word "backup" somewhere in it;
Make sure you never deviate from this naming convention;
Write a program that will iterate over your drives looking for the word "backup" in their names (a simple regular expression would work).
Obviously, this would only work as long as the convention is followed. This is not a solution that you can arbitrarily apply in other situations where this assumption does not hold.
makes sure it is an external disk not thumbdrive or dvdrom.
This one might be tricky. If you connect an external HDD into a USB plug, the system would know the drive's capacity and the fact that it's connected through the USB interface, but I think that's about it. | 0 | 1 | 0 | 0 | 2017-02-19T13:50:00.000 | 2 | 0.099668 | false | 42,328,024 | 0 | 0 | 0 | 1 | I have 2 drives, OS drive and the backup drive in windows.
I want to write a python script that detects the backup drive and returns the letter it is assigned and makes sure it is an external disk not thumbdrive or dvdrom. The letter assigned to the drive can vary.
Is there a way to do that in python? I have been searching through but to no avail. |
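The naming-convention idea from the answer above reduces to a regular-expression match over drive labels. A sketch using hypothetical (letter, label) pairs in place of a real Windows drive enumeration, which would come from an API such as win32api or WMI:

```python
import re

# Hypothetical (drive letter, volume label) pairs as a Windows API might
# report them; the labels below are made up for illustration.
drives = [
    ("C", "SYSTEM"),
    ("D", "PHOTOS_BACKUP"),
    ("E", "MUSIC"),
]

# Detection is just "does the label contain 'backup'?", per the
# naming convention the user commits to when formatting drives.
backup_pattern = re.compile(r"backup", re.IGNORECASE)
backup_drives = [letter for letter, label in drives if backup_pattern.search(label)]
print(backup_drives)  # -> ['D']
```

This only works as long as the convention is followed, exactly as the answer cautions.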
Python module in 'dist-packages' vs. 'site-packages' | 42,436,834 | 0 | 2 | 5,617 | 0 | python,debian | From what I learnt from IRC is that I should install the modules in 'dist-packages' only, assuming that the admin would have installed the python provided by Ubuntu repo only. | 0 | 1 | 0 | 0 | 2017-02-20T07:46:00.000 | 3 | 1.2 | true | 42,339,034 | 1 | 0 | 0 | 2 | I am building a deb package from source. The source used to install the modules in 'site-packages' in RHEL.
On Ubuntu, 'site-packages' doesn't work for me. Searching over the net, it says that python Ubuntu would require it in 'dist-packages'
But there are also references that python built from source would look in 'site-packages'
Now I am confused, where should my deb packages install the modules so that it works irrespective of python built from source or installed from Ubuntu repo |
Python module in 'dist-packages' vs. 'site-packages' | 46,771,967 | 11 | 2 | 5,617 | 0 | python,debian | dist-packages is a Debian convention that is present in distros based on Debian. When we install a package using the package manager like apt-get these packages are installed to dist-packages. Likewise, if you install using pip and pip is installed via package manager then these packages will be installed in dist-packages.
If you build python from source then pip comes with it, now if you install a package using this pip it'll be installed into site-packages.
So It depends on which python binary you are using if you are using the binary that comes from package manager it will search in dist-packages and if you are using a binary from manual install it'll search in site-packages. | 0 | 1 | 0 | 0 | 2017-02-20T07:46:00.000 | 3 | 1 | false | 42,339,034 | 1 | 0 | 0 | 2 | I am building a deb package from source. The source used to install the modules in 'site-packages' in RHEL.
On Ubuntu, 'site-packages' doesn't work for me. Searching over the net, it says that python Ubuntu would require it in 'dist-packages'
But there are also references that python built from source would look in 'site-packages'
Now I am confused, where should my deb packages install the modules so that it works irrespective of python built from source or installed from Ubuntu repo |
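You can ask any given interpreter where it installs pure-Python packages: sysconfig reports the "purelib" directory, which ends in dist-packages on Debian/Ubuntu's patched Python and in site-packages on a vanilla source build:

```python
import os
import sys
import sysconfig

# "purelib" is the directory where pure-Python modules land for *this*
# interpreter -- run it under each python binary to see where a deb
# package's modules should go for that interpreter.
purelib = sysconfig.get_paths()["purelib"]
print(sys.executable)
print(purelib)  # .../dist-packages on Debian's python, .../site-packages otherwise
```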
is Python called differently via Jenkins? | 42,340,973 | 1 | 0 | 859 | 0 | python,jenkins | Jenkins is running your jobs as a different user, and typically on a different host (unless you let your Jenkins run on your local host and don't use slaves to run your jobs). Resulting from these two aspects you will have also a different environment (variables like HOME, PATH, PYTHONPATH, and all the other environment stuff like locales etc.).
To find out the host, let a shell in the job execute hostname.
To find out the Jenkins user, let a shell in the job execute id.
To find out the environment, let a shell in the job execute set (which will produce a lot of output).
My guess would be that in your case the modules you are trying to use are not installed on the Jenkins host. | 0 | 1 | 0 | 1 | 2017-02-20T08:33:00.000 | 1 | 0.197375 | false | 42,339,728 | 0 | 0 | 0 | 1 | I did notice something strange on my python server running jenkins. Basically if I run a script, which has dependencies (I use python via Brew), from console, it works fine.
But when I run it via Jenkins, I get an error because that package was not found.
When I call the script, I use python -m py.test -s myscript.py
Is there a gotcha when using Jenkins, and calling python as I do? I would expect that a command called in the bash section of Jenkins would execute as if it were running in the console, but from the result that I get, it seems that is not true.
When I check for which python, I get back /usr/local/bin/python; which has the symlink to the brew version. If I echo $PYTHONPATH I get back the same path.
One interesting thing, though, is that if on Jenkins I explicitly call either /usr/local/bin/python -m or /usr/bin/python, I get an error saying that there is no python there; but if I just use python -m, it works. This makes no sense to me.
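The same checks can be done from Python itself; dropping a few lines like these into the Jenkins build step (and running them once from your console for comparison) shows exactly which interpreter and environment each side is really using — a plain diagnostic, nothing Jenkins-specific in it:

```python
import os
import sys

# Which binary is actually running this script, and what it can import from.
print("executable:", sys.executable)
print("version:", sys.version.split()[0])
print("PATH:", os.environ.get("PATH", "<unset>"))
print("PYTHONPATH:", os.environ.get("PYTHONPATH", "<unset>"))
print("first sys.path entries:", sys.path[:3])
```

If the two outputs differ, that difference is the "gotcha".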
Unable to run pyspark | 42,718,191 | 1 | 22 | 21,967 | 0 | python,pyspark | The possible issues faced when running Spark on Windows are not giving the proper path, or using Python 3.x to run Spark.
So,
Do check whether the path given for Spark (i.e. /usr/local/spark) is proper or not.
Do set the Python path to Python 2.x (remove Python 3.x). | 0 | 1 | 0 | 0 | 2017-02-20T16:45:00.000 | 5 | 0.039979 | false | 42,349,980 | 0 | 1 | 0 | 1 | I installed Spark on Windows, and I'm unable to start pyspark. When I type in c:\Spark\bin\pyspark, I get the following error:
Python 3.6.0 |Anaconda custom (64-bit)| (default, Dec 23 2016, 11:57:41) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Traceback (most recent call last):
  File "c:\Spark\bin\..\python\pyspark\shell.py", line 30, in <module>
    import pyspark
  File "c:\Spark\python\pyspark\__init__.py", line 44, in <module>
    from pyspark.context import SparkContext
  File "c:\Spark\python\pyspark\context.py", line 36, in <module>
    from pyspark.java_gateway import launch_gateway
  File "c:\Spark\python\pyspark\java_gateway.py", line 31, in <module>
    from py4j.java_gateway import java_import, JavaGateway, GatewayClient
  File "<frozen importlib._bootstrap>", line 961, in _find_and_load
  File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 646, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 616, in _load_backward_compatible
  File "c:\Spark\python\lib\py4j-0.10.4-src.zip\py4j\java_gateway.py", line 18, in <module>
  File "C:\Users\Eigenaar\Anaconda3\lib\pydoc.py", line 62, in <module>
    import pkgutil
  File "C:\Users\Eigenaar\Anaconda3\lib\pkgutil.py", line 22, in <module>
    ModuleInfo = namedtuple('ModuleInfo', 'module_finder name ispkg')
  File "c:\Spark\python\pyspark\serializers.py", line 393, in namedtuple
    cls = _old_namedtuple(*args, **kwargs)
TypeError: namedtuple() missing 3 required keyword-only arguments: 'verbose', 'rename', and 'module'
what am I doing wrong here? |
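One way to apply the second point without uninstalling anything is to tell Spark which worker interpreter to use via the PYSPARK_PYTHON environment variable before launching pyspark. The interpreter path below is an assumption — substitute your own Spark-compatible install (on Windows cmd the same idea is set PYSPARK_PYTHON=C:\Python27\python.exe):

```python
import os

# Hypothetical interpreter path -- adjust to your own Spark-compatible install.
os.environ["PYSPARK_PYTHON"] = r"C:\Python27\python.exe"

# Anything launched from this process afterwards (e.g. bin\pyspark via
# subprocess) inherits the variable and uses that interpreter for workers.
print(os.environ["PYSPARK_PYTHON"])
```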
Can I generate Enterprise Architect diagrams from Python? | 42,353,157 | 3 | 3 | 1,220 | 0 | python,enterprise-architect | EA uses an RDBMS to store its repository. In the simplest case, this is an MS Access database renamed to .EAP. You can modify this RDBMS directly, but only if you know what you're doing. The recommended way is to use the API. Often a mix of both is the preferred way. You can use Python in both cases without issues.
Shameless self plug: I have published books about EA's internals and also its API on LeanPub. | 0 | 1 | 0 | 1 | 2017-02-20T17:15:00.000 | 1 | 1.2 | true | 42,350,592 | 0 | 0 | 0 | 1 | One of the first things I do on a new project is to knock up a quick script to parse a log file and generate a message sequence chart, as I believe that a picture is worth a thousand words.
New project, and it is mandated that we use only Enterprise Architect. I have no idea what its save file format is.
Is it possible to generate a file which will open in EA from Python?
If so, where can I find an example or a tutorial? |
Run shell script inside Docker container from another Docker container? | 42,352,587 | 1 | 2 | 775 | 0 | python,shell,ubuntu,docker | Simply launch your container with something like
docker run -it -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker ...
and it should do the trick | 0 | 1 | 0 | 1 | 2017-02-20T18:44:00.000 | 1 | 1.2 | true | 42,352,104 | 0 | 0 | 0 | 1 | If I am on my host machine, I can kickoff a script inside a Docker container using:
docker exec my_container bash myscript.sh
However, let's say I want to run myscript.sh inside my_container from another container bob. If I run the command above while I'm in the shell of bob, it doesn't work (Docker isn't even installed in bob).
What's the best way to do this? |
gcloud.py attributeError: module 'enum' has no attribute 'Int Flag' | 45,141,694 | 0 | 3 | 4,609 | 0 | python,google-cloud-sdk | Ensure two things:
While installing the GoogleCloudSDK, check the 'BundledPython' option. It will install both python and python3.
Make sure your PYTHONPATH environment variable is pointing to the directory containing the python.exe file.
This worked out for me. | 0 | 1 | 0 | 0 | 2017-02-20T21:16:00.000 | 3 | 0 | false | 42,354,363 | 1 | 0 | 0 | 1 | I get this error message when running install.bat (or install.sh through 'bash' shell) of google-cloud-sdk.
Python is version 3.6.
Any suggestions? |
Having other subprocess in queue wait until a certain flag is set | 42,402,047 | 0 | 0 | 47 | 0 | python,parallel-processing,queue,multiprocessing | So, we went ahead with creating n processes instead of having a suspender. This would not be the ideal approach but, for the time being, it solves the issue at hand.
I'd still love a better method to achieve the same. | 0 | 1 | 0 | 0 | 2017-02-21T14:52:00.000 | 1 | 1.2 | true | 42,370,620 | 0 | 0 | 0 | 1 | The problem is still on the drawing board so far, so I can go for another better suited approach. The situation is like this:
We create a queue of n processes, each of which executes independently of the other tasks in the queue. They do not share any resources. However, we noticed that sometimes (depending on queue parameters) a process k's behaviour might depend on the existence of a flag specific to the k+1 process. This flag is to be set in a DynamoDB table, and therefore the execution could fail.
What I am currently searching around for is a method so that I can set some sort of waiters/suspenders in my tasks/workers so that they poll until the flag is set in the DynamoDB table, and meanwhile let the other subprocess take up the CPU.
The setting of this boolean value is done a little early in the processes themselves. The dependent part of the process comes much later. |
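A minimal sketch of the waiter/suspender idea: each worker polls its flag with a sleep between attempts, so it yields the CPU to the other subprocesses while it waits. Here check_flag is a stand-in for the real DynamoDB lookup (an assumption — wire your boto3 GetItem call in there):

```python
import time

def wait_for_flag(check_flag, poll_interval=0.01, timeout=5.0):
    """Poll check_flag() until it returns True or the timeout expires.

    time.sleep blocks only this worker, so sibling processes keep the CPU.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check_flag():  # stand-in for the DynamoDB flag lookup
            return True
        time.sleep(poll_interval)
    return False

# Demo: the flag "appears" on the third poll.
state = {"calls": 0}
def fake_flag():
    state["calls"] += 1
    return state["calls"] >= 3

found = wait_for_flag(fake_flag)
print(found, state["calls"])
```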
How to install pip for a specific python version | 62,927,458 | 1 | 8 | 17,626 | 0 | python,python-3.x,pip | I have python 3.6 and 3.8 on my Ubuntu 18.04 WSL machine. Running
sudo apt-get install python3-pip
pip3 install my_package_name
kept installing packages into Python 3.6 dist directories. The only way that I could install packages for Python 3.8 was:
python3.8 -m pip install my_package_name
That installed appropriate package into the Python 3.8 dist package directory so that when I ran my code with python3.8, the required package was available. | 0 | 1 | 0 | 0 | 2017-02-21T15:26:00.000 | 4 | 0.049958 | false | 42,371,406 | 1 | 0 | 0 | 1 | I have my deployment system running CentOS 6.
It has by default python 2.6.6 installed. So, "which python" gives me /usr/bin/python (which is 2.6.6)
I later installed python3.5, which is invoked as python3 ("which python3" gives me /usr/local/bin/python3)
Using pip, I need to install a few packages that are specific to python3. So I did pip install using:-
"sudo yum install python-pip"
So "which pip" is /usr/bin/pip.
Now whenever I do any "pip install", it just installs it for 2.6.6. :-(
It is clear that the pip installation got tied to Python 2.6.6, and invoking pip later only installs packages for 2.6.6.
How can I get around this issue? |
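The pattern behind the accepted fixes is to always route pip through the exact interpreter you care about, instead of whatever pip happens to be first on PATH; from inside Python that generalizes to (the package name is a placeholder):

```python
import subprocess
import sys

# "python -m pip" is always the pip belonging to that python binary;
# sys.executable makes it the interpreter running this very script.
cmd = [sys.executable, "-m", "pip", "install", "some-package"]
print(" ".join(cmd))
# To actually run it:  subprocess.check_call(cmd)
```

The same idea from the shell is simply `python3.5 -m pip install <package>`.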
Automate ssh commands with python | 42,375,591 | -1 | 4 | 9,599 | 0 | python,linux,ssh | If there is too much of this manual stuff, then I would look into some server configuration management tool like Ansible.
I have done this kind of automation using:
Ansible
Python Fabric
Rake | 0 | 1 | 1 | 1 | 2017-02-21T18:40:00.000 | 3 | -0.066568 | false | 42,375,396 | 0 | 0 | 0 | 1 | So every day, I need to log in to a couple of different hosts via ssh and run some maintenance commands there in order for the QA team to be able to test my features.
I want to use a python script to automate such boring tasks. It would be something like:
ssh host1
deploy stuff
logout from host1
ssh host2
restart stuff
logout from host2
ssh host3
check health on stuff
logout from host3
...
It's killing my productivity, and I would like to know if there is something nice, ergonomic and easy to implement that can handle and run commands on ssh sessions programmatically and output a report for me.
Of course I will do the code, I just wanted some suggestions that are not bash scripts (because those are not meant for humans to read).
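If a full framework feels heavy, plain subprocess plus the system ssh client already covers the "run a few commands per host and collect a report" case. The hosts and commands below are placeholders, and the dry_run flag lets the plan be printed without opening any connections:

```python
import subprocess

def build_ssh_command(host, remote_cmd):
    # BatchMode avoids interactive password prompts in unattended runs.
    return ["ssh", "-o", "BatchMode=yes", host, remote_cmd]

def run_plan(plan, dry_run=False):
    report = []
    for host, remote_cmd in plan:
        cmd = build_ssh_command(host, remote_cmd)
        if dry_run:
            report.append((host, " ".join(cmd)))
        else:
            done = subprocess.run(cmd, capture_output=True, text=True)
            report.append((host, done.returncode, done.stdout))
    return report

# Placeholder plan -- swap in your real hosts and maintenance commands.
plan = [("host1", "deploy-stuff"), ("host2", "restart-stuff")]
for entry in run_plan(plan, dry_run=True):
    print(entry)
```

Key-based authentication (ssh-agent or an authorized key per host) keeps this fully unattended.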
Getting this error when running python 3 version on macOS sierra 10.12 | 42,405,571 | 0 | 0 | 141 | 0 | python | It looks like you are using the system Python. I am on macOS myself and have gone crazy several times with Apple's tricks. I strongly advise installing Python with Anaconda: it is very simple, and then you can try as many environments as you want with different versions of Python and of the modules, and you have much better control.
Sorry if this is not a fully documented answer; it is more like a comment, but I no longer have permission to comment (reputation loss due to a bounty). I hope this helps. | 0 | 1 | 0 | 0 | 2017-02-23T01:19:00.000 | 1 | 0 | false | 42,405,357 | 1 | 0 | 0 | 1 | Can anyone please help me? I am reading a fasta file with Python 3.6 or 3.5 on macOS Sierra and getting this error, but the code works properly when running on a Windows machine with Python 3.5.2.
Can anyone please tell me what the actual problem is?
I installed Python twice on my Mac but nothing works.
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd0 in position 647: invalid continuation byte |
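The byte 0xd0 means the file is not valid UTF-8, and Python 3 on the Mac is simply decoding it with a default that doesn't match the file. Opening with an explicit encoding (or an errors handler) avoids the crash — latin-1 below is an assumption; use whatever encoding the fasta file was actually saved in:

```python
import os
import tempfile

# Reproduce the situation: a header line containing the offending 0xd0 byte.
path = os.path.join(tempfile.mkdtemp(), "demo.fasta")
with open(path, "wb") as f:
    f.write(b">seq1 description\xd0\nACGT\n")

# latin-1 maps every possible byte to a character, so it never raises;
# errors="replace" on any encoding is an alternative that also never raises.
with open(path, encoding="latin-1") as f:
    text = f.read()
print(repr(text))
```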
How to combine hadoop mappers output to get single result | 42,406,958 | 1 | 1 | 750 | 0 | python,hadoop,mapreduce | Your job is generating one file per mapper. To get a single result you have to force a reduce phase using one reducer; you can accomplish this by emitting the same key in all the mappers. | 0 | 1 | 1 | 0 | 2017-02-23T03:42:00.000 | 3 | 0.066568 | false | 42,406,596 | 0 | 0 | 0 | 2 | I have about 170 GB of data. I have to analyze it using Hadoop 2.7.3. There are 14 workers. I have to find the total for each unique MIME type of document, e.g. the total number of documents that are of type text/html. When I run a MapReduce job (written in Python), Hadoop returns many output files instead of the single one that I am expecting. I think this is due to many workers that process some data separately and give output. I want to get a single output. Where is the problem? How can I restrict Hadoop to give a single output (by combining all the small output files)?
How to combine hadoop mappers output to get single result | 42,414,979 | 1 | 1 | 750 | 0 | python,hadoop,mapreduce | Make your mapper emit (doc-mime-type, 1) for each document processed, then count up all such pairs in the reduce phase. In essence, it is a standard word-count exercise, except your mappers emit 1s for each doc's mime-type.
Regarding the number of reducers to set: Alex's way of merging reducers' results is preferable, as it allows you to utilize all your worker nodes at the reduce stage. However, if the job is to be run on 1-2 nodes, then just one reducer should work fine. | 0 | 1 | 1 | 0 | 2017-02-23T03:42:00.000 | 3 | 0.066568 | false | 42,406,596 | 0 | 0 | 0 | 2 | I have about 170 GB of data. I have to analyze it using Hadoop 2.7.3. There are 14 workers. I have to find the total for each unique MIME type of document, e.g. the total number of documents that are of type text/html. When I run a MapReduce job (written in Python), Hadoop returns many output files instead of the single one that I am expecting. I think this is due to many workers that process some data separately and give output. I want to get a single output. Where is the problem? How can I restrict Hadoop to give a single output (by combining all the small output files)?
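In streaming terms, both answers reduce to a word count over MIME types: the mapper emits (mime_type, 1) per document, and a single reduce pass sums per key. A minimal in-process sketch of that logic (record parsing is faked here — in a real Hadoop streaming job the mapper would read records from stdin):

```python
from collections import Counter

def map_doc(doc):
    # Stand-in for extracting the MIME type from one input record.
    return (doc["mime"], 1)

def reduce_counts(pairs):
    totals = Counter()
    for mime, one in pairs:
        totals[mime] += one
    return dict(totals)

docs = [{"mime": "text/html"}, {"mime": "text/html"}, {"mime": "application/pdf"}]
result = reduce_counts(map_doc(d) for d in docs)
print(result)  # e.g. {'text/html': 2, 'application/pdf': 1}
```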
lftp pause and resume download | 42,412,125 | 5 | 4 | 3,150 | 0 | python-2.7,lftp | You can either suspend the whole lftp process (command suspend) or limit transfer rate to e.g. 1Bps (set net:limit-total-rate 1). In either case the files being transferred remain open.
You can also stop the transfer and continue it later using -c option of get or mirror. | 0 | 1 | 0 | 0 | 2017-02-23T06:05:00.000 | 1 | 0.761594 | false | 42,408,247 | 0 | 0 | 0 | 1 | Googled around and looked on this forum but couldn't find if I can pause a download using lftp.
Currently downloading tons of logs and would like to pause, add more drives to the system and continue downloading.
Thanks |
Allow a python file to add to a different file linux | 42,446,617 | 0 | 0 | 31 | 0 | python,html,linux,ubuntu | I'm going to answer your question but also beg you to consider another approach.
The functionality you are looking for is usually handled by a database. If you don't want to use anything more complex, SQLite is often all you need. You would then need a simple web application that connects to the database, grabs the fields, and then injects them into HTML.
I'd use Flask for this as it comes with Jinja and that's a pretty simple stack to get started with.
If you really want to edit the HTML file directly in Python, you will need write permissions for whatever user is running the Python script. On Ubuntu, that folder is typically owned by www-data if you are running Apache.
Then you'd open the file in Python, perform file operations on it, and then close it.
with open("/var/www/html/somefile.txt", "a") as myfile:
    myfile.write("l33t h4x0r has completed the challenge!\n")
That's an example of how you'd do a simple append operation in Python. | 0 | 1 | 0 | 0 | 2017-02-24T19:04:00.000 | 1 | 1.2 | true | 42,446,403 | 0 | 0 | 0 | 1 | I'm making a "wargame" like the ones on overthewire.org or smashthestack.org. When you finish the game, the user should get a python program that has extra permissions to edit a file in /var/www/html so that they can sign their name. I want to have a program like this so that they can add text to the html file without removing the text of other users and so that it filters offensive words.
How can I make a file editable by a specific program in Linux? And how can I make the program edit the file in python? Do I just use os.system? |
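Building on the append example in the answer, the word filtering and a bit of escaping can live in the same script. The banned-word list is a placeholder, and html.escape keeps a signature from injecting markup into the page:

```python
import html
import os
import tempfile

BANNED = {"badword1", "badword2"}  # placeholder word list

def clean_signature(name):
    kept = [w for w in name.split() if w.lower() not in BANNED]
    # Escape so a signature can't inject markup into the page.
    return html.escape(" ".join(kept))

def append_signature(path, name):
    # Append-only, so existing signatures are preserved.
    with open(path, "a") as f:
        f.write(clean_signature(name) + " has completed the challenge!\n")

# Demo against a temp file standing in for /var/www/html/somefile.txt.
wall = os.path.join(tempfile.mkdtemp(), "wall.txt")
append_signature(wall, "l33t badword1 h4x0r <b>")
with open(wall) as f:
    content = f.read()
print(content.strip())
```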
Uniquely Identify Computer, prevents hackers | 42,464,260 | 0 | 0 | 104 | 0 | python,security,uuid | There are probably far simpler and more effective ways to DOS / DDOS your server. Bear that in mind when you decide how much effort to expend on this.
Here are a couple of ideas that may be (partially) effective.
Rate limit the creation of UUIDs ... globally. If you do this, and monitor how close you are to the point where your DB is full, you can keep ahead of that potential DOS vector.
Severely rate limit the UUIDs created by any given client IP address. However, you need to be careful with this. In many / most cases you won't see the real client IP address because of HTTP proxies, NATing and so on.
There are actually a number of ways to rate limit requests.
You can count the requests, and refuse them when the count in a given interval exceeds a given threshold.
You can record the time since the last request, and refuse requests when the interval is too small. (This is a degenerate version of the previous one.)
You can simply service the requests slowly; e.g. put them into a queue and process them at a fixed rate.
However, you also need to beware that your defenses don't create an alternative DDOS mechanism; e.g. hammering the server with UUID requests to prevent real users from getting UUIDs. | 0 | 1 | 0 | 0 | 2017-02-26T02:41:00.000 | 1 | 1.2 | true | 42,464,131 | 0 | 0 | 0 | 1 | I fully recognize that the answer to this question may be "No."
I am writing the client portion of a client-server program that will run on potentially thousands of computers and will periodically report back to the server with system settings and configurations. When the computer first initiates, currently the client code independently generates a UUID value, and reports back to the server with that ID to uniquely identify itself. The server uses this ID number for identify a machine, even when the IP address and other associated data changes.
While each session is protected via TLS, a hacker could trivially identify the protocol and spam the server with thousands of new UUID values, tricking the server into thinking there are an exponential number of new machines on the network - which would eventually fill up the DB and trigger a DoS condition.
Any ideas on how to uniquely identify a server/workstation such that even a hacker could not create "phantom" machines?
Any ideas? Again, I fully understand that the answer may very well be "No".
Using the TPM chip is not an option, primarily because not all machines, architectures or OSs will allow for this option. |
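The per-client rate-limit idea from the answer fits in a few lines. This is a sliding-window sketch; it is in-memory, so it assumes a single server process — a real deployment would back the counters with something shared (e.g. Redis or the DB) and tune the limits:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # key -> timestamps of recent requests

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        recent = self.hits[key]
        while recent and now - recent[0] > self.window:
            recent.popleft()          # drop requests outside the window
        if len(recent) >= self.max_requests:
            return False              # over the limit: refuse the UUID request
        recent.append(now)
        return True

limiter = RateLimiter(max_requests=3, window_seconds=60)
decisions = [limiter.allow("10.0.0.1", now=t) for t in range(5)]
print(decisions)  # first three allowed, then refused
```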
How do I print colored text in IDLE's terminal? | 61,460,514 | 0 | 8 | 19,362 | 0 | python,python-3.x,colors,python-idle | The strange output of the length is with the return keyword, and NORMAL is also a color | 0 | 1 | 0 | 0 | 2017-02-26T19:14:00.000 | 5 | 0 | false | 42,472,958 | 1 | 0 | 0 | 1 | This is very easily a duplicate question--because it is. However, there are very many inadequate answers to this (e.g. try curses! -- pointing to a 26 page documentation).
I just want to print text in a color other than blue when I'm outputting in IDLE. Is it possible? What's an easy way to do this? I'm running Python 3.6 on Windows.
Please explain with an example.
(I have found that ANSI codes do not work inside IDLE, only on the terminal.) |
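For completeness, the ANSI-escape approach that works in a real terminal (but, as noted above, not inside IDLE's shell) is just a matter of wrapping the text in escape codes:

```python
RED = "\033[31m"
GREEN = "\033[32m"
RESET = "\033[0m"

def colored(text, code):
    # Terminals interpret the escape sequences; IDLE's shell does not.
    return code + text + RESET

print(colored("error", RED))
print(colored("ok", GREEN))
```

Note that classic Windows cmd may also need ANSI support enabled; modern terminals handle these codes out of the box.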
Error 193 %1 is not a valid Win32 application | 42,478,265 | 6 | 3 | 4,749 | 0 | python,dll | If you are using a 32 bit Python and the DLL is a 64 bit DLL you will get this error, likewise if the DLL is 32 bit and your Python is 64 bit.
You can check this using the dumpbin /HEADERS <dll filepath> command from a Visual Studio command prompt. | 0 | 1 | 0 | 0 | 2017-02-27T04:42:00.000 | 1 | 1 | false | 42,477,956 | 0 | 0 | 0 | 1 | I found the error [Error 193] %1 is not a valid Win32 application when I run this python command: windll.LoadLibrary("C:\Windows\System32\plcommpro.dll")
For this error, I found that my plcommpro.dll file is not an executable file, but I don't know how to make it executable. If someone knows, please share.
Thanks and Best. |
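A quick way to check the Python half of the mismatch from within Python itself — the pointer size of the running build gives its bitness away:

```python
import struct

# A 4-byte pointer means a 32-bit interpreter, an 8-byte pointer 64-bit.
bits = struct.calcsize("P") * 8
print("This interpreter is %d-bit" % bits)
```

A 32-bit Python can only load 32-bit DLLs and a 64-bit Python only 64-bit ones, so this plus dumpbin on the DLL pins down which side needs to change.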
Run a python script as a background service in linux | 42,500,440 | 0 | 0 | 1,140 | 0 | python | You can create an init script in the /etc/init/ directory.
Example:
start on runlevel [2345]
stop on runlevel [!2345]
kill timeout 5
respawn
script
exec /usr/bin/python /path/to/script.py
end script
Save with .conf extension | 0 | 1 | 0 | 0 | 2017-02-28T04:06:00.000 | 2 | 0 | false | 42,500,030 | 0 | 0 | 0 | 1 | I am currently using linux. I have a python script which I want to run as a background service such as the script should start to run when I start my machine.
Currently I am using python 2.7 and the command 'python myscripy.py' to run the script.
Can anyone give an idea about how to do this.
Thank you. |
Run Python 3.6 in Terminal on Mac? | 49,884,463 | 0 | 2 | 6,719 | 0 | python,terminal,version-control | As usual on a Mac, Python 2.7 is already installed; however, if you installed Python 3+,
then you can just type python3 in the terminal
so that you can use the newer version that you installed.
If you want to use Python 2.7, then just type: python | 0 | 1 | 0 | 0 | 2017-02-28T07:26:00.000 | 5 | 0 | false | 42,502,614 | 1 | 0 | 0 | 2 | I am using Python on a Mac and I know that Python 2 comes preinstalled on the system (and is in fact usable through Terminal). Is there a way to make it so Terminal can run Python 3? Can/should you set this as a default? I know changing the default settings for Python version usage could break your system, so should I just install Python 3 and then use it through its launch icon instead?
Run Python 3.6 in Terminal on Mac? | 42,502,774 | 0 | 2 | 6,719 | 0 | python,terminal,version-control | Best option is to install Python through Anaconda. This allows easy management and much more. You can have virtual environments having different Python versions as well as different modules installed. | 0 | 1 | 0 | 0 | 2017-02-28T07:26:00.000 | 5 | 0 | false | 42,502,614 | 1 | 0 | 0 | 2 | I am using a Python on a Mac and I know that Python 2 comes preinstalled on the system (and in fact usable through Terminal). Is there a way to make it so Terminal can run Python 3? Can/should you set this as a default? I know changing the default settings for Python version usage could break your system so should I just install Python 3 and then use it through its launch icon instead? |
Make python process writes be scheduled for writeback immediately without being marked dirty | 42,558,956 | 0 | 2 | 46 | 0 | python,linux,numpy,linux-kernel | So here's one way I've managed to do it.
By using the numpy memmap object you can instantiate an array that directly corresponds with a part of the disk. Calling the method flush() or python's del causes the array to sync to disk, completely bypassing the OS's buffer. I've successfully written ~280GB to disk at max throughput using this method.
Will continue researching. | 0 | 1 | 0 | 0 | 2017-02-28T23:15:00.000 | 2 | 0 | false | 42,520,522 | 0 | 0 | 0 | 2 | We are building a python framework that captures data from a framegrabber card through a cffi interface. After some manipulation, we try to write RAW images (numpy arrays, using the tofile method) to disk at a rate of around 120 MB/s. We are well aware that our disks are capable of handling this throughput.
The problem we were experiencing was dropped frames, often entire seconds of data completely missing from the framegrabber output. What we found was that these framedrops were occurring when our Debian system hit the dirty_background_ratio set in sysctl. The system was calling the flush background gang that would choke up the framegrabber and cause it to skip frames.
Not surprisingly, setting the dirty_background_ratio to 0% managed to get rid of the problem entirely (It is worth noting that even small numbers like 1% and 2% still resulted in ~40% frame loss)
So, my question is, is there any way to get this python process to write in such a way that it is immediately scheduled for writeout, bypassing the dirty buffer entirely?
Thanks |
Make python process writes be scheduled for writeback immediately without being marked dirty | 52,653,381 | 0 | 2 | 46 | 0 | python,linux,numpy,linux-kernel | Another option is to get the os file id and call os.fsync on it. This will schedule it for writeback immediately. | 0 | 1 | 0 | 0 | 2017-02-28T23:15:00.000 | 2 | 1.2 | true | 42,520,522 | 0 | 0 | 0 | 2 | We are building a python framework that captures data from a framegrabber card through a cffi interface. After some manipulation, we try to write RAW images (numpy arrays using the tofile method) to disk at a rate of around 120 MB/s. We are well aware that are disks are capable of handling this throughput.
The problem we were experiencing was dropped frames, often entire seconds of data completely missing from the framegrabber output. What we found was that these framedrops were occurring when our Debian system hit the dirty_background_ratio set in sysctl. The system was calling the flush background gang that would choke up the framegrabber and cause it to skip frames.
Not surprisingly, setting the dirty_background_ratio to 0% managed to get rid of the problem entirely (It is worth noting that even small numbers like 1% and 2% still resulted in ~40% frame loss)
So, my question is, is there any way to get this python process to write in such a way that it is immediately scheduled for writeout, bypassing the dirty buffer entirely?
Thanks |
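The fsync idea in plain form: flush Python's own buffer, then ask the kernel to push the dirty pages for that file descriptor to disk right away. On the numpy path you would call the analogous memmap.flush(); the sketch below uses an ordinary file so it stays self-contained:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "frame.raw")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)   # stand-in for one captured frame
    f.flush()                 # Python's buffer -> kernel page cache
    os.fsync(f.fileno())      # kernel page cache -> disk, skipping the
                              # usual dirty-writeback delay for this file

size = os.path.getsize(path)
print(size)
```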
Designing a pinging service | 42,525,826 | 0 | 1 | 45 | 1 | python,architecture | Let me put it like this: you will need the following four operations. In the simplest case you could keep a table of users and a table of hostnames, where the latter has the following columns: an fk to users, hostname, last update, and a boolean is_running.
You will need the following actions.
UPDATE:
You will run this periodically on the whole table. You could optimize this by using a select with a filter on the last update column.
INSERT and DELETE:
This is for when the user adds or removes hostnames. During inserting also ping the hostname and update the last update column as the current time.
For the above 3 operations whenever they run they'd be using a lock on the respective rows. After each of the latter 2 operations you could notify the user.
Finally the READ:
This is whenever the user wants to see the status of his hostnames. If he has added or removed a hostname recently he will be notified only after the commit.
Otherwise do a select * from hostnames where user.id = x and send him the result. Every time he hits refresh you could run this query.
You could also put indices on both the tables as the read operation is the one that has to be fastest. You could afford slightly slower times on the other 2 operations.
Do let me know if this works or if you've done differently. Thank you. | 0 | 1 | 0 | 0 | 2017-03-01T06:01:00.000 | 1 | 0 | false | 42,524,336 | 0 | 0 | 0 | 1 | The ping service that I have in mind allows users to keep easily track of their cloud application (AWS, GCP, Digital Ocean, etc.) up-time.
The part of the application's design that I am having trouble with is how to effectively read a growing/shrinking list of hostnames from a database and ping them every "x" interval. The service itself will be written in Python and Postgres to store the user-inputted hostnames. Keep in mind that the list of hostnames to ping is variable since a user can add and also remove hostnames at will.
How would you setup a system that checks for the most up-to-date list of hostnames, executes pings across said list of hostnames, and store the results, at a specific interval?
I am pretty new to programming. Any help or pointers in the right direction will be greatly appreciated |
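The users/hostnames schema and the stale-row filter from the answer sketch out directly; SQLite stands in for Postgres here, and the column names are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE hostnames (
        id INTEGER PRIMARY KEY,
        user_id INTEGER REFERENCES users(id),
        hostname TEXT,
        last_update REAL,
        is_running INTEGER
    );
""")
conn.execute("INSERT INTO users (id, name) VALUES (1, 'alice')")
conn.execute(
    "INSERT INTO hostnames (user_id, hostname, last_update, is_running) "
    "VALUES (1, 'example.com', 0, 1)"
)

# The periodic UPDATE pass only touches rows whose last ping is stale.
stale = conn.execute(
    "SELECT hostname FROM hostnames WHERE last_update < ?", (100,)
).fetchall()
print(stale)
```

A scheduler (cron, or a loop with sleep) would run the stale-row query every interval, ping each returned host, and write back last_update and is_running.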
How do I install a python deb package for Python3 on Ubuntu? | 42,525,853 | 0 | 0 | 1,157 | 0 | python,python-3.x,ubuntu | You can use sudo dpkg -i panda3d1.9_1.9.3-xenial_amd64.deb; it won't affect your default package. | 0 | 1 | 0 | 0 | 2017-03-01T07:17:00.000 | 2 | 0 | false | 42,525,463 | 1 | 0 | 0 | 2 | I downloaded a deb package panda3d1.9_1.9.3-xenial_amd64.deb and I want to install it for Python 3. My OS is Linux Ubuntu 16.04. The default python is 2.7.12 and I would prefer to keep it as default, but Python 3 is installed too and available to use. How do I install this package for Python 3 only?
I am not sure pip may help. |
How do I install a python deb package for Python3 on Ubuntu? | 42,528,081 | 1 | 0 | 1,157 | 0 | python,python-3.x,ubuntu | If the package was built to only support Python 2, there is no straightforward way to install it for Python 3. You will want to ask the packager to provide a package built for Python 3 if there isn't one already.
(This replaces my earlier answer, which was incorrect or at least misleading. Thanks to @Goyo in particular for setting me straight.) | 0 | 1 | 0 | 0 | 2017-03-01T07:17:00.000 | 2 | 1.2 | true | 42,525,463 | 1 | 0 | 0 | 2 | I downloaded a deb package panda3d1.9_1.9.3-xenial_amd64.deb and I want to install it for Python 3. My OS is Linux Ubuntu 16.04. The default python is 2.7.12 and I would prefer to keep it as default, but Python 3 is installed too and available to use. How do I install this package for Python 3 only?
I am not sure pip may help. |
IBM Bluemix IoT Watson service Driver Behaviour | 42,668,207 | 1 | 0 | 114 | 0 | python,ibm-cloud,iot-for-automotive,iot-driver-behavior | I think your procedure is OK.
There are the following possibilities for not getting a valid analysis result.
(1) The current Driving Behavior Analysis requires at least 10 valid GPS points within a trip (trip_id) on a vehicle. Please check the data you used with the "sendCarProbe" API.
(2) Please check that the "sendJobRequest" API's from and to dates (yyyy-mm-dd) really match your car probe timestamps. | 0 | 1 | 0 | 0 | 2017-03-02T11:04:00.000 | 1 | 1.2 | true | 42,553,676 | 0 | 0 | 0 | 1 | I want to explore and use the driver behavior service in my application. Unfortunately I got stuck, as I'm getting an empty response from the getAnalyzedTripSummary API instead of a Trip UUID.
Here are the steps I've followed.
I've added the services called Driver behavior and Context Mapping to my application @Bluemix.
Pushed multiple sample data packets to the Driver Behavior using "sendCarProbe" API
Sent Job Request using "sendJobRequest" API with from and to dates as post data.
Tried the "getJobInfo" API, which returns the job status ("job_status": "SUCCEEDED").
Tried the "getAnalyzedTripSummaryList" API to get a trip_uuid, but
it's returning empty: []
Could someone help me to understand what's wrong and why I'm getting empty response? |
Python script very slow in a remote directory | 43,487,256 | -1 | 0 | 1,102 | 0 | python,import,sshfs,fedora-25,remote-host | In my case it were the Cern ROOT libraries import. When importing, they look in the current directory, no matter what I do. So the solution is to
store the current directory
cd to some really local directory, like "/" or "/home" before imports
come back to the stored directory after imports | 0 | 1 | 0 | 1 | 2017-03-03T04:07:00.000 | 1 | -0.197375 | false | 42,570,494 | 0 | 0 | 0 | 1 | I have trouble running my a little bit complex python program in a remote directory, mounted by SSHFS. It takes a few seconds to perform imports when executing in a remote directory and a fraction of a second in a local directory. The program should not access anything in the remote directory on its own, especially in the import phase.
By default, the current (remote) directory is in sys.path, but when I remove it before the (other) imports, the speed does not change. I confirmed with python -vv that this remote directory is not accessed in the process of looking for modules. Still, I can see a steady flow of data from the network with an external network monitor during the import phase.
Moreover, I can't really identify what exactly it is doing when consuming most of the time. It seems to happen after one import is finished, according to my simple printouts, and before the next import is started...
I'm running Fedora 25 Linux |
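The three steps in the answer wrap up naturally as a context manager, so the slow imports can be done from a guaranteed-local directory and the original (remote) working directory is restored afterwards:

```python
import os
from contextlib import contextmanager

@contextmanager
def local_cwd(path="/"):
    """Temporarily chdir somewhere local so imports that scan the current
    directory don't touch the slow sshfs mount."""
    saved = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(saved)

before = os.getcwd()
with local_cwd("/"):
    inside = os.getcwd()   # do the slow imports here, e.g. import ROOT
after = os.getcwd()
print(before == after, inside)
```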
how to switch python interpreter in cmd? | 42,583,156 | 0 | 0 | 10,377 | 0 | python,python-2.7,python-3.x | In my case, /usr/bin/python is a symlink that points to /usr/bin/python2.7.
Usually, there is a relevant symlink for python2 and python3.
So, if you type python2 you get a python-2 interpreter and if you type python3 you get a python-3 one. | 0 | 1 | 0 | 0 | 2017-03-03T15:46:00.000 | 7 | 0 | false | 42,583,082 | 1 | 0 | 0 | 5 | I have installed both versions of python that is python 2.7 and python 3.5.3. When I run python command in command prompt, python 3.5.3 interpreter shows up. How can I switch to python 2.7 interpreter? |
how to switch python interpreter in cmd? | 42,583,347 | 0 | 0 | 10,377 | 0 | python,python-2.7,python-3.x | It depends on OS (and the way Python has been installed).
For most current installations:
on Windows, Python 3.x installs a py command in the path that can be used that way:
py -2 launches Python2
py -3 launches Python3
On Unix-likes, the most common way is to have different names for the executables of different versions (or to have different symlinks do them). So you can normally call directly python2.7 or python2 to start that version (and python3 or python3.5 for the alternate one). By default only a part of all those symlinks can have been installed but at least one per version. Search you path to find them | 0 | 1 | 0 | 0 | 2017-03-03T15:46:00.000 | 7 | 0 | false | 42,583,082 | 1 | 0 | 0 | 5 | I have installed both versions of python that is python 2.7 and python 3.5.3. When I run python command in command prompt, python 3.5.3 interpreter shows up. How can I switch to python 2.7 interpreter? |
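As a complement to the launcher/symlink advice above, a script can verify from the inside which interpreter actually picked it up; this guard is purely illustrative:

```python
import sys

# Fail fast if the wrong interpreter was used to launch the script.
if sys.version_info[0] < 3:
    raise SystemExit("Run this with 'py -3' (Windows) or 'python3' (Unix).")

print(sys.version_info[0])
```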
how to switch python interpreter in cmd? | 71,958,209 | 0 | 0 | 10,377 | 0 | python,python-2.7,python-3.x | As has been mentioned in other answers to this and similar questions, if you're using Windows, cmd reads down the PATH variable from the top down. On my system I have Python 3.8 and 3.10 installed. I wanted my cmd to solely use 3.8, so I moved it to the top of the PATH variable and the next time I opened cmd and used python --version it returned 3.8.
Hopefully this is useful for future devs researching this specific question. | 0 | 1 | 0 | 0 | 2017-03-03T15:46:00.000 | 7 | 0 | false | 42,583,082 | 1 | 0 | 0 | 5 | I have installed both versions of python that is python 2.7 and python 3.5.3. When I run python command in command prompt, python 3.5.3 interpreter shows up. How can I switch to python 2.7 interpreter? |
how to switch python interpreter in cmd? | 42,583,188 | 0 | 0 | 10,377 | 0 | python,python-2.7,python-3.x | Usually on all major operating systems the commands python2 and python3 run the correct version of Python respectively. If you have several versions of e.g. Python 3 installed, python32 or python35 would start Python 3.2 or Python 3.5. python usually starts the lowest version installed I think.
Hope this helps! | 0 | 1 | 0 | 0 | 2017-03-03T15:46:00.000 | 7 | 0 | false | 42,583,082 | 1 | 0 | 0 | 5 | I have installed both versions of python that is python 2.7 and python 3.5.3. When I run python command in command prompt, python 3.5.3 interpreter shows up. How can I switch to python 2.7 interpreter? |
how to switch python interpreter in cmd? | 42,583,177 | 0 | 0 | 10,377 | 0 | python,python-2.7,python-3.x | If you use Windows OS:
py -2.7 for python 2.7
py -3 for python 3.x
But first you need to check your PATH | 0 | 1 | 0 | 0 | 2017-03-03T15:46:00.000 | 7 | 0 | false | 42,583,082 | 1 | 0 | 0 | 5 | I have installed both versions of python that is python 2.7 and python 3.5.3. When I run python command in command prompt, python 3.5.3 interpreter shows up. How can I switch to python 2.7 interpreter? |
How to make auto indention in nano while programming in python in linux? | 70,208,117 | 0 | 7 | 18,957 | 0 | python,linux,nano | Try just "M-I" (Esc-I) to switch off autoindent before pasting with Ctrl-Ins (or right mouse click) | 0 | 1 | 0 | 0 | 2017-03-03T16:58:00.000 | 3 | 0 | false | 42,584,551 | 1 | 0 | 0 | 1 | I am a beginner programmer as well as a Linux user. Before, I was using Windows, and the Python IDLE was so good. I did not need to press the tab button after an "if" statement or any other loop.
Now I am using Linux and have started writing programs in Ubuntu's command-line text editor called "nano". Here I need to press tab every time I use an "if" statement, which is very tedious. Especially when there are nested loops, it becomes difficult to keep track of the tab count. I was wondering if there is a way to make it work like IDLE on Windows. I also tried to google the problem, but I couldn't explain it in a few words. I hope you've got what my problem actually is, and I need a decent solution for this.
ANSI color lost when using python subprocess | 42,589,699 | 11 | 11 | 2,656 | 0 | python,subprocess | Processes that produce color output do it by sending escape codes to the terminal(-emulator) intermixed with the output. Programs that handle the output of these programs as data would be confused by the escape codes, so most programs that produce color output on terminals do so only when they are writing to a terminal device. If the program's standard output is connected to a pipe rather than a terminal device, they don't produce the escape codes. When Python reads the output of a sub-process, it does it through a pipe, so the program you are calling in a sub-process is not outputting escape codes.
If all you are doing with the output is sending it to a terminal, you might want the escape codes so the color is preserved. It's possible that your program has a command-line switch to output escape codes regardless of the output device. If it does not, you might run your sub-process against a virtual terminal device instead of a pipe to have it output escape codes; which is too complex a topic to delve into in this answer. | 0 | 1 | 0 | 0 | 2017-03-03T22:32:00.000 | 1 | 1.2 | true | 42,589,584 | 1 | 0 | 0 | 1 | I'm trying to run a process within another python process. The program I run normally has colored output when run in an ANSI terminal emulator. When I have my controlling python program print the output from the sub-process, I don't see any color. The color from the subprocess is lost when I read from it and print to screen.
print(subp.stdout.readline()) |
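The pipe-vs-terminal distinction from the answer can be demonstrated from Python itself; this sketch only shows why the child suppresses its colors, not the pseudo-terminal workaround:

```python
import subprocess
import sys

# Ask a child process whether its stdout looks like a terminal.
# Because subprocess captures output through a pipe, the child reports
# False, which is the signal color-capable programs use to drop
# their escape codes.
child = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.stdout.isatty())"],
    capture_output=True,
    text=True,
)
print(child.stdout.strip())  # prints False
```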
Installing Python modules | 42,596,435 | 0 | 1 | 614 | 0 | python-3.6 | You need to add the location of the python.exe to your $PATH variable. This depends on your installation location. In my case it is C:\Anaconda3. The default is C:\Python as far as I know.
To edit your path variable you can do the following thing. Go to your Control Panel then search for system. You should see something like: "Edit the system environment variables". Click on this and then click on environment variables in the panel that opened. There you have a list of system variables. You should now look for the Path variable. Now click edit and add the Python path at the end. Make sure that you added a semicolon before adding the path to not mess with your previous configuration. | 0 | 1 | 0 | 0 | 2017-03-04T13:09:00.000 | 1 | 0 | false | 42,596,399 | 1 | 0 | 0 | 1 | I am trying to install the pyperclip module for Python 3.6 on Windows (32 bit). I have looked at various documentations (Python documentation, pypi.python.org and online courses) and they all said the same thing.
1) Install and update pip
I downloaded get-pip.py from python.org and it ran immediately, so pip should be updated.
2) Use the command python -m pip install SomePackage
Okay here is where I'm having issues. Everywhere says to run this in the command line, or doesn't specify a place to run it.
I ran this in the command prompt: python -m pip install pyperclip. But I got the error message: "'python' is not recognized as an internal or external command, operable program or batch file."
If I run it inside the Python 3.6 interpreter, it reports a syntax error on pip. Running it in IDLE gives me the same message.
I have no idea where else to run it. I have the pyperclip module in my python folder. It looks like a really simple problem, but I have been stuck on this for ages! |
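As a side note to the question above: even when the shell cannot find python, any already-running interpreter (e.g. the IDLE shell) can report the directory that needs to go on PATH; a hedged sketch:

```python
import os
import sys

# sys.executable is the full path of the running interpreter; its
# containing directory is what PATH needs for "python" to be found.
python_dir = os.path.dirname(sys.executable)
print(python_dir)
```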
Appengine Python - How to filter tab content depending if entity has been created | 43,709,266 | 0 | 0 | 29 | 0 | twitter-bootstrap,python-2.7,google-app-engine,google-cloud-datastore,app-engine-ndb | When rendering the page just check if the assessment exists by retrieving it from the Consult (I imagine you store the assessment key inside the Consult).
That's it | 0 | 1 | 0 | 0 | 2017-03-04T15:05:00.000 | 1 | 0 | false | 42,597,497 | 0 | 0 | 1 | 1 | I have a page View-Consult with 4 bootstrap tabs.
There are two entities retrieved from the Datastore on this page (Consult and Assessment). The consult is created first and the assessment later (by a different user).
Note: Consults have a property called "consult_status" that is PENDING before the Assessment is added, and COMPLETED after. This may be useful as a condition.
The properties from the Consult populate the first 3 bootstrap tabs. The Assessment properties are displayed in the 4th tab.
There will be a period where the Assessment has not been completed and the View-Consult page will need to display a message in the 4th tab saying "This consult is currently awaiting assessment. You will be notified by email when it is complete."
How would I create and test for this condition and render the appropriate output inside tab 4, depending on whether the Assessment is complete or not.
Note also: The Consult and Assessment have the same ID, so perhaps a better condition would be to check whether there exists an Assessment with the current Consult ID. If not, display the message "awaiting assessment".
User friendly way to distribute Python Application | 46,517,871 | 0 | 1 | 75 | 0 | python,macos,exe,software-distribution,dmg | This is a pretty broad question. The "best way" to distribute any software is to use a software distribution/systems management suite. It takes time to implement but once done the time savings is enormous. There are several suites that will do this; I believe that AirWatch will work as will ThingWorx, Helix Device Cloud, and others. These solutions can do what is called a "required distribution" which will simple force the software down. There's not even a click; the software is just there as of the date you specify.
Now, if you don't want to invest time in a solution like this, then use the MSI format for Windows. That is a superior way to install software - the user double-clicks on the software and, if you've done the MSI a certain way, the install happens. There's a user decision to install which some won't take advantage of. Again, that only works on Windows, sorry. I'm not versed in Mac installations but I'm sure that there's a way to build installers for Mac as well.
If your users get scared during a normal software install, well, you've got a different problem. If they're familiar with computers at all, they've seen software install before and have most likely done it. | 0 | 1 | 0 | 0 | 2017-03-04T19:38:00.000 | 1 | 0 | false | 42,600,500 | 1 | 0 | 0 | 1 | What could be the best way to distribute a python application to both windows and mac user without scaring them away during the installation process?
I'm writing software which will be of help to my university's students. This software will be used by students of various disciplines, many of whom have little to no programming background.
It would be best if there were some one-click, magic-happens solution for the installation.
How should I go about doing them? Please advice! |
How to import user-defined python modules in cygwin? | 43,129,047 | 0 | 0 | 378 | 0 | python,cygwin,anaconda,python-module,pythonpath | Python at startup builds sys.path using the site.py available in the PYTHONHOME directory. I appended addsitedir() calls to that file, and that worked for me. If there is a space in the path, use double quotes around the path. | 0 | 1 | 0 | 0 | 2017-03-04T22:28:00.000 | 1 | 0 | false | 42,602,126 | 1 | 0 | 0 | 1 | This might be trivial, but I can't identify the reason for not being able to import user-defined python modules into my python environment. I use the Anaconda installation of Python in Cygwin. I have made entries in .bash_profile to append the module directory path to PYTHONPATH in this format.
export PYTHONPATH=$PYTHONPATH:"<dirpath>"
dirpath starts with /cygdrive/c/Users/
I have an __init__.py file available in the module directory to identify it as a python package.
Kindly provide your inputs. Thanks. |
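The addsitedir() approach mentioned in the accepted answer can be tried interactively; here a temporary directory stands in for the real module directory:

```python
import site
import sys
import tempfile

# site.addsitedir() registers a directory on sys.path the same way
# site.py does at interpreter startup (it also processes .pth files).
module_dir = tempfile.mkdtemp()
site.addsitedir(module_dir)
print(module_dir in sys.path)  # prints True
```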
How do I change the environment variable LANG from within a Python script? | 42,617,322 | 0 | 1 | 2,637 | 0 | python,linux,encoding,utf-8 | I think you're overdoing it. Python comes with batteries included; just use them.
A correctly configured terminal session has the LANG environment variable set; it describes which encoding the terminal expects as output from programs running in this session.
Python interpreter detects this setting and sets sys.stdout.encoding according to it. It then uses that encoding to encode any Unicode output into a correct byte sequence. (If you're sending a byte sequence, you're on your own, and likely know what you're doing; maybe you're sending a binary stream, not text at all.)
So, if you output your text as Unicode, it must appear correctly automatically, provided that all the characters can be encoded.
If you need a finer control, pick the output encoding, encode with your own error handling, and output the bytes.
You're not in a business of changing the terminal session's settings, unless you're writing a tool specifically to do that. The user has configured the session; your program has to adapt to this configuration, not alter it, if it's a well-behaved program. | 0 | 1 | 0 | 0 | 2017-03-05T01:22:00.000 | 2 | 0 | false | 42,603,373 | 1 | 0 | 0 | 2 | I'm writing a script in python that generates output that contains utf-8 characters, and even though most linux terminals use utf-8 by default, I'm writing the code presuming it isn't in utf-8 (in case the user changed it, for some reason).
From what I tested, os.environ["LANG"] = "en_US.utf-8" does not change the system environment variable, it only changes in the data structure inside Python. |
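The "encode with your own error handling" advice from the answer above can look like this; with errors="replace", unencodable characters degrade gracefully instead of raising:

```python
text = "café ↔ naïve"

# Pretend the terminal only supports ASCII: unencodable characters
# become '?' instead of raising UnicodeEncodeError.
ascii_bytes = text.encode("ascii", errors="replace")
print(ascii_bytes.decode("ascii"))  # prints caf? ? na?ve
```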
How do I change the environment variable LANG from within a Python script? | 42,617,801 | 0 | 1 | 2,637 | 0 | python,linux,encoding,utf-8 | It is not clear what you want to see happen when you change the LANG environment. If you want to test your Python code with other character encodings, you will need to set LANG before starting the Python code, as I believe LANG is read when Python first starts.
There might(?) be a function call you can call to change the LANG after Python has started, however if this is for testing purposes I recommend setting it before running the Python code.
An even better approach however would be to change the LANG in your terminal program. So that it has the correct encoding. Although almost everyone should be using UTF8, so I am not really sure you need to test non-UTF8 anymore. | 0 | 1 | 0 | 0 | 2017-03-05T01:22:00.000 | 2 | 0 | false | 42,603,373 | 1 | 0 | 0 | 2 | I'm writing a script in python that generates output that contains utf-8 characters, and even though most linux terminals use utf-8 by default, I'm writing the code presuming it isn't in utf-8 (in case the user changed it, for some reason).
From what I tested, os.environ["LANG"] = "en_US.utf-8" does not change the system environment variable, it only changes in the data structure inside Python. |
CGI Script For Python Webserver | 42,608,590 | 0 | 0 | 74 | 0 | python,bash,nginx,webserver,cgi | Why not try passing the address/location of the file you want to download as an argument, and then use that in the <a href> tag to turn it into a link and implement the download functionality | 0 | 1 | 0 | 1 | 2017-03-05T12:28:00.000 | 2 | 0 | false | 42,608,362 | 0 | 0 | 0 | 1 | I have a simple python webserver, but I want to use a CGI script for file download and upload according to client requests. However, I couldn't find any way of setting up CGI except by using apache2, nginx, etc. Is there any way to hook a CGI script into my python webserver with a Bash script or in some other way? Can you give me any advice about it?
Installing AWS elasticbeanstalk command line tool in ubuntu:error The 'awsebcli==3.10.0' distribution was not found and is required by the application | 42,709,943 | 1 | 3 | 418 | 0 | php,python,python-2.7,amazon-web-services,ubuntu | Finally I resolved the issue.
First upgrade pip, and then run pip install --upgrade --user awsebcli. | 0 | 1 | 0 | 0 | 2017-03-06T06:57:00.000 | 1 | 1.2 | true | 42,619,331 | 0 | 0 | 0 | 1 | I am trying to install the AWS Elastic Beanstalk command-line tool on my Ubuntu machine
Installed with pip install --upgrade --user awsebcli
But when I try to get the eb version with eb --version, I get the following error
Traceback (most recent call last): File
"/home/shamon/.local/bin/eb", line 6, in
from pkg_resources import load_entry_point File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line
2927, in
@_call_aside File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line
2913, in _call_aside
f(*args, **kwargs) File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line
2940, in _initialize_master_working_set
working_set = WorkingSet._build_master() File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line
635, in _build_master
ws.require(requires) File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line
943, in require
needed = self.resolve(parse_requirements(requirements)) File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line
829, in resolve
raise DistributionNotFound(req, requirers) pkg_resources.DistributionNotFound: The 'awsebcli==3.10.0'
distribution was not found and is required by the application |
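The traceback above comes from pkg_resources failing to locate the awsebcli distribution; the lookup it performs can be reproduced by hand ("setuptools" below merely stands in for a package known to be installed):

```python
import pkg_resources

# get_distribution() raises DistributionNotFound, exactly as in the
# traceback above, when the named package cannot be located.
dist = pkg_resources.get_distribution("setuptools")
print(dist.project_name)  # prints setuptools
```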
Launch console window pre-activated with chcp 65001 using python | 42,631,022 | 7 | 4 | 4,940 | 0 | python,windows,python-3.x,unicode,console | Add /k chcp 65001 to the shortcut launching the cmd window. Alternatively, use Python 3.6 which uses Windows Unicode APIs to write to the console and ignores the code page. You do still need font support for what you are printing, however. | 0 | 1 | 0 | 0 | 2017-03-06T16:04:00.000 | 2 | 1.2 | true | 42,630,191 | 1 | 0 | 0 | 1 | I use a python library that prints out a Unicode character to windows console. If I call a function on the library that prints out Unicode character, it will throw an exception 'charmap' codec can't encode characters.
So this is what I tried to solve that error:
Call "chcp 65001" windows console command from python using os.system("chcp 65001") before calling the library function.
I know there are questions similar to this and that is why I tried the above solution. It successfully calls the command on the console and tells me that it activated the code page.
However, the exception showed up again. If I try to run the program again without closing the previous console, the program executes successfully without any exception, which means the above console command takes effect only after the first try.
My question is: is there a way to launch windows console by pre-activating Unicode support so that I don't have to call the program twice. |
How to run one airflow task and all its dependencies? | 42,646,246 | 7 | 12 | 12,887 | 0 | python-3.6,airflow,airflow-scheduler | You can run a task independently by using -i/-I/-A flags along with the run command.
But yes the design of airflow does not permit running a specific task and all its dependencies.
You can backfill the DAG by removing non-related tasks from it for testing purposes
airflow run dag_id task_id execution_date
would run all upstream tasks, but it does not. It will simply fail when it sees that not all dependent tasks are run. How can I run a specific task and all its dependencies? I am guessing this is not possible because of an airflow design decision, but is there a way to get around this? |
Installing open-cv on Windows 10 with Python 3.5 | 42,637,351 | 0 | 0 | 591 | 0 | python,opencv | You need to update your environment variables.
In search, go to the control panel
Click the Advanced system settings link.
Click Environment Variables. In the section System Variables, find the PYTHONPATH variable.
Click edit, and add the absolute path to your Lib directory | 0 | 1 | 0 | 0 | 2017-03-06T22:54:00.000 | 1 | 0 | false | 42,637,127 | 1 | 0 | 0 | 1 | I'm having some trouble installing open-cv
I've tried several approaches but only succeeded in installing open-cv by downloading the wheel file from a website which I don't remember and running this command in the command prompt: pip3 install opencv_python-3.2.0-cp35-cp35m-win32.whl;
I can now import cv2 ONLY if I'm in the site-packages directory. If I leave that folder (in CMD, of course) I won't be able to import cv2 (I get a "no module found" message).
In case I didn't express myself well, these are the commands I run to be able to import cv2 inside the "site-packages" directory using CMD:
python
import cv2
If I try this in another directory, it doesn't work. The same if I create a .py file and try to import cv2 |
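A quick way to diagnose "works only inside site-packages" is to ask Python whether it can locate the module from the current directory, without importing it; swap "cv2" in where "json" is used here (json is chosen only because it is always present):

```python
import importlib.util

# find_spec() searches sys.path and returns None when the module
# cannot be located from the current interpreter's search path.
spec = importlib.util.find_spec("json")
print(spec is not None)  # prints True
```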
Error when executing `jupyter notebook` (No such file or directory) | 49,881,600 | 1 | 123 | 125,723 | 0 | python-3.x,jupyter-notebook | For me the fix was simply running pip install notebook
Somehow the original Jupyter install got borked along the way. | 0 | 1 | 0 | 0 | 2017-03-07T12:41:00.000 | 12 | 0.016665 | false | 42,648,610 | 1 | 0 | 0 | 6 | When I execute jupyter notebook in my virtual environment in Arch Linux, the following error occurred.
Error executing Jupyter command 'notebook': [Errno 2] No such file or directory
My Python version is 3.6, and my Jupyter version is 4.3.0
How can I resolve this issue? |
Error when executing `jupyter notebook` (No such file or directory) | 47,619,339 | 67 | 123 | 125,723 | 0 | python-3.x,jupyter-notebook | For me the issue was that the command jupyter notebook changed to jupyter-notebook after installation.
If that doesn't work, try python -m notebook, and if it opens, close it, then
export PATH=$PATH:~/.local/bin/, then refresh your path by opening a new terminal, and try jupyter notebook again.
And finally, if that doesn't work, take a look at vim /usr/local/bin/jupyter-notebook, vim /usr/local/bin/jupyter, vim /usr/local/bin/jupyter-lab (if you have JupyterLab) and edit the #!python version at the top of the file to match the version of python you are trying to use. As an example, I installed Python 3.8.2 on my mac, but those files still had the path to the 3.6 version, so I edited it to #!/Library/Frameworks/Python.framework/Versions/3.8/bin/python3 | 0 | 1 | 0 | 0 | 2017-03-07T12:41:00.000 | 12 | 1 | false | 42,648,610 | 1 | 0 | 0 | 6 | When I execute jupyter notebook in my virtual environment in Arch Linux, the following error occurred.
Error executing Jupyter command 'notebook': [Errno 2] No such file or directory
My Python version is 3.6, and my Jupyter version is 4.3.0
How can I resolve this issue? |
Error when executing `jupyter notebook` (No such file or directory) | 47,279,945 | 4 | 123 | 125,723 | 0 | python-3.x,jupyter-notebook | Since both pip and pip3.6 was installed and
pip install --upgrade --force-reinstall jupyter
was failing, so I used
pip3.6 install --upgrade --force-reinstall jupyter
and it worked for me.
Running jupyter notebook also worked after this installation. | 0 | 1 | 0 | 0 | 2017-03-07T12:41:00.000 | 12 | 0.066568 | false | 42,648,610 | 1 | 0 | 0 | 6 | When I execute jupyter notebook in my virtual environment in Arch Linux, the following error occurred.
Error executing Jupyter command 'notebook': [Errno 2] No such file or directory
My Python version is 3.6, and my Jupyter version is 4.3.0
How can I resolve this issue? |
Error when executing `jupyter notebook` (No such file or directory) | 53,039,574 | 5 | 123 | 125,723 | 0 | python-3.x,jupyter-notebook | Jupyter installation is not working on Mac Os
To run the jupyter notebook:-> python -m notebook | 0 | 1 | 0 | 0 | 2017-03-07T12:41:00.000 | 12 | 0.083141 | false | 42,648,610 | 1 | 0 | 0 | 6 | When I execute jupyter notebook in my virtual environment in Arch Linux, the following error occurred.
Error executing Jupyter command 'notebook': [Errno 2] No such file or directory
My Python version is 3.6, and my Jupyter version is 4.3.0
How can I resolve this issue? |
Error when executing `jupyter notebook` (No such file or directory) | 54,565,364 | 2 | 123 | 125,723 | 0 | python-3.x,jupyter-notebook | Deactivate your virtual environment if you are currently in;
Run following commands:
python -m pip install jupyter
jupyter notebook | 0 | 1 | 0 | 0 | 2017-03-07T12:41:00.000 | 12 | 0.033321 | false | 42,648,610 | 1 | 0 | 0 | 6 | When I execute jupyter notebook in my virtual environment in Arch Linux, the following error occurred.
Error executing Jupyter command 'notebook': [Errno 2] No such file or directory
My Python version is 3.6, and my Jupyter version is 4.3.0
How can I resolve this issue? |
Error when executing `jupyter notebook` (No such file or directory) | 50,421,008 | 0 | 123 | 125,723 | 0 | python-3.x,jupyter-notebook | I'm trying to get this going on VirtualBox on Ubuntu. Finally on some other post it said to try jupyter-notebook. I tried this and it told me to do sudo apt-get jupyter-notebook and that installed a bunch of stuff. Now if I type command jupyter-notebook, it works. | 0 | 1 | 0 | 0 | 2017-03-07T12:41:00.000 | 12 | 0 | false | 42,648,610 | 1 | 0 | 0 | 6 | When I execute jupyter notebook in my virtual environment in Arch Linux, the following error occurred.
Error executing Jupyter command 'notebook': [Errno 2] No such file or directory
My Python version is 3.6, and my Jupyter version is 4.3.0
How can I resolve this issue? |
From which directory to run python Script in webserver | 45,730,408 | 0 | 0 | 341 | 0 | php,python | If your python file and PHP file are in the same folder:
$command_to_run = "python test.py $arg1 $arg2";
$response = shell_exec($command_to_run);
otherwise:
$command_to_run = "python <php_folder>/scripts/python/test.py $arg1 $arg2";
$response = shell_exec($command_to_run); (note: shell_exec() returns the output directly; the $output and $result parameters belong to exec(), not to the command string) | 0 | 1 | 0 | 1 | 2017-03-07T13:37:00.000 | 1 | 0 | false | 42,649,784 | 0 | 0 | 0 | 1 | I am trying to run a python script from a webserver (apache) using PHP. I used the following command
exec("python test.py $arg1 $arg2", $output, $result)
It executes successfully when I put test.py in the document root directory. However, I wanted to run the python script from another subdirectory so that it would be easier for me to manage the output of the python script.
what the python script does is
creates a folder
copies a file from the same directory where the python script resides into the folder created in (1)
zips the folder
The document root and the subdirectory for the python script have the same permission.
Since the script keeps looking for the files to be copied relative to the document root, it generates a "no such file or directory" error (in the apache error log file) |
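One hedged, Python-side way to avoid the "no such file or directory" error described above is to anchor every path to the script's own location rather than the process working directory (the file name below is purely illustrative):

```python
import os

# Directory containing this script, regardless of where it was
# launched from (e.g. by PHP's exec from the document root).
script_dir = os.path.dirname(os.path.abspath(__file__))

# Build paths relative to the script, not to the process cwd.
source_file = os.path.join(script_dir, "file_to_copy.txt")
print(script_dir)
```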
Toree Installation Issue | 46,944,308 | 0 | 3 | 1,230 | 0 | python,apache-spark,pip,apache-toree | My situation is similar to yours: your jupyter client version is higher than the one Toree can work with, so try uninstalling the jupyter client first. | 0 | 1 | 0 | 0 | 2017-03-08T02:23:00.000 | 3 | 0 | false | 42,661,846 | 1 | 0 | 0 | 1 | I wanted to pip install the Toree package, but I ended up with the following error msg:
Could not find a version that satisfies the requirement toree (from
versions: ) No matching distribution found for toree
I couldn't find any documentation on requirements for toree. Also, pip doesn't seem to be the issue here either since it successfully installed other packages I tested.
Here are my systems:
1. Mac 10.11.16
2. Pip 9.0.1
3. Python 3.5 |