Dataset columns, in row order (name: dtype, observed range):

- Available Count: int64, 1 to 31
- AnswerCount: int64, 1 to 35
- GUI and Desktop Applications: int64, 0 to 1
- Users Score: int64, -17 to 588
- Q_Score: int64, 0 to 6.79k
- Python Basics and Environment: int64, 0 to 1
- Score: float64, -1 to 1.2
- Networking and APIs: int64, 0 to 1
- Question: string, 15 to 7.24k characters
- Database and SQL: int64, 0 to 1
- Tags: string, 6 to 76 characters
- CreationDate: string, 23 characters
- System Administration and DevOps: int64, 0 to 1
- Q_Id: int64, 469 to 38.2M
- Answer: string, 15 to 7k characters
- Data Science and Machine Learning: int64, 0 to 1
- ViewCount: int64, 13 to 1.88M
- is_accepted: bool, 2 classes
- Web Development: int64, 0 to 1
- Other: int64, 1 to 1
- Title: string, 15 to 142 characters
- A_Id: int64, 518 to 72.2M

Each record below is one pipe-separated row in this column order; long Question and Answer fields wrap across several lines.
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | When I run my unit tests I am getting UncompressableFileError for files installed through Bower. This happens because I don't run bower install in my unit tests and I don't want to have to run bower install for my unit tests.
Is there a way to disable django-compressor, or to mock the files so that this error doesn't happen?
I have COMPRESS_ENABLED set to False but no luck there, it still looks for the file. | 0 | python,django,unit-testing,django-compressor | 2016-04-28T21:04:00.000 | 0 | 36,925,440 | I think if you also set COMPRESS_PRECOMPILERS = () in your test-specific settings, that should fix your problem. | 0 | 99 | false | 1 | 1 | Django-Compressor throws UncompressableFileError on bower installed asset | 38,980,458 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | TL;DR
I would like to be able to check if a git repo (located on a shared network) was updated without using a git command. I was thinking checking one of the files located in the .git folder to do so, but I can't find the best file to check. Anyone have a suggestion on how to achieve this?
Why:
The reason I need to do this is that I have many git repos located on a shared drive. From a python application I built, I synchronize the content of some of these git repos onto a local drive on a lot of workstations and render nodes.
I don't want to use git directly because the git server is not powerful enough to support the volume of requests that all the computers in the studio would constantly generate.
This is why I ended up with the solution of putting the repos on the network server and syncing the repo content into a local cache on each computer using rsync.
That works fine, but as time goes by, the repos are getting larger and the rsync is taking too much time to perform. So I would like to (ideally) check one file that would tell me if the local copy is out of sync with the network copy, and perform the rsync only when they are out of sync.
Thanks | 0 | python,git | 2016-04-29T19:40:00.000 | 1 | 36,946,288 | Check .git/FETCH_HEAD for the timestamp and the content.
Every time you fetch content, Git updates both the content and the modification time of that file. | 0 | 63 | true | 0 | 1 | How to check if a git repo was updated without using a git command | 36,946,689
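As an illustration of the check that answer describes, a hedged Python sketch: compare the file's modification time with the last value the sync job recorded. The repo path and stamp-file name are hypothetical, and this assumes a fetch actually touches .git/FETCH_HEAD in your workflow.

```python
# Hedged sketch: decide whether to rsync by comparing the mtime of
# .git/FETCH_HEAD (per the answer) with the last synced value.
import os

def needs_sync(repo_path, stamp_file):
    fetch_head = os.path.join(repo_path, ".git", "FETCH_HEAD")
    mtime = os.path.getmtime(fetch_head)
    try:
        with open(stamp_file) as f:
            last_synced = float(f.read())
    except (IOError, ValueError):
        last_synced = 0.0            # no stamp yet: treat as out of sync
    return mtime > last_synced

def record_sync(repo_path, stamp_file):
    fetch_head = os.path.join(repo_path, ".git", "FETCH_HEAD")
    with open(stamp_file, "w") as f:
        f.write(str(os.path.getmtime(fetch_head)))
```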
2 | 2 | 0 | 1 | 0 | 1 | 0.099668 | 0 | I installed xgboost in PythonAnywhere and it shows successful but when I import it in a script, an error is given which says, "No module xgboost found". What can be the reason? | 0 | python,pythonanywhere,xgboost | 2016-04-30T00:28:00.000 | 0 | 36,949,405 | You probably installed it for a version of Python that is different to the one that you're running. | 0 | 136 | false | 0 | 1 | Xgboost giving import error in pythonAnywhere | 36,953,546 |
2 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I installed xgboost in PythonAnywhere and it shows successful but when I import it in a script, an error is given which says, "No module xgboost found". What can be the reason? | 0 | python,pythonanywhere,xgboost | 2016-04-30T00:28:00.000 | 0 | 36,949,405 | In my case, I use Anaconda2 and installed xgboost through git. Everything was ok, but I got this message while trying to import xgboost:
No module xgboost found
When I ran pip install xgboost I got the message that everything is ok and xgboost is installed.
I went into ../Anaconda2/Lib/site-packages and saw a folder xgboost-0.6-py2.7.egg, and inside it there was another folder, xgboost. I just copied that xgboost folder and pasted it directly into ../Anaconda2/Lib/site-packages. And now it works =) | 0 | 136 | false | 0 | 1 | Xgboost giving import error in pythonAnywhere | 46,962,994
1 | 2 | 0 | 0 | 3 | 1 | 0 | 0 | What is the $PYTHONPATH variable, and what's the significance in setting it?
Also, if I want to know the content of my current pythonpath, how do I find that out? | 0 | python,shell,path,environment-variables | 2016-04-30T17:14:00.000 | 0 | 36,957,843 | PYTHONPATH is the default search path for importing modules. If you use bash, you could type echo $PYTHONPATH to look at it. | 0 | 464 | false | 0 | 1 | Trying to understand the pythonpath variable | 36,957,901 |
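A quick way to see the effective search path from inside Python itself (the PYTHONPATH entries plus the interpreter's defaults):

```python
import sys
print(sys.path)   # PYTHONPATH entries appear here, after the script's directory
```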
1 | 1 | 0 | 3 | 1 | 0 | 1.2 | 0 | I have a simple problem: multiply a matrix by a vector. However, the implementation of the multiplication is complicated because the matrix is 18 gb (3000^2 by 500).
Some info:
The matrix is stored in HDF5 format. It's Matlab output. It's dense so no sparsity savings there.
I have to do this matrix multiplication roughly 2000 times over the course of my algorithm (MCMC Bayesian Inversion)
My program is a combination of Python and C, where the Python code handles most of the MCMC procedure: keeping track of the random walk, generating perturbations, checking MH Criteria, saving accepted proposals, monitoring the burnout, etc. The C code is simply compiled into a separate executable and called when I need to solve the forward (acoustic wave) problem. All communication between the Python and C is done via the file system. All this is to say I don't already have ctype stuff going on.
The C program is already parallelized using MPI, but I don't think that's an appropriate solution for this MV multiplication problem.
Our program is run mainly on linux, but occasionally on OSX and Windows. Cross-platform capabilities without too much headache is a must.
Right now I have a single-thread implementation where the python code reads in the matrix a few thousand lines at a time and performs the multiplication. However, this is a significant bottleneck for my program since it takes so darn long. I'd like to multithread it to speed it up a bit.
I'm trying to get an idea of whether it would be faster (computation-time-wise, not implementation time) for python to handle the multithreading and to continue to use numpy operations to do the multiplication, or to code an MV multiplication function with multithreading in C and bind it with ctypes.
I will likely do both and time them since shaving time off of an extremely long running program is important. I was wondering if anyone had encountered this situation before, though, and had any insight (or perhaps other suggestions?)
As a side question, I can only find algorithmic improvements for nxn matrices for m-v multiplication. Does anyone know of one that can be used on an mxn matrix? | 0 | python,c,multithreading,linear-algebra,hdf5 | 2016-04-30T19:59:00.000 | 0 | 36,959,589 | Hardware
As Sven Marnach wrote in the comments, your problem is most likely I/O bound since disk access is orders of magnitude slower than RAM access.
So the fastest way is probably to have a machine with enough memory to keep the whole matrix multiplication and the result in RAM. It would save lots of time if you read the matrix only once.
Replacing the harddisk with an SSD would also help, because that can read and write a lot faster.
Software
Barring that, for speeding up reads from disk, you could use the mmap module. This should help, especially once the OS figures out you're reading pieces of the same file over and over and starts to keep it in the cache.
Since the calculation can be done by row, you might benefit from using numpy in combination with a multiprocessing.Pool for that calculation. But only really if a single process cannot use all available disk read bandwidth. | 1 | 342 | true | 0 | 1 | Efficient Matrix-Vector Multiplication: Multithreading directly in Python vs. using ctypes to bind a multithreaded C function | 36,959,985
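To make the row-chunked idea concrete, a hedged sketch (not the asker's actual code): assuming the Matlab -v7.3 file is readable with h5py and the matrix lives in a dataset named "A" (both assumptions), a single process can stream the 18 GB matrix through RAM in slabs:

```python
# Minimal chunked matrix-vector multiply over an HDF5 dataset.
# File path, dataset name "A", and chunk size are placeholders.
# Note: Matlab writes arrays column-major, so the dataset may appear
# transposed relative to numpy; adjust the indexing if so.
import h5py
import numpy as np

def matvec_chunked(h5_path, x, chunk=4096):
    with h5py.File(h5_path, "r") as f:
        A = f["A"]                       # on-disk dataset of shape (m, n)
        m = A.shape[0]
        y = np.empty(m, dtype=np.float64)
        for i in range(0, m, chunk):
            block = A[i:i + chunk, :]    # reads only this slab from disk
            y[i:i + chunk] = block.dot(x)
    return y
```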
1 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 0 | I have run into errors trying to pip install fabric, or paramiko (results in a pycrypto install RuntimeError: chmod error).
Is there a way to ssh from within a qpython script? | python,qpython | 2016-04-30T21:26:00.000 | 1 | 36,960,431 | You need a compiler to build the cryptography module, and it is not included. The best option is to get the cross-compiler and build the module yourself. I don't see any prebuilt ssh/paramiko module for QPython.
Maybe you can try other libs, such as busybox/ssh or dropbear for ARM.
Update
I've taken a proper look at the QPython modules, and both OpenSSL and SSH are preinstalled. You don't need to install them.
Still having problems with the Crypto module. I can't understand how useful the ssh module is without the Crypto one ... omg.
Update 2
Tried the QPyPI lib manager and found cryptography in the list, but at install time it couldn't be found. Couldn't believe how difficult it is to get ssh working with QPython. | 0 | 650 | false | 0 | 1 | Is there a way to ssh with qpython? | 42,102,991
1 | 2 | 0 | 9 | 9 | 0 | 1.2 | 0 | I'm working on a variscite board with a yocto distribution and python 2.7.3.
I sometimes get a Bus error message from the python interpreter.
My program runs normally for at least some hours or days before the error occurs.
But when I get it once, I get it directly when I try to restart my program.
I have to reboot before the system works again.
My program uses only a serial port, a bit usb communication and some tcp sockets.
I can switch to another hardware and get the same problems.
I also used the python selftest with
python -c "from test import testall"
And I get errors for these two tests
test_getattr (test.test_builtin.BuiltinTest) ... ERROR test_nameprep
(test.test_codecs.NameprepTest) ... ERROR
And the selftest stops always at
test_callback_register_double (ctypes.test.test_callbacks.SampleCallbacksTestCase) ... Segmentation
fault
But when the systems runs some hours the selftests stops earlier at
ctypes.macholib.dyld
Bus error
I checked the RAM with memtester, it seems to be okay.
How I can find the cause for the problems? | 0 | python,linux,embedded | 2016-05-01T18:06:00.000 | 1 | 36,970,110 | Bus errors are generally caused by applications trying to access memory that hardware cannot physically address. In your case there is a segmentation fault which may cause dereferencing a bad pointer or something similar which leads to accessing a memory address which physically is not addressable. I'd start by root causing the segmentation fault first as the bus error is the secondary symptom. | 0 | 17,329 | true | 0 | 1 | How to determine the cause for "BUS-Error" | 37,817,521 |
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 1 | I have not seen any questions regarding packet filters in Python, and I am wondering if it's possible to build one at all.
Is there any way of building a custom firewall in Python? Null-routing specific IPs, for example, or blocking them when a request threshold is exceeded within 5 seconds.
What modules would it need? Would it be extra difficult? Is Python useful for things like firewalls?
Also, would it be possible to add powerful protection? So it can filter packets on all the layers.
I'm not asking for a script or an exact tutorial to build it; my sorted question:
How possible would it be to build a firewall in Python? Could I make it powerful enough to filter packets on all layers? Would it be easy to build a simple firewall? | 0 | python,network-programming | 2016-05-03T18:38:00.000 | 0 | 37,011,901 | Yes, that would be possible; Python has broad networking support (I would start with the socket module; see the docs for that).
I would not say that it will be easy or built in a single weekend, but you should give it a try and spend some time on it! | 0 | 1,306 | false | 0 | 1 | Packet filter in Python? | 37,013,135
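To make the socket-module starting point concrete, a hedged sketch of a Linux-only sniffer (raw sockets need root): it only observes and counts source IPs, which is the raw material for the rate-based blocking the question describes; actual dropping would usually be delegated to the kernel (e.g. iptables/NFQUEUE).

```python
# Hedged sketch, Linux only, run as root: count frames per source IP.
# A real firewall would act on these counts, e.g. by adding iptables rules.
import socket
import struct
from collections import Counter

sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))
hits = Counter()

while True:
    frame, _ = sniffer.recvfrom(65535)
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype == 0x0800:                      # IPv4, no VLAN tag assumed
        src = socket.inet_ntoa(frame[26:30])     # IPv4 source address field
        hits[src] += 1
        print(src, hits[src])
```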
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | If I want to make a button in Kodi's menu and run a local python script upon clicking it, what's the best way to go about it? | python,customization,kodi | 2016-05-03T22:52:00.000 | 0 | 37,015,712 | # assumes Kodi's built-in xbmc module; os comes from the standard library
import os, xbmc
file_path = xbmc.translatePath(os.path.join('insert path here to file you want to run'))
xbmc.executebuiltin("XBMC.RunScript(" + file_path + ")")
Very late reply, but I saw no one else had answered, so I thought I'd put it in just in case | 0 | 1,738 | true | 0 | 1 | How can I run a python script from within Kodi? | 40,060,071
1 | 1 | 0 | 1 | 1 | 0 | 1.2 | 0 | I wanted to find the initial point of a stepper motor, if it exists, so I could always rotate it in 90-degree increments, or 512 steps (2048 steps for a full rotation). I've put four cups on the stepper motor and I want to use degree 0 for cup 1, degree 90 for cup 2, and so on. I'm using it with a Beaglebone Black and the python language. So far I've only managed to move the motor by giving it a number of steps. I'm using the Adafruit_BBIO library to control GPIOs from the Beaglebone.
Is it possible to get the motor's initial position, or move it to an initial position? I've never used a stepper motor before.
Thank you. | 0 | python,beagleboneblack,gpio | 2016-05-04T21:20:00.000 | 0 | 37,038,234 | No - it is not possible to determine the exact position of a stepper motor without additional information (inputs). As you've noticed, you can only move a certain number of steps, but unless you know where you started, you won't know where you end up.
This is usually solved by using another input, typically a limit switch, at a known location such that the switch closes when the moving part is directly over that location. When you first start up, you rotate the stepper until the switch closes, at which point you know the current location. Once you have calibrated your initial position, THEN you can determine your exact position by counting steps (assuming your motor doesn't ever slip!)
You see this a lot with inkjet printers; when you first turn them on, the print head will slide all the way to one side (where there is almost certainly some sort of detector). That is the printer finding its zero point.
Some alternatives to a switch:
If you don't need full rotation, you can use a servo motor instead. These DO have internal position sensing.
Another hack solution using a stepper would be to place a mechanical block at one extremity that will prevent your mechanism from passing. Then just rotate the stepper one full revolution in a given direction. You know that at some point you will have hit the block and have stopped. This isn't great; you have to be careful that running into the stop won't damage anything or knock anything out of alignment. Due to the nature of steppers, your step count may also be off by up to 3 steps, so this won't be super high precision. | 0 | 1,571 | true | 0 | 1 | Stepper motor 28BYJ-48: How to find the angle 0? Or its initial point? | 37,057,732 |
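A hedged sketch of the limit-switch homing described above, using Adafruit_BBIO as in the question; the pin name and the step routine are hypothetical, and this assumes the switch pulls the pin low when closed:

```python
import time
import Adafruit_BBIO.GPIO as GPIO

SWITCH_PIN = "P8_10"                 # hypothetical input wired to the switch
GPIO.setup(SWITCH_PIN, GPIO.IN)

def step_once():
    # drive one step of the 28BYJ-48 coil sequence here (omitted)
    time.sleep(0.002)

steps = 0
while GPIO.input(SWITCH_PIN) and steps < 2048:   # stop when the switch closes
    step_once()
    steps += 1
# position is now the known zero; cups 1-4 sit 0, 512, 1024, 1536 steps away
```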
1 | 1 | 0 | 2 | 0 | 0 | 0.379949 | 0 | Can I use mod_python.so and mod_wsgi.so at the same time on Apache Web Server, defining different directories for each of them? At the moment I cannot enable them both in my apache config file at the same time using LoadModule.
mod_wsgi for Django and mod_python for .py and .psp scripts. | wsgi,mod-python,mod | 2016-05-05T10:10:00.000 | 0 | 37,047,860 | For recent versions of mod_wsgi, no, you cannot load them at the same time; mod_wsgi will prevent it because mod_python's thread code doesn't use the Python C API for threads properly and causes various problems.
Short answer is that you shouldn't be using mod_python any more. Use a proper Python web framework with a more modern template system instead.
If for some reason you really don't want to do that, go back and use mod_wsgi 3.5. | 0 | 481 | false | 1 | 1 | Enable mod_python and mod_wsgi module | 37,048,554 |
1 | 2 | 0 | 4 | 0 | 1 | 1.2 | 0 | I am running a python script on my raspberry pi and I was just wondering if there is any command that I can use that counts how many lines are in my script.
Regards | 0 | python,raspberry-pi,raspbian,lines-of-code | 2016-05-06T07:27:00.000 | 0 | 37,066,703 | You can use the wc command:
wc -l yourScript.py | 0 | 571 | true | 0 | 1 | python script - counting lines of code in the script | 37,066,929 |
1 | 2 | 0 | 0 | 1 | 0 | 0 | 1 | I have a python script for ssh which helps to run various Linux commands on a remote server using the paramiko module. All the outputs are saved in a text file, and the script runs properly. Now I want to run this script automatically twice a day, at 11am and 5pm, every day.
How can I run this script automatically every day at the given times without launching it manually every time? Is there any software or module for this?
Thanks for your help. | 0 | python | 2016-05-06T20:14:00.000 | 0 | 37,080,703 | Assuming you are running on a *nix system, cron is definitely a good option. If you are running a Linux system that uses systemd, you could try creating a timer unit. It is probably more work than cron, but it has some advantages.
I won't go through all the details here, but basically:
Create a service unit that runs your program.
Create a timer unit that activates the service unit at the prescribed times.
Start and enable the timer unit. | 0 | 1,567 | false | 0 | 1 | Automatically run python script twice a day | 37,081,699 |
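For the cron route, a crontab line like the following (the script path is a placeholder) runs the job at 11:00 and 17:00 every day; install it with crontab -e:

```
0 11,17 * * * /usr/bin/python /home/user/ssh_report.py
```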
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | What would be the best way to run a python script on the first of every month?
My situation is I want some data sent to a HipChat room, using the python api, on the first of every month from AWS. The data I want sent is in a text file in an S3 bucket | 0 | python-2.7,amazon-web-services,amazon-s3 | 2016-05-06T20:47:00.000 | 0 | 37,081,162 | Create a Lambda function and use CloudWatch ==> Events ==> Rules, and configure it
using either:
1: AWS built-in timers
2: Cron expressions
In your case a cron expression is the better option | 0 | 527 | false | 0 | 1 | Best way to run scheduled code on AWS | 37,085,603
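For reference, a CloudWatch Events schedule expression for midnight UTC on the first of every month, in AWS's six-field cron format, would look like this:

```
cron(0 0 1 * ? *)
```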
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 | I am making an app which will log in to a website and scrape it for the information I need. I currently have all the login and web scraping written in Python and completely done. What I am trying to figure out is running that python code in Xcode in my swift project. I want to avoid setting up a server capable of executing cgi scripts. Essentially the user will input their credentials, I will pass them to the python file, and the script will run. | 0 | python,ios,swift | 2016-05-06T22:00:00.000 | 0 | 37,082,038 | Short answer: You don't.
There is no Python interpreter running on iOS, and Apple will likely neither provide nor allow one, since they don't allow you to deliver and run new code in an iOS app once it's installed. The code is supposed to be fixed at install time, and Python is an interpreted language. | 0 | 1,182 | false | 1 | 1 | How do I run python script within swift app? | 37,082,146
2 | 2 | 0 | -1 | 1 | 0 | -0.099668 | 0 | How to deactivate inline mode in your Bot? When you talk to BotFather by /help he doesn't give any instructions. Thanks | 0 | telegram-bot,python-telegram-bot | 2016-05-08T12:00:00.000 | 0 | 37,099,589 | Use /setinline and then /setnoinline command to disable inline mode. | 0 | 1,376 | false | 0 | 1 | How to deactivate inline mode in your Bot? | 37,102,384 |
2 | 2 | 0 | 2 | 1 | 0 | 0.197375 | 0 | How to deactivate inline mode in your Bot? When you talk to BotFather by /help he doesn't give any instructions. Thanks | 0 | telegram-bot,python-telegram-bot | 2016-05-08T12:00:00.000 | 0 | 37,099,589 | Type /setinline, choose bot to disable inline mode, that type /empty. This will disable inline mode in your bot. | 0 | 1,376 | false | 0 | 1 | How to deactivate inline mode in your Bot? | 60,238,809 |
1 | 1 | 0 | 3 | 1 | 0 | 1.2 | 0 | From reading Stackoverflow it seems that simply using a UUID for confirming registration via email is bad. Why is that? Why do I need to fancily generate some less random code from a user's data?
The suggested ways seem to be a variant of users data + salt -> hash. When a UUID is used, it always gets hashed. Why is that? There isn't anything to hide or obfuscate, right?
Sorry if this question is stupid.
Right now I am prototyping with Python3's UUID builtins. Is there something special about them? | python-3.x,cryptography,uuid,email-validation | 2016-05-08T20:14:00.000 | 0 | 37,104,377 | The point of email verification is that someone with malicious intentions is prevented from registering arbitrary email addresses without having access to their respective inboxes (just consider a prankster who wants to sign up a target for daily cat facts, or, more sinister, signing someone up for a paid email newsletter or spamming their inbox, potentially their work inbox, with explicit content). Hence the confirmation code, which must be cryptographically secure. One important feature of a cryptographically secure confirmation code is that it can not be predicted or guessed.
This is why UUIDs are not suitable: The main feature of UUIDs is that a collision is astronomically unlikely. However, the UUID generation algorithm is not designed to be unpredictable. Typically a UUID is generated from the generating system's MAC address(es), the time of generation and a few bits of entropy. The MAC address and the time are well determined. The use of a PRNG that's fed simply by PID and time is also perfectly permissible. The whole point of UUIDs is to avoid collisions, not to make them unpredictable or unguessable. For that it suffices to have bits that are unique to the generating system (that never change) and a few bits that prevent this particular system from generating the same UUID twice, simply by spreading UUIDs over time, the process generating them, and the process's internal state.
So if I know which system is going to generate a UUID, i.e. know its MAC addresses, the time at which the UUID is generated, there are only some extra 32 or so bits of entropy that randomize the UUID. And 32 bits simply doesn't cut it, security wise.
Assuming that a confirmation token is valid for 24 hours, that one can attempt >100 confirmation requests per second, and that the UUID generator has 32 bits of extra randomness (in addition to time and MAC, which we assume are well known), this gives a 2% chance of finding a valid confirmation UUID.
Note that you can not "block" confirmation requests if too many invalid UUIDs are attempted per time interval, as this would effectively give an attacker a DoS tool to prevent legitimate users from confirming their email addresses (also, including the email address in the confirmation request doesn't help; this just allows targeting specific email addresses for a DoS). | 0 | 878 | true | 0 | 1 | Why can't an email confirmation code be a UUID? | 37,104,547
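To make the answer concrete, a minimal sketch of generating an unpredictable confirmation token in Python (secrets is in the standard library from Python 3.6; on older versions, os.urandom serves the same purpose):

```python
import secrets

token = secrets.token_urlsafe(32)   # 32 bytes from the OS CSPRNG, URL-safe text
# store the token server-side, email it to the user, and later compare with
# secrets.compare_digest(stored, submitted) to avoid timing side channels
```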
1 | 1 | 0 | 0 | 6 | 0 | 0 | 0 | In my php website, I call a python script using theano and running on GPU.
However, when calling this python script from php, it seems apache doesn't have any permissions on GPU so the program falls back on CPU, which is far less efficient compared to GPU.
How can I grant apache rights to run programs on the GPU? | php,python,apache,gpu,theano | 2016-05-09T08:03:00.000 | 0 | 37,110,535 | I would split that up: save the requirement as an event in some storage (redis for example, or even rabbitmq) and listen to that with a daemonized script (cron would be a bad choice since it's hard to make it run more often than every minute). The script will update the storage entry with the results, and you can access it again in your http stack. You can implement the functionality via ajax or utilize a usleep command in php to wait for the results. If using a while loop, don't forget to break it after 1 second or so, so the request is not running too long.
Your problem might be the configured user that executes the php binary; it may not be permitted to access those binaries on your system. Typically it's the www-data user. By adding the www-data user to the necessary group, you might be able to solve it without splitting everything up. Have a look at the binary's ownership and permissions to figure that out. | 0 | 757 | false | 0 | 1 | How to enable php run python script on GPU? | 44,941,997
2 | 3 | 0 | 0 | 0 | 0 | 1.2 | 0 | I want to transform unicode into ascii characters, transfer them through a channel that only accepts ascii characters, and then transform them back into proper unicode.
I'm dealing with the unicode characters like ɑ in Python 3.5.
ord("ɑ") gives me 63 with is the same as what ord("?") also gives me 63. This means simply using ord() and chr() doesn't work. How do I get the right conversion? | 0 | python,unicode | 2016-05-09T15:48:00.000 | 0 | 37,120,088 | I found my error. I used Python via the Windows console and the Windows console mishandeled the unicode. | 0 | 381 | true | 0 | 1 | Is there something like ord() in python that gives the unicode hex? | 37,120,499 |
2 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | I want to transform unicode into ascii characters, transfer them through a channel that only accepts ascii characters, and then transform them back into proper unicode.
I'm dealing with the unicode characters like ɑ in Python 3.5.
ord("ɑ") gives me 63 with is the same as what ord("?") also gives me 63. This means simply using ord() and chr() doesn't work. How do I get the right conversion? | 0 | python,unicode | 2016-05-09T15:48:00.000 | 0 | 37,120,088 | You can convert a number to a hex string with "0x%x" %255 where 255 would be the number you want to convert to hex.
To do this with ord, you could do "0x%x" %ord("a") or whatever character you want.
You can remove the 0x part of the string if you don't need it. If you want to hex to be capitalized (A-F) use "0x%X" %ord("a") | 0 | 381 | false | 0 | 1 | Is there something like ord() in python that gives the unicode hex? | 37,120,442 |
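For illustration, a Python 3 session on a console whose encoding can actually display the character ("ɑ" is U+0251; per the accepted answer above, the asker's Windows console substituted "?", which is where the 63 came from):

```python
>>> ord("ɑ")
593
>>> hex(ord("ɑ"))
'0x251'
>>> chr(0x251)
'ɑ'
```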
1 | 1 | 0 | 3 | 4 | 1 | 1.2 | 0 | I'm trying to get used to Python and VMware vSphere API Python Bindings (pyVmomi). I try to understand the purpose of each component. What's the purpose of pyVim within pyVmomi? From what I understand, pyVim is used for connection handling (creation, deletion...) to the Virtualization Management Object Management Infrastructure (VMOMI). Is this correct?
Thank you & best regards,
Patrick | 0 | python,vmware,pyvmomi | 2016-05-10T19:08:00.000 | 0 | 37,146,986 | That is correct. Also as of recently some new Task handling functionality has been added to pyVim as well. The new task stuff abstract out making property collectors to monitor task progress and such. The connection classes provided allow various authentication methods supported by vSphere such as basic auth, SSPI, and a few others. It also handles disconnecting and cleaning up connections once closed. The VMOMI classes from pyVmomi are the objects inside vSphere like HostSystem, VirtualMachines, Network, etc. | 0 | 5,088 | true | 0 | 1 | Purpose of pyVim within pyVmomi | 37,168,227 |
1 | 1 | 0 | 0 | 0 | 1 | 1.2 | 0 | Is someone out there familiar with IronPython internals, specifically with PythonAst and LanguageContext classes ?
My application compiles a Python script's source and then looks into the PythonAst to find variables. While I can successfully find global variables, I am unable to get functions' local variables. Is it possible somehow?
Another question would be how to also find the current type of a variable, as it can be inferred from the compiled code, as well as its current value.
After a script was executed I can use the ScriptScope structure, or at debug time I can parse a debug frame for variables and their values, but I would like to do it at compile time too, as the user constructs the code. Is this possible at all?
Thanks. | python,ironpython | 2016-05-11T10:31:00.000 | 0 | 37,159,990 | In the meantime I got help on another forum and found a solution. It basically goes like this:
A FunctionDefinition has a body which will most likely be a SuiteStatement (which is just a collection of statements). Local variables will be defined with AssignmentStatements, where the Left side is an array of NameExpressions. From there you can figure out what the locals are, by gathering all of the NameExpressions. | 0 | 61 | true | 0 | 1 | PythonAst and LanguageContext | 37,422,119 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I looked around the web and couldn't really find anything; I guess I'm searching wrong.
I'm trying to import a file I built.
In cmd, to use it, I ran a cd command and just used it.
In the shell it keeps telling me:
[ Traceback (most recent call last): File "", line 1, in
from ch09 import * ImportError: No module named 'ch09' ]
(I'm just learning python myself, hence ch09.)
Please, can someone help me with this? Ideally both in cmd (so I don't have to use cd, though that is fine) and, more importantly, in the shell.
Thanks, Josh. | 0 | shell,python-3.x,import | 2016-05-12T11:55:00.000 | 1 | 37,186,159 | You have to be in the directory in which the file is located in order to import it from another script or from the interactive shell.
So you should either put the script trying to import ch09 in the same folder as ch09.py or you should use os.chdir to cd to the directory internally. | 0 | 46 | false | 0 | 1 | #python3.x Importing in Shell | 43,708,606 |
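Both options from the answer in a short sketch (the folder path is a placeholder); changing the working directory is only one way, since appending to sys.path also makes the module importable from anywhere:

```python
import os
import sys

os.chdir("/path/to/folder/with/ch09")          # option 1: cd there internally
sys.path.append("/path/to/folder/with/ch09")   # option 2: extend the search path

from ch09 import *
```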
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I'm trying to use a PHP site button to kick off a python script on my server. When I run it, everything seems fine, on the server I can "ps ax" and see that the script is running.
The Python script attempts to process some files and write the results to a MySQL database. When I ultimately check to see that the changes were made to the DB, nothing has happened. Also, redirecting output shows no errors.
I have checked to make sure that it's executing (the ps ax)
I've made sure that all users have access to writing to the output directory (for saving the error report, if there is one)
I've made sure that the logon to MySql is correct...
I'm not sure what else to do or check. | php,python,mysql | 2016-05-13T15:10:00.000 | 0 | 37,213,568 | You can check your web-server logs (typically /var/log/apache2/error.log) if you have Apache as your webserver. | 0 | 40 | false | 0 | 1 | Running a python (with MySQL) script from PHP | 37,213,822
2 | 2 | 0 | 5 | 4 | 0 | 0.462117 | 1 | I'm new to Python. Currently I'm using it to connect to remote servers via ssh, do some stuff (including copying files), then close the session.
After each session I do ssh.close() and sftp.close(). I'm just doing this because that's the typical way I found on the internet.
I'm wondering what would happen if I just finished my script without closing the session. Would that affect the server? Would this put some kind of (even very small) load on the server? I mean, why are we doing this in the first place? | 0 | python,session,ssh,sftp,conceptual | 2016-05-18T06:36:00.000 | 0 | 37,291,961 | The (local) operating system closes any pending TCP/IP connection opened by the process when the process closes (even if it merely crashes).
So in the end the SSH session is closed, even if you do not close it. Obviously, it's closed abruptly, without proper SSH cleanup. So it may trigger some warning in the server log.
Closing the session is particularly important, when the process is long running, but the session itself is used only shortly.
Anyway, it's a good practice to close the session no matter what. | 0 | 1,588 | false | 0 | 1 | What would happen if I don't close an ssh session? | 37,292,169 |
2 | 2 | 0 | 3 | 4 | 0 | 0.291313 | 1 | I'm new to Python. Currently I'm using it to connect to remote servers via ssh, do some stuff (including copying files), then close the session.
After each session I do ssh.close() and sftp.close(). I'm just doing this because that's the typical way I found on the internet.
I'm wondering what would happen if I just finished my script without closing the session. Would that affect the server? Would this put some kind of (even very small) load on the server? I mean, why are we doing this in the first place? | 0 | python,session,ssh,sftp,conceptual | 2016-05-18T06:36:00.000 | 0 | 37,291,961 | We close the session after use so that the cleanup (closing of all running processes associated with it) is done correctly/easily.
When you ssh.close() it generates a SIGHUP signal. This signal kills all the tasks/processes under the terminal automatically/instantly.
When you abruptly end the session, that is, without the close(), the OS eventually gets to know that the connection is lost/disconnected and initiates the same SIGHUP signal, which closes most open processes/sub-processes.
Even with all that, there are possible issues, like a few processes continuing to run even after SIGHUP because they were started with a nohup option (or have somehow been disassociated from the current session). | 0 | 1,588 | false | 0 | 1 | What would happen if I don't close an ssh session? | 37,292,405
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I'm using happybase to access HBase. However, the only parameter I need is the host name. How does Thrift work without authentication? How do I add security to my code? | python,python-3.x,hbase,thrift,happybase | 2016-05-19T01:22:00.000 | 0 | 37,312,465 | Thrift servers are generally only run in a trusted network. That said, Thrift can run over SSL, but support in happybase is limited because no one stepped up to properly design and implement an API for it. Feel free to contribute. | 0 | 410 | false | 1 | 1 | How do I add authentication/security to access HBase using happybase? | 40,366,307
1 | 2 | 0 | 3 | 7 | 0 | 1.2 | 0 | The Jenkins ShiningPanda plugin provides a Managers Jenkins - Configure System setting for Python installations... which includes the ability to Install automatically. This should allow me to automatically setup Python on my slaves.
But I'm having trouble figuring out how to use it. When I use the Add Installer drop down it gives me the ability to
Extract .zip/.tar.gz
Run Batch Command
Run Shell Command
But I can't figure out how people us these options to install Python. Especially as I need to install Python on Windows, Mac, & Linux.
Other Plugins like Ant provide an Ant installations... which installs Ant automatically. Is this possible with Python? | 0 | python,plugins,jenkins | 2016-05-19T16:12:00.000 | 1 | 37,328,773 | As far as my experiments for jenkins and python goes, shining panda plug-in doesn't install python in slave machines in fact it uses the existing python library set in the jenkins configuration to run python commands.
In order to install python on slaves, I would recommend to use python virtual environment which comes along with shining panda and allows to run the python commands and then close the virtual environment. | 0 | 4,515 | true | 0 | 1 | How to configure the Jenkins ShiningPanda plugin Python Installations | 37,616,786 |
1 | 2 | 0 | 0 | 2 | 0 | 0 | 0 | I have a Blender file called Assets.blend containing over 100 objects for a game I'm developing in Unity.
When ever I make modifications, I run a script that exports each root object as a separate fbx file.
However I have no way of detecting which ones have been updated, so every time I have to re-export every single object even though I've only created/modified 1.
The time it takes to run the script is about 10 seconds, but then Unity detects the changes and spends over 30 seconds processing mostly unchanged prefabs.
How can I improve my script so that it knows which objects have been altered since the last export?
There does not appear to be any date_modified variable for objects or meshes. | 0 | python,unity3d,blender | 2016-05-20T19:16:00.000 | 0 | 37,354,292 | Another approach is to compute a CRC-like signature on meaningful values (mesh geometry, materials, whatever it is you change often) and store that somewhere (in each object as a custom property, for instance).
Then you can easily skip objects whose signatures did not change since last export. | 0 | 649 | false | 0 | 1 | Detect changes in Blender object for more efficient export script | 39,065,689 |
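A hedged sketch of that signature idea with Blender's bpy API: hash each mesh's vertex coordinates and compare against a custom property saved at the previous export (the property name and the choice to hash only geometry are assumptions; materials and the like could be folded into the hash the same way):

```python
import hashlib
import bpy

def mesh_signature(obj):
    # hash the vertex coordinates; quantized to avoid float-noise churn
    h = hashlib.md5()
    for v in obj.data.vertices:
        h.update(("%.6f %.6f %.6f" % tuple(v.co)).encode())
    return h.hexdigest()

for obj in bpy.context.scene.objects:
    if obj.type != 'MESH':
        continue
    sig = mesh_signature(obj)
    if obj.get("last_export_sig") != sig:    # changed since the last export
        obj["last_export_sig"] = sig
        # ... export this object to fbx here ...
```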
1 | 1 | 0 | 1 | 1 | 0 | 1.2 | 0 | I've installed python and added its path, "C:\Python27", to the system variables, but when typing "python" into powershell I get the error mentioned in the title. I also can't run it from cmd.
And yes, my python folder is in the C: directory. | python,installation | 2016-05-21T10:54:00.000 | 1 | 37,361,976 | From inside the Python folder, invoke the executable with the .\ prefix, for example:
PS C:\Python27> .\python.exe .\PRACTICE.py
Typing only python will not get recognized by Windows. | 0 | 4,380 | false | 0 | 1 | The term 'python' is not recognized as the name of a cmdlet | 47,179,582
1 | 1 | 0 | 3 | 2 | 1 | 1.2 | 0 | I'm working on transitioning a project from scons to autotools, since autotools seems to automatically generate a lot of features that are annoying to write in a SConscript (e.g. make uninstall).
The project is mostly c++ based, but also includes some modules that have been written in python. After a lot of reading about autotools, I can finally create a shared library, compile and link an executable against it, and install c++ header files. Lovely. Now comes the python part. By including AM_PATH_PYTHON in configure.ac, I'm also installing the python modules with Makefile.am files such as
autopy_PYTHON=autopy/__init__.py autopy/noindent.py autopy/auto.py
submoda_PYTHON=autopy/submoda/moda.py autopy/submoda/modb.py autopy/submoda/modc.py autopy/submoda/__init__.py
submodb_PYTHON=autopy/submodb/moda.py autopy/submodb/modb.py autopy/submodb/modc.py autopy/submodb/__init__.py
autopydir=$(pythondir)/autopy
submodadir=$(pythondir)/submoda
submodbdir=$(pythondir)/submodb
dist_bin_SCRIPTS=scripts/script1 scripts/script2 scripts/script3
This seems to place all my modules and scripts in the appropriate locations, but I wonder if this is "correct", because the way to install python modules seems to be through a setup.py script via distutils. I do have setup.py scripts inside the python modules and scons was invoking them until I marched in with autotools. Is one method preferred over the other? Should I still be using setup.py when I build with autotools? I'd like to understand how people usually resolve builds with c++ and python modules using autotools. I've got plenty of other autotools questions, but I'll save those for later. | 0 | python,c++,autotools,automake,distutils | 2016-05-21T22:00:00.000 | 1 | 37,368,441 | Based on your description, I would suggest that you have your project built using stock autotools-generated configure and Makefile, i.e. autoconf and automake, and have either your configure or your Makefile take care of executing your setup.py, in order to set up your Python bits.
I have a project that's mostly C/C++ code, together with a Perl module. This is very similar to what you're trying to do, except that it's Perl instead of Python.
In my Makefile (generated from Makefile.am) I have a target that executes the Perl module's Makefile.PL, which is analogous to Python's setup.py, and in that manner I build the Perl module together with the rest of the C++ code, seamlessly together, as a single build. Works fairly well.
automake's Makefile.am is very open-ended and flexible, and can be easily adapted and extended to incorporate foreign bits, like these. | 0 | 580 | true | 0 | 1 | Are there any disadvantages in using a Makefile.am over setup.py? | 37,369,042 |
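The answer above describes this for Perl's Makefile.PL; a hedged automake sketch of the same arrangement for Python might look like this (the directory layout and hook choice are assumptions; $(PYTHON) is provided by AM_PATH_PYTHON in configure.ac):

```
# Makefile.am fragment: delegate the Python package to its setup.py
all-local:
	cd python && $(PYTHON) setup.py build

install-exec-local:
	cd python && $(PYTHON) setup.py install --prefix=$(DESTDIR)$(prefix)

clean-local:
	cd python && $(PYTHON) setup.py clean --all
```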
1 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I practice TDD but I have not used mocking before.
Suppose I want to build a function that should create a folder, but only if that folder does not already exist. As part of my TDD cycle I first want to create a test to see that my function won’t delete an already existing folder.
As my function will probably use os.rm, I gather I could use mocking to see whether os.rm has been called or not. But this isn’t very satisfactory as there are many ways to delete folders. What if I change my function later on to use shutil.rmtree? os.rm would not have been called, but perhaps the function now incorrectly removes the folder.
Is it possible to use mocking in a way which is insensitive to the method? (without actually creating files on my machine and seeing whether they are deleted or not - what I have been doing until now) | 0 | python,unit-testing,mocking | 2016-05-24T06:46:00.000 | 0 | 37,406,227 | The problem of "mockism" is that tests bind your code to a particular implementation. Once you have decided to test for a particular function call you have to call (or not as in your example) that function in your production code.
As you have already noticed, there are plenty of ways to remove a directory (even by running rm -rf as an external process).
I think the way you are doing it already is the best: you check for the actual side-effect you are interested in, no matter how it was produced.
If you are wondering about performance, you may try to make that test optional, and run it less frequently than the rest of your test suite. | 0 | 695 | false | 0 | 1 | Python mocking delete | 37,407,120 |
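A sketch of such a side-effect test: it asserts on the filesystem state itself, so it keeps passing whether the implementation uses os functions, shutil, or an external process (ensure_folder is the hypothetical function under test):

```python
import os
import shutil
import tempfile
# from your_module import ensure_folder  # hypothetical function under test

def test_ensure_folder_preserves_existing_folder():
    root = tempfile.mkdtemp()
    try:
        target = os.path.join(root, "data")
        os.mkdir(target)
        marker = os.path.join(target, "keep.txt")
        open(marker, "w").close()

        ensure_folder(target)           # must not delete or recreate the folder

        assert os.path.isdir(target)
        assert os.path.exists(marker)   # contents survived
    finally:
        shutil.rmtree(root)
```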
1 | 2 | 0 | 1 | 3 | 0 | 0.099668 | 0 | I'm working on a project whose core I am not at liberty to discuss, but I have reached a stumbling block. I need data to be transferred from C++ to some other language, preferably Java or Python, in realtime (~10ms latency).
We have a sensor that HAS to be parsed in C++. We are planning on doing a data read/output through bluetooth, most likely Java or C# (I don't quite know C#, but it seems similar to Java). C++ will not fit the bill, since I do not feel advanced enough to use it for what we need. The sensor parsing is already finished. The data transferring will be happening on the same machine.
Here are the methods I've pondered:
We tried using MatLab with whatever the Mex stuff is (I don't do MatLab) to access functions from our C++ program, to retrieve the data as an array. Matlab will be too slow (we read somewhere that the TX/RX will be limited to 1-20 Hz.)
Writing the data to a text, or other equivalent raw data, file constantly, and opening it with the other language as necessary.
I attempted to look this up, but nothing of use showed up in the results. | java,python,c++,pipelining | 2016-05-26T23:39:00.000 | 1 | 37,472,688 | We had the same issue, where we had to share sensor data between one Java app and multiple other apps including Java, Python and R.
First we tried socket connections, but the socket communication was not fault tolerant. Restarting or failure in one app affected the others.
Then we tried RMI calls between them, but again we were unhappy due to scalability.
We wanted the system to be reliable, scalable, distributed and fault tolerant. So finally we started using RabbitMQ, where we created one producer and multiple consumers. It worked well for 2 years. You may also consider using Apache Kafka.
You have options like socket pipes, RMI calls, RabbitMQ, Kafka and Redis, based on your system requirements now and in the near future. | 0 | 150 | false | 0 | 1 | Pipelining or Otherwise Transferring Data Between Languages in Realtime | 37,495,749
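To make the producer/consumer setup concrete, a hedged sketch of the Python side using pika (a common RabbitMQ client, not named in the answer; the queue name and host are placeholders, and the C++ producer would publish raw frames to the same queue):

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="sensor_data")

def on_message(ch, method, properties, body):
    # body holds one raw sensor frame published by the producer
    print("received %d bytes" % len(body))

channel.basic_consume(queue="sensor_data",
                      on_message_callback=on_message,
                      auto_ack=True)
channel.start_consuming()
```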
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I'm looking for an efficient method to find all the roots of a function f on an interval [a,b].
The problem I have is that all the nice methods from scipy.optimize require either that f(a) and f(b) have different signs, or that I provide an initial guess x0, but I know nothing about my roots before running the code.
Note: The function f is smooth (at least C1), and doesn't have pathological behaviour [nothing like sin(1/x)]. However, it requires building a matrix A(x) and finding its eigenvalues, and is therefore time-consuming. It is expected to have between 0 and 10 roots on [a,b], whose positions are completely arbitrary. I can't afford to miss any of them (e.g. I can't take 100 initial guesses x0 and just hope that I'll catch all the roots).
I was thinking about implementing something like this:
Find all the extrema {m_1, m_2.., m_k} of f with scipy.optimize [maybe fmin, but I don't know which method is the most efficient]:
Search for a minimum m_1 starting from point a [initial guess for gradient algorithm]
Search for a maximum m_2 starting from point m_1 + dx [forcing the gradient algorithm to go forward]
Search for a minimum m_3...
If two consecutive extrema m_i and m_(i+1) have opposite signs, apply brentq on interval [m_i, m_(i+1)] to find a root.
Is there a better way of solving this problem?
If not, are fmin and brentq the best choices among the scipy.optimize library in order to minimize the number of calls to my function f? | 0 | python,optimization,scipy | 2016-05-28T21:51:00.000 | 0 | 37,504,035 | Depends on your function, but it might be possible to solve symbolically using SymPy. That would give all roots. It can find eigenvalues symbolically if necessary.
Finding all extrema is the same as finding all roots of the derivative of your function, so it won't be any easier than finding all roots (as WarrenWeckesser mentioned).
Finding all roots numerically will require using knowledge about the function. As a simple example, say you knew some minimum spacing between roots. You could try to recursively partition the interval and find roots in each. Stop after finding the maximum number of roots. But if the spacing is small, this could require many function evaluations (e.g. in the worst case when there are zero roots). The more constraints you can impose, the more you can cut down on function evaluations. | 1 | 1,874 | false | 0 | 1 | Finding multiple roots on an interval with Python | 37,506,002 |
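As a sketch of the partitioning idea (with the caveat from the answer: it silently misses pairs of roots closer together than the grid spacing), sample f on a grid and bracket each sign change with brentq:

```python
import numpy as np
from scipy.optimize import brentq

def find_roots(f, a, b, n=200):
    xs = np.linspace(a, b, n + 1)
    fs = np.array([f(x) for x in xs])       # n+1 calls to the expensive f
    roots = [xs[i] for i in range(n + 1) if fs[i] == 0.0]
    for i in range(n):
        if fs[i] * fs[i + 1] < 0:           # sign change brackets a root
            roots.append(brentq(f, xs[i], xs[i + 1]))
    return sorted(roots)
```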
1 | 1 | 0 | 3 | 1 | 0 | 1.2 | 0 | I'm new to Tornado, and working on a project that involves some rather complex routing. In most of the other frameworks I've used I've been able to isolate routing for testing, without spinning up a server or doing anything terribly complex. I'd prefer to use pytest as my testing framework, but I'm not sure it matters.
Is there a way to, say, create my project's instance of tornado.web.Application, and pass it arbitrary paths and assert which RequestHandler will be invoked based on that path? | 0 | python,python-3.x,tornado,pytest | 2016-05-28T23:09:00.000 | 1 | 37,504,566 | No, it is not currently possible to test this in Tornado via any public interface (as of Tornado version 4.3).
It's straightforward to avoid spinning up a server, although it requires a nontrivial amount of code: the interface between HTTPServer and Application is well-defined and documented. The trickier part is the other side: there is no supported way to determine which handler will be invoked before that handler is invoked.
I generally recommend testing routing via end-to-end tests for this reason. You could also store your URL route list before passing it into Tornado, and do your tests against that - the internal logic of "take the first regex match" is pretty easy to replicate. | 0 | 129 | true | 1 | 1 | Route testing with Tornado | 37,504,714 |
2 | 5 | 0 | 1 | 10 | 1 | 0.039979 | 0 | I have been using the gTTS module for python 3.4 to make mp3 files of spoken text. It has been working, but all of the speech is in a certain adult female voice. Is there a way to customize the voice that gTTS reads the text in? | python,python-3.x,text-to-speech,google-text-to-speech | 2016-06-02T19:06:00.000 | 0 | 37,600,197 | It may be possible to pass the gTTS output through Audacity and apply a change to a male-sounding voice. The gTTS I have just got going has a very good female voice, but the engine fails to read well on long sentences or unexpected words. Still, it's the best I have found for free so far, and it is actually better than all the other free ones, and a good deal of the paid ones. I just had to work out the py scripts and how to use Python, and later learned that Anaconda is a miracle cure for what ails you. It got my system's terminal and pip to install gTTS properly, which I could not do prior to Anaconda. Scripts people made for Python 3+ now run without the errors I got when trying to run them in the default py v2.7. The terminal is now env 3.6.8, but all the old py scripts still run fine. | 0 | 35,163 | false | 0 | 1 | Custom Python gTTS voice | 57,925,186
2 | 5 | 0 | 2 | 10 | 1 | 0.07983 | 0 | I have been using the gTTS module for python 3.4 to make mp3 files of spoken text. It has been working, but all of the speech is in a certain adult female voice. Is there a way to customize the voice that gTTS reads the text in? | 0 | python,python-3.x,text-to-speech,google-text-to-speech | 2016-06-02T19:06:00.000 | 0 | 37,600,197 | If you call "gtts-cli --all" from a command prompt, you can see that gTTS actually supports a lot of voices. However, you can only change the accents, and not the gender. | 0 | 35,163 | false | 0 | 1 | Custom Python gTTS voice | 64,368,114 |
1 | 1 | 0 | 0 | 2 | 0 | 0 | 0 | I'm trying to figure out how Gevent works with respect to other asynchronous frameworks in python, like Twisted.
The key difference between Gevent and Twisted is that Gevent uses greenlets and monkey patching the standard library for an implicit behavior and a synchronous programming model whereas Twisted requires specific libraries and callbacks for an explicit behavior. The event loop in Gevent is libev/libevent, which is written in C, and the event loop in Twisted is the reactor, which is written in python.
Is there anything special about libev/libevent that allows for this implicit behavior? Why not use an event loop written in Python? Conversely, why isn't Twisted using libev/libevent? Is there any particular reason? Maybe it was simply a design choice and could have gone either way...
Theoretically, can Gevent's libev be replaced with another event loop, written in python, like Twisted's reactor? And can Twisted's reactor be replaced with libev? | 0 | python,events,asynchronous,twisted,gevent | 2016-06-04T10:52:00.000 | 1 | 37,629,312 | Short answer: Twisted is a network framework. Gevent tries to act as a library without requiring from the programmer to change the way he programs. That's their focus.. and not so much how that is achieved under the hood.
Long answer:
All asyncio libraries (Gevent, Asyncio, etc.) work pretty much the same:
Have a main loop running endlessly on a single thread.
When an event occurs, it's captured by the main loop.
The main loop decides based on different rules (scheduling) if it should continue checking for events or switch temporarily and give control to any subscriber functions to the event.
greenlet is a different library. It's very simple in that it just changes the order in which Python code is run and lets you jump back and forth between functions. Gevent uses it under the hood to implement its async features.
asyncio, which comes with Python3, is like gevent. The big difference is the interface again. It requires the programmer to mark functions with async and allows him to explicitly wait for a subscribed function in the main loop with await.
Gevent is like asyncio. But instead of the keywords it patches existing code where appropriate. It uses greenlet under the hood to switch between main loop and subscribed functions and make it all work seamlessly.
Twisted as mentioned feels more like a framework than a library. It requires the programmer to follow very specific ways to achieve concurrency. Again though it has a main loop under the hood called reactor like everything else.
Back to your initial question: You can in theory replace the reactor with any loop (including gevent). But that would defeat the purpose. Probably Twisted's team decided to use their own version of a main loop for optimisation reasons. All these libraries use different scheduling in their main loops to meet their needs. | 0 | 318 | false | 0 | 1 | Gevent's libev, and Twisted's reactor | 71,033,066 |
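A minimal demonstration of the greenlet primitive described above (this explicit switching is exactly what gevent hides behind its patched blocking calls):

```python
from greenlet import greenlet

def task_a():
    print("A: start")
    gb.switch()            # jump into task_b
    print("A: resumed")

def task_b():
    print("B: running")
    ga.switch()            # jump back into task_a

ga = greenlet(task_a)
gb = greenlet(task_b)
ga.switch()                # prints: A: start / B: running / A: resumed
```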
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I am looking for a way to delete elements from abaqus inp. The analysis type is dynamic explicit and elements are S4R.
I should note that the elements to be deleted are updated in a matlab optimization cycle.
Is there any way other than using a VUMAT subroutine? (Even python scripting is preferred.)
Any idea will be appreciated. | python,abaqus | 2016-06-04T12:08:00.000 | 0 | 37,630,059 | Thank you for the reply. I tried your advice: I deleted the elements that I wanted removed from the *.inp file. But when I imported the *.inp, ABAQUS did not accept the file and gave an error.
What I understood from your answer is that if I could not change the *.inp file manually, it would not be possible to make the change with python.
Excuse me if I did not explain clearly. I think it is better to ask my question this way:
I have an inp file containing a crash box simulation with a dynamic explicit analysis applied to it.
I want this tube to have some void elements before the analysis, so I should manipulate the *.inp file. (This FE model will be used for topology optimization purposes in matlab.) | 0 | 430 | false | 0 | 1 | changing inp by deleting element | 37,686,072
1 | 1 | 0 | 0 | 1 | 1 | 1.2 | 0 | I am using Ubuntu 14.04
I have installed a module pymavlink using sudo pip install pymavlink
Now when I run code like
python code.py
it says no module named pymavlink, but when I run it as
sudo python code.py
it works fine. I don't understand what the problem is without sudo.
Also i have Python 2.7 and python 3 installed as they both came with Ubuntu.
Can someone please let me know the fix for this. | python,python-2.7 | 2016-06-05T13:27:00.000 | 0 | 37,642,462 | I found the solution to the problem: it was a permission issue. The normal user didn't have permission to execute the command, so I added it by running
sudo chmod 775 python2.7
and the same inside for its subfolders as well:
sudo chmod 775 *
and now it's working fine; I can import everything I install from pip or sudo pip | 0 | 1,996 | true | 0 | 1 | python module import error without sudo | 37,642,876
The code in crontab is 0 * * * * cd /home/scrapy/foo/ && scrapy crawl foo >> /var/log/foo.log
It failed to run the crawl, as there was no log in my log file.
I tested using 0 * * * * cd /home/scrapy/foo/ && pwd >> /var/log/foo.log, and it echoed '/home/scrapy/foo' in the log.
I also tried PATH=/usr/local/bin and PATH=/usr/bin, but with no success.
I'm able to run it manually by typing cd /home/scrapy/foo/ && scrapy crawl foo on the command line.
Any thoughts? Thanks. | 0 | python,cron,scrapy,crontab | 2016-06-07T05:06:00.000 | 0 | 37,670,895 | Problem solved. Rather than running the crawl as root, use crontab -u user -e to create a crontab for user, and run as user. | 0 | 108 | true | 1 | 1 | cron couldn't run Scrapy | 37,671,502 |
1 | 1 | 0 | 0 | 0 | 1 | 1.2 | 0 | I am multiplying many large numbers and finally taking modulo of it. To optimise this I am using MOD at each step. But I also want the 1st digit of the final answer. Is there any way to know that even after using MOD?
Or is there any other efficient way to do huge multiplication many times, get the final answer and extract the 1st digit from it?
The order of the elements is 10^9 and the number of multiplications is about 10^5 | python,math,multiplication | 2016-06-08T12:52:00.000 | 0 | 37,703,061 | Take base-10 logarithms, sum them up, and take the fractional part of the sum.
Think about the scientific notation of large numbers. | 0 | 239 | true | 0 | 1 | 1st digit before taking modulo(10**9 + 7) | 37,703,548
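A sketch of that logarithm trick: the fractional part of the summed base-10 logs fixes the leading digit, independently of the modular product (math.fsum keeps the sum accurate over ~10^5 terms):

```python
import math

def leading_digit(numbers):
    frac = math.fsum(math.log10(n) for n in numbers) % 1.0
    return int(10 ** frac)          # 10**frac is in [1, 10); its integer part
                                    # is the first digit of the full product
print(leading_digit([123456789, 987654321, 555555555]))   # prints 6
```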
1 | 3 | 0 | 0 | 0 | 1 | 0 | 0 | Issue:
When calling the function time.time() I notice it jumping about 30 seconds after reboot. By jumping I mean it changes its return value by about 40 seconds instantly.
Setup:
I am running my script on a Raspberry Pi 3B, immeadiately after reboot. The issue does not occur when ran later.
Question:
Why does that occur? I suspect the Raspberry of changing its System clock at some point after reboot through WiFi. May that be the issue? I do not think posting code is helpful, as it really is a question related to the time.time() function. | 0 | python,time,raspberry-pi,clock | 2016-06-08T15:11:00.000 | 0 | 37,706,457 | Check wheather it ajusts clock from external source like NPT. | 0 | 360 | false | 0 | 1 | time.time() values jumping | 37,706,564 |
3 | 3 | 0 | 2 | 3 | 0 | 0.132549 | 0 | I am writing a Python program which uses an API Key to access data from an external service. The program makes a call to this service with the key hard coded in my Python script. Is there a method of somewhat protecting this key (not something that is irreversible, but a method preventing people copying the key straight out of the script)? Should the program request the key from my server? Perhaps a C library? | python,security,obfuscation,api-key | 2016-06-09T11:23:00.000 | 0 | 37,724,590 | I do not know the architecture of your code; maybe you can use the following architecture:
A -> B -> C
A is your client code; A submits a request to B
B is proxy code on your private server, with the api_key already embedded in it; B forwards A's request, together with the api_key, to C
C is the external service
Then the client code never holds the api key; the key stays on your private server. This is something like the proxy design pattern.
Of course, if you do not want something so complex, you can just use py2exe to package your Python code into an exe; this is another option, FYI. | 0 | 4,586 | false | 0 | 1 | Protecting an API key in Python | 37,724,974
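A minimal standard-library sketch of the B proxy described above; the key, URL, and port are all assumptions for illustration:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

API_KEY = "secret-stored-only-on-the-server"      # hypothetical
EXTERNAL_URL = "https://api.example.com/data"     # hypothetical service C

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the client's request to C, attaching the key server-side,
        # so the key never ships with the client code.
        req = Request(EXTERNAL_URL,
                      headers={"Authorization": "Bearer " + API_KEY})
        with urlopen(req) as resp:
            body = resp.read()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8000), ProxyHandler).serve_forever()
```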
3 | 3 | 0 | 2 | 3 | 0 | 1.2 | 0 | I am writing a Python program which uses an API Key to access data from an external service. The program makes a call to this service with the key hard coded in my Python script. Is there a method of somewhat protecting this key (not something that is irreversible, but a method preventing people copying the key straight out of the script)? Should the program request the key from my server? Perhaps a C library? | 0 | python,security,obfuscation,api-key | 2016-06-09T11:23:00.000 | 0 | 37,724,590 | If you want to protect yourself from "Hackers", it is impossible, since if python script has access to your API, then this same script can be modified to do nasty things with the access it possesses. You will have to find another solution there.
If you want to protect yourself from "shoulder surfers" (people who look at your monitor while they pass by), then base64.b64encode("key") and base64.b64decode("a2V5") should be enough (note that "key" encodes to "a2V5" with no padding characters). | 0 | 4,586 | true | 0 | 1 | Protecting an API key in Python | 37,724,867
3 | 3 | 0 | 2 | 3 | 0 | 0.132549 | 0 | I am writing a Python program which uses an API Key to access data from an external service. The program makes a call to this service with the key hard coded in my Python script. Is there a method of somewhat protecting this key (not something that is irreversible, but a method preventing people copying the key straight out of the script)? Should the program request the key from my server? Perhaps a C library? | 0 | python,security,obfuscation,api-key | 2016-06-09T11:23:00.000 | 0 | 37,724,590 | Should the program request the key from my server?
Even then a highly motivated (or skilled, or both...) user will be able to get the key by sniffing tools such as Wireshark (if you aren't using https), or even by modifying your script by simply adding a print somewhere. | 0 | 4,586 | false | 0 | 1 | Protecting an API key in Python | 37,724,762 |
1 | 2 | 0 | -2 | 3 | 1 | -0.197375 | 0 | I've been using IDLE with my raspberry for a while, it's nice at the beginning, but Pycharm provides lots more of features and I'm used to them since I've been also using Android Studio.
The problem is I couldn't figure out how to install the RPi module to control the pins of my Raspberry. Does anyone know how to do this?
In case it matters, it's python3 on a raspberry 2B. | 0 | python,raspberry-pi,pycharm | 2016-06-09T17:48:00.000 | 0 | 37,732,918 | You can run Pycharm directly on your Raspberry Pi:
- Using your Raspberry Pi, download the installation file directly from the Pycharm website (JetBrains). It will be a tarball, i.e., a file ending in ".tar.gz".
- Extract the file to a folder of your choice.
- Browsing through the extracted files and folders, you will find a folder named "bin". Inside "bin" you will find a file named Pycharm.sh
- Using your terminal window, go to the "bin" folder and launch the Pycharm application by typing: sudo ./Pycharm.sh
After several seconds (it's a little slow to load on my RPi3), Pycharm will load. Have fun! | 0 | 18,522 | false | 0 | 1 | Install RPi module on Pycharm | 40,196,547 |
1 | 2 | 0 | -1 | 1 | 1 | -0.099668 | 0 | I'm in the process of trying to engineer a data structure for a game engine, and allow a scripting language to grab data from it. Due to some limitations of design, the data would need to be stored on the C++ side of the program in a database like structure. Main reason being that I'm not sure if Python's serialization base can compensate for modders suddenly adding and removing data fields.
I am wondering if it is possible to call a python script, and have it act as its own object with its own data? If not, can you instantiate a python class from C++ without knowing the class's name until runtime? | 0 | python,c++,database,game-engine | 2016-06-10T05:16:00.000 | 0 | 37,740,394 | I never worked with Python, but I think this is one of the main features of any programming/scripting language: you can call a function multiple times, each call with its own instance data, as many times as you need. | 0 | 1,053 | false | 0 | 1 | Can Python run multiple instances of a script with each instance containing it's own data? | 37,740,518
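On the "instantiate a class whose name is only known at runtime" part of the question, the Python side can be done with importlib; a sketch (mod_name and cls_name would come from your C++ engine, so they are hypothetical here):

```python
import importlib

def make_instance(mod_name, cls_name, *args):
    # Look the class up by name at runtime, then instantiate it.
    cls = getattr(importlib.import_module(mod_name), cls_name)
    return cls(*args)

# e.g. obj = make_instance("collections", "OrderedDict")
```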
1 | 2 | 0 | 2 | 1 | 1 | 0.197375 | 0 | I have the following scenario:
given a Python application on some client machine which enables several users. It encrypts and decrypts user passwords. What would be the currently most recommended approach?
Attempts of using PyNaCl lead to the insight that it is not a good approach due to the fact that PyNaCl is used for communication encryption and decryption. Here we have passwords which shall be encrypted, stored to a file, and then decrypted on request (e.g. if a specific user wants to re-login). Storing the passwords in a database is for our current experiment not an option (although it would be possibly a better solution).
According to your experiences: what would be a good way to approach this issue of encrypting and decrypting user data from e.g. text files? (Again: this is experimental and not meant for productive use in the current stage) | 0 | python,encryption,passwords | 2016-06-10T16:47:00.000 | 0 | 37,753,380 | PyNaCl supports multiple types of crypto primitives, but a password hashing scheme is none of them.
Using encryption for password storage is an anti-pattern. Where is the key stored to decrypt the encrypted password? If the key is stored somewhere in code or in some file in the file system, then the whole thing is nothing more than obfuscation. What if the key is lost? An attacker can directly decrypt the password and log in.
I'm assuming here that users don't actually type in keys, but rather passwords. If they would type in keys, then those keys could be used directly for PyNaCl encryption.
Instead, passwords should be hashed repeatedly and the hash stored. If a user tries to log in again, the password is hashed again with the same parameters (salt, iteration count, cost factor) and compared to the stored value. This is how it commonly solved in client-server applications, but it is not necessary to store the password hash anywhere, because PyNaCl's symmetric encryption also provides authentication (integrity). It means that you can detect a wrong password by deriving a key from that and attempting to decrypt the container. The password was wrong when PyNaCl produces an error (or the container was tampered with).
There are multiple schemes (PBKDF2, bcrypt, scrypt, Argon2) that can be used for this purpose, but none of them are included in PyNaCl, although the underlying libsodium supports two of them. | 0 | 665 | false | 0 | 1 | How to handle password management via PyNaCl? | 37,772,728
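A sketch of the hash-and-compare scheme using only the standard library; the parameters here are illustrative, not a vetted security recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password, iterations=100000):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest          # store all three, never the password

def verify_password(password, salt, iterations, stored_digest):
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, stored_digest)  # constant-time compare

salt, iters, stored = hash_password("hunter2")
print(verify_password("hunter2", salt, iters, stored))  # True
```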
1 | 2 | 0 | 1 | 1 | 1 | 0.099668 | 0 | I am playing with IDLE. But it seems that intellisense in IDLE is a bit slow. When we type time. I need to wait a second or more for the intellisense to appear. What is the reason for this? I have heard that IDLE is developed in Python itself and that Python is a bit slower than other languages (slower but not notably so).
Now, is the slowness of Python the reason? | 0 | python,python-idle | 2016-06-15T04:08:00.000 | 0 | 37,825,965 | You did not specify the exact release you are using, but currently (Since about Sept 2014), IDLE makes changing the popup delay easy. Select Options and Configure Extensions if you see that choice. Otherwise select Configure IDLE and then the Extensions tab (since Fall 2015). In either case, select AutoComplete and change the popupwait. I happen to have reset to to 0 for myself. I think 2 seconds is too long, but changing the default is problematical. | 0 | 273 | false | 0 | 1 | Intellisence in IDLE is slow. Is the slowness of python the reason? | 37,849,395 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 1 | I am using Selenium with Python and I would like to speed up my tests, let's say 5 tests simultaneously. How can I achieve that on a single machine with the help of Selenium Grid? | 0 | python,selenium | 2016-06-15T04:08:00.000 | 0 | 37,825,965 | You won't need a Selenium Grid for this. The Grid is used to distribute the test execution across multiple machines. Since you're only using one machine you don't need to use it.
You are running tests so I'm assuming you are using a test framework. You should do some research on how you can run tests in parallel using this framework.
There will probably also be a way to execute a function before test execution. In this function you can start the driver.
I'd be happy to give you a more detailed answer but your question is lacking the framework you are using to run the tests. | 0 | 303 | false | 0 | 1 | How to Speed up Test Execution Using Selenium Grid on single machine | 37,828,643 |
1 | 2 | 0 | 0 | 2 | 1 | 0 | 0 | I'd like to read the folder and file structure inside a specified folder path on the P4 depot without syncing it. Is it possible? | 0 | python,python-2.7,perforce,p4python | 2016-06-15T14:37:00.000 | 0 | 37,838,526 | Note that recursively iterating through a directory tree with repeated Dirs and Files calls is inefficient if you're planning to populate the entire tree.
If you need file info for all files under a directory, including its children, it's orders of magnitude faster to just issue the "files" command over the entire tree (i.e. path/... as opposed to path/*).
I suspect this is because the P4 server has no concept of directories, internally. A file's "directory" in P4 is just the last path-separated token in the file's path. So, it has to do extra work to slice its file set into a directory-specific list. | 0 | 2,045 | false | 0 | 1 | How to read depot's folders structure by p4python without syncing? | 56,567,412 |
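A sketch of the single-call approach with P4Python (the connection settings and depot path are assumptions):

```python
from P4 import P4

p4 = P4()            # picks up P4PORT/P4USER from the environment
p4.connect()
try:
    # One "files" call over the whole subtree instead of walking directories.
    for f in p4.run("files", "//depot/some/path/..."):   # hypothetical path
        print(f["depotFile"], f["rev"])
finally:
    p4.disconnect()
```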
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | So I am creating a program that takes input, processes the data, then puts it in Excel. In order to do this, I am using the "xlwt" package (and possibly xlrd). How do I then give this program to other people without making them download python and the packages associated with my program? I considered utilizing an online python interpreter and giving the username/password to my coworkers, but xlwt isn't on any of the ones I've tried, and they don't offer a way (that I can see) to download new packages. | 0 | python,python-3.5 | 2016-06-15T18:08:00.000 | 0 | 37,842,741 | You would have to compile the code into an exe file. The py2exe library can help you out with this | 0 | 32 | false | 0 | 1 | Universalizing my program/Making it accessible to other users | 37,842,951 |
1 | 2 | 0 | 2 | 3 | 1 | 1.2 | 0 | I've created a genetic programming system in Python, but am having troubles related to memory limits. The problem is with storing all of the individuals in my population in memory. Currently, I store all individuals in memory, then reproduce the next generation's population, which then gets stored in to memory. This means that I have two populations worth of individuals loaded in memory. After some testing, I've found that I exceed the default 2GB application memory size for Windows fairly quickly.
Currently, I write out the entire population's individual trees to a file, which I can then load and recreate the population if I want. What I have been considering is instead of having all of the individuals loaded in memory, access individual information by pulling the individual from the file and only instantiating that single individual. From my understanding of Python's readline functionality, it should only load a single line from the file at a time, instead of the entire file. If I did this, I think I would be able to only store in memory the individuals that I was currently manipulating.
My question is, is there an underlying problem with doing this that I'm not seeing right now? I understand that because I am dealing with data on disk instead of in memory my performance is going to take a hit, but for this situation memory is more important than speed. Also I don't want to increase the allotted 2GB of memory given to Python programs.
Thanks! | 0 | python,memory,memory-management,genetic-programming | 2016-06-16T05:50:00.000 | 0 | 37,850,869 | Given the RAM constraint, I'd change the population model from generational to steady state.
The idea is to iteratively breed a new child or two, assess their fitness and then reintroduce them directly into the population itself, killing off some preexisting individuals to make room for them.
Steady state uses half the memory of a traditional genetic algorithm because there is only one population at a time.
Changing the implementation shouldn't be too hard, but you have to pay attention to premature convergence (i.e. tweak parameters like mutation rate, tournament size...).
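A minimal sketch of one steady-state iteration (fitness and breed stand in for your existing GP operators):

```python
import random

def steady_state_step(population, fitness, breed, k=3):
    # Tournament selection for two parents.
    parents = [max(random.sample(population, k), key=fitness)
               for _ in range(2)]
    child = breed(parents[0], parents[1])
    # The child replaces the loser of another tournament, in place,
    # so only one population ever exists in memory.
    loser = min(random.sample(range(len(population)), k),
                key=lambda i: fitness(population[i]))
    population[loser] = child
```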
The island model is another / additional possibility: population is broken into separate sub-populations (demes). Demes send individuals to one another to help spread news of newly-discovered fit areas of the space.
Usually it's an asynchronous mechanism, but you could use a synchronous algorithm, loading demes one by one, with a great reduction of the required memory resources.
Of course you can write the population to a file and you can load just the needed individuals. If you choose this approach, it's probably a good idea to compute a hash signature of individuals to optimize the identification / loading speed.
Anyway you should consider that, depending on the task your GP system is performing, you could register a massive performance hit. | 0 | 132 | true | 0 | 1 | Storing objects in file instead of in memory | 37,853,966 |
2 | 4 | 0 | -1 | 17 | 1 | -0.049958 | 0 | I do things mostly in C++, where the destructor method is really meant for destruction of an acquired resource. Recently I started with python (which is really a fun and fantastic), and I came to learn it has GC like java.
Thus, there is no heavy emphasis on object ownership (construction and destruction).
As far as I've learned, the __init__() method makes more sense to me in python than it does for ruby too, but the __del__() method, do we really need to implement this built-in function in our class? Will my class lack something if I miss __del__()? The one scenario I could see __del__() useful is, if I want to log something when destroying an object. Is there anything other than this? | 0 | python | 2016-06-16T07:27:00.000 | 0 | 37,852,560 | It is so uncommon that I only learned about it today (and I've been into Python for a long time).
Memory is deallocated, files closed, ... by the GC. But you could need to perform some task with effects outside of the class.
My use case is about implementing some sort of RAII for some temporary directories: I'd like them to be removed no matter what.
Instead of removing them after the processing (which, after some change, was no longer run), I've moved the cleanup into the __del__ method, and it works as expected.
This is a very specific case, where we don't really care about when the method is called, as long as it's called before leaving the program. So, use with care. | 0 | 22,561 | false | 0 | 1 | Is __del__ really a destructor? | 50,852,984 |
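A minimal sketch of that pattern (in new code a context manager or tempfile.TemporaryDirectory is usually safer, since __del__ timing is not guaranteed):

```python
import shutil
import tempfile

class TempWorkspace:
    def __init__(self):
        self.path = tempfile.mkdtemp()

    def __del__(self):
        # Best-effort cleanup when the object is garbage collected.
        shutil.rmtree(self.path, ignore_errors=True)
```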
2 | 4 | 0 | 2 | 17 | 1 | 0.099668 | 0 | I do things mostly in C++, where the destructor method is really meant for destruction of an acquired resource. Recently I started with python (which is really a fun and fantastic), and I came to learn it has GC like java.
Thus, there is no heavy emphasis on object ownership (construction and destruction).
As far as I've learned, the __init__() method makes more sense to me in python than it does for ruby too, but the __del__() method, do we really need to implement this built-in function in our class? Will my class lack something if I miss __del__()? The one scenario I could see __del__() useful is, if I want to log something when destroying an object. Is there anything other than this? | 0 | python | 2016-06-16T07:27:00.000 | 0 | 37,852,560 | Is __del__ really a destructor?
No, the __del__ method is not a destructor; it is just a normal method you can call whenever you want, which additionally gets invoked before the garbage collector destroys the object.
Think of it like a cleanup or last-will method. | 0 | 22,561 | false | 0 | 1 | Is __del__ really a destructor? | 37,852,707
1 | 2 | 0 | 0 | 2 | 0 | 1.2 | 1 | So I've been doing a lot of work with Tweepy and Twitter data mining, and one of the things I want to do is to be able to get all Tweets that are replies to a particular Tweet. I've seen the Search api, but I'm not sure how to use it nor how to search specifically for Tweets in reply to a specific Tweet. Anyone have any ideas? Thanks all. | 0 | python,search,twitter,tweepy | 2016-06-18T12:39:00.000 | 0 | 37,897,064 | I've created a workaround that kind of works. The best way to do it is to search for mentions of a user, then filter those mentions by their in_reply_to_status_id attribute. | 0 | 3,634 | true | 0 | 1 | Tweepy Get Tweets in reply to a particular tweet | 37,902,045
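A sketch of that workaround with Tweepy; setup of the api object is omitted, username and tweet_id are placeholders, and note that older Tweepy releases expose this search as api.search while newer ones call it search_tweets:

```python
import tweepy

replies = []
for status in tweepy.Cursor(api.search, q="to:" + username,
                            since_id=tweet_id).items():
    # Keep only statuses that directly reply to the tweet of interest.
    if status.in_reply_to_status_id == tweet_id:
        replies.append(status)
```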
2 | 3 | 0 | 2 | 1 | 0 | 0.132549 | 0 | I am making a Simple Python Bot which can be run like python file.py . I created a Folder in my PC having 3 files file.py list.txt Procfile . In Procfile i wrote worker: python file.py , I choosed worker as it a Command Line application and my plan is to run that Python File forever on the server. Than i did git init , heroku git:remote -a py-bot-xyz where py-bot-xyz is the application which i created in My Heroku Dashboard and than git add ., git commit -am "make it better" & finally git push heroku master .
That's where the error occurs, that prints out
remote: Compressing source files... done.
remote: Building source:
remote:
remote:
remote: ! Push rejected, no Cedar-supported app detected
remote: HINT: This occurs when Heroku cannot detect the buildpack
remote: to use for this application automatically.
remote: See https://devcenter.heroku.com/articles/buildpacks
remote:
remote: Verifying deploy....
remote:
remote: ! Push rejected to py-bot-xyz.
remote:
To https://git.heroku.com/py-bot-xyz.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'https://git.heroku.com/py-bot-xyz.git'
Now, when I go to Heroku's Dashboard it shows Build Failed in Activity. What can I do now? :((( | 0 | python,git,heroku,deployment | 2016-06-18T16:33:00.000 | 0 | 37,899,247 | To successfully push Python code to Heroku you need a requirements.txt and a Procfile. Go to your project folder in the terminal/command line and enter the following commands, which will generate the necessary files; commit them and the push should work.
pip freeze > requirements.txt (you might need to install pip first, if you're using an older Python version)
echo "worker: python yourfile.py" > Procfile (worker could be replaced with web if it's a website) | 0 | 4,397 | false | 0 | 1 | Heroku Python Remote Rejected error | 44,854,965
2 | 3 | 0 | 0 | 1 | 0 | 0 | 0 | I am making a Simple Python Bot which can be run like python file.py . I created a Folder in my PC having 3 files file.py list.txt Procfile . In Procfile i wrote worker: python file.py , I choosed worker as it a Command Line application and my plan is to run that Python File forever on the server. Than i did git init , heroku git:remote -a py-bot-xyz where py-bot-xyz is the application which i created in My Heroku Dashboard and than git add ., git commit -am "make it better" & finally git push heroku master .
That's where the error occurs, that prints out
remote: Compressing source files... done.
remote: Building source:
remote:
remote:
remote: ! Push rejected, no Cedar-supported app detected
remote: HINT: This occurs when Heroku cannot detect the buildpack
remote: to use for this application automatically.
remote: See https://devcenter.heroku.com/articles/buildpacks
remote:
remote: Verifying deploy....
remote:
remote: ! Push rejected to py-bot-xyz.
remote:
To https://git.heroku.com/py-bot-xyz.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'https://git.heroku.com/py-bot-xyz.git'
Now, when I go to Heroku's Dashboard it shows Build Failed in Activity. What can I do now? :((( | 0 | python,git,heroku,deployment | 2016-06-18T16:33:00.000 | 0 | 37,899,247 | I was facing the same kind of error, as I am new to this area.
I had used "requirement.txt" instead of "requirements.txt".
Watch out for the exact spelling. | 0 | 4,397 | false | 0 | 1 | Heroku Python Remote Rejected error | 55,756,831
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I have a Python program I run at all times of the day that alerts me when something I am looking for online is posted. I want to give this to my employees but I only want to have it email them during business hours.
I have a Ubuntu server and use a .sh. I have a command in crontab that runs on startup.
How do I make my command run from 9-5? | 0 | python,bash,crontab,ubuntu-server | 2016-06-21T17:55:00.000 | 0 | 37,951,368 | You could create a cronjob that starts the script every 5 minutes (or however often you want it to run) and restrict its schedule to business hours, since cron entries support hour ranges and weekday fields. Additionally, modify the script so that it creates a .lock file which it removes on exiting, and does nothing if the file already exists at startup (this way you don't have a long-running script active multiple times). | 0 | 191 | false | 0 | 1 | How do I have a bash script continually run during business hours? | 37,951,458
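If the alerting logic itself is in Python, the time window and the lock can also be checked inside the script; a sketch (the lock path, hours, and run_checks_and_email are all assumptions):

```python
import os
import sys
from datetime import datetime

LOCK = "/tmp/alert_bot.lock"      # hypothetical lock file
now = datetime.now()

if not (now.weekday() < 5 and 9 <= now.hour < 17):
    sys.exit(0)                   # outside Mon-Fri 9-5: do nothing
if os.path.exists(LOCK):
    sys.exit(0)                   # a previous run is still active

open(LOCK, "w").close()
try:
    run_checks_and_email()        # hypothetical: your existing alerting code
finally:
    os.remove(LOCK)
```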
1 | 1 | 0 | 2 | 0 | 0 | 0.379949 | 0 | I've been learning Python for home automation for a few months now and want to learn C# for building apps.
My python file is turning devices on and off automatically. Now I want to make an app that can read this python file and see if a device is on or off (lamp=0 or lamp=1). For this it must read a variable from the Python script.
Next I want to turn the device on or off on my mobile and with this action also change the variable in the script.
Is this all possible without making a text file for the status or using ironpython?
I've read many Stack Overflow questions about this, but all of them were using one device, and most were using IronPython. If there is any good documentation about this subject I would be happy to receive it, since I can't find any.
Thanks | 0 | c#,python | 2016-06-22T15:04:00.000 | 0 | 37,971,864 | I think the easiest way to do this would be to store the data in a shared resource of some kind. Perhaps your python script could store values in a database and your C# application could refer to the database to retrieve the state of your bulbs, switches, etc. | 0 | 120 | false | 0 | 1 | Is it possible to Pass values between C# and python and edit them both ways. | 37,971,906 |
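A sketch of the shared-resource idea using SQLite, which both Python and C# can open; the file path and table layout are assumptions:

```python
import sqlite3

conn = sqlite3.connect("/home/pi/automation.db")   # hypothetical shared file
conn.execute("CREATE TABLE IF NOT EXISTS devices "
             "(name TEXT PRIMARY KEY, state INTEGER)")

def set_state(name, state):
    conn.execute("INSERT OR REPLACE INTO devices VALUES (?, ?)", (name, state))
    conn.commit()

def get_state(name):
    row = conn.execute("SELECT state FROM devices WHERE name = ?",
                       (name,)).fetchone()
    return row[0] if row else None

set_state("lamp", 1)        # the Python side writes...
print(get_state("lamp"))    # ...and the C# app reads the same database
```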
1 | 1 | 0 | 1 | 2 | 1 | 0.197375 | 0 | The Boost.python module provides an easy way of binding C/C++ code into Python. However, most tutorials assume that bjam is used to compile this module. I was wondering: if I do not compile this module, can I still use it? What I mean by "do not compile this module" is including all the source files of Boost.python in my current project. I did it for other modules from Boost. For example, with the Boost.filesystem module, I just include all the files from this module and compile them with the code I have written. Thanks. | 0 | python,c++,boost | 2016-06-22T16:36:00.000 | 0 | 37,973,803 | Yes, absolutely, it's a library like any other.
I always use it with CMake, but anything will do. You need to
Add to include paths the location of the boost headers.
Add to include paths the location of python headers (usually installed with Python, location depends on OS)
Link with the appropriate boost.python library (e.g. in my case it's boost_python-vc120-mt-1_58.lib or boost_python-vc120-mt-gd-1_58.lib, again depends on version/os/toolkit) | 0 | 305 | false | 0 | 1 | Can I compile boost.python module without bjam? | 37,979,379 |
1 | 1 | 0 | 1 | 2 | 0 | 0.197375 | 0 | In a server, I have a pyramid app running and I plan to have another tornado app running in parallel in the same machine. Let's say the pyramid app is available in www.abcd.com, then the tornado app will be available in www.abcd.com:9000.
I want only the authenticated users in the pyramid webapp to be able to access the tornado webapp.
My guess is somehow using cookie set by the pyramid app in tornado.
Is it possible? What is the best way to do that? | 0 | python,session,cookies,tornado,pyramid | 2016-06-23T12:52:00.000 | 1 | 37,992,209 | The two locations are separate origins in HTTP language. By default, they should not share cookies.
Before trying to figure out how to pass cookies around I'd try to set up a front end web server like Nginx that would proxy requests between two different backend servers. Both applications could get their own path, served from www.abcd.com. | 0 | 83 | false | 1 | 1 | How to use pyramid cookie to authenticate user in tornado web framework? | 38,032,565 |
2 | 2 | 0 | 3 | 0 | 0 | 1.2 | 1 | I am building a web crawler which has to crawl hundreds of websites. My crawler keeps a list of urls already crawled. Whenever crawler is going to crawl a new page, it first searches the list of urls already crawled and if it is already listed the crawler skips to the next url and so on. Once the url has been crawled, it is added to the list.
Currently, I am using binary search to search the url list, but the problem is that once the list grows large, searching becomes very slow. So, my question is that what algorithm can I use in order to search a list of urls (size of list grows to about 20k to 100k daily).
Crawler is currently coded in Python. But I am going to port it to C++ or other better languages. | 0 | python,c++,algorithm,search | 2016-06-23T17:19:00.000 | 0 | 37,998,013 | You have to decide at some point just how large you want your crawled list to become. Up to a few tens of millions of items, you can probably just store the URLs in a hash map or dictionary, which gives you O(1) lookup.
In any case, with an average URL length of about 80 characters (that was my experience five years ago when I was running a distributed crawler), you're only going to get about 10 million URLs per gigabyte. So you have to start thinking about either compressing the data or allowing re-crawl after some amount of time. If you're only adding 100,000 URLs per day, then it would take you 100 days to crawl 10 million URLs. That's probably enough time to allow re-crawl.
If those are your limitations, then I would suggest a simple dictionary or hash map that's keyed by URL. The value should contain the last crawl date and any other information that you think is pertinent to keep. Limit that data structure to 10 million URLs. It'll probably eat up close to 2 GB of space, what with dictionary overhead and such.
You will have to prune it periodically. My suggestion would be to have a timer that runs once per day and cleans out any URLs that were crawled more than X days ago. In this case, you'd probably set X to 100. That gives you 100 days of 100,000 URLs per day.
If you start talking about high capacity crawlers that do millions of URLs per day, then you get into much more involved data structures and creative ways to manage the complexity. But from the tone of your question, that's not what you're interested in. | 0 | 361 | true | 0 | 1 | Efficiently searching a large list of URLs | 37,998,220 |
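A minimal Python sketch of that dictionary scheme (the 100-day window follows the numbers above; names are mine):

```python
import time

SEEN = {}                         # url -> timestamp of last crawl
MAX_AGE = 100 * 24 * 3600         # allow re-crawl after ~100 days

def should_crawl(url):
    last = SEEN.get(url)
    return last is None or time.time() - last > MAX_AGE

def mark_crawled(url):
    SEEN[url] = time.time()

def prune():
    # Run once per day: drop entries older than the window.
    cutoff = time.time() - MAX_AGE
    for url in [u for u, t in SEEN.items() if t < cutoff]:
        del SEEN[url]
```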
2 | 2 | 0 | -1 | 0 | 0 | -0.099668 | 1 | I am building a web crawler which has to crawl hundreds of websites. My crawler keeps a list of urls already crawled. Whenever crawler is going to crawl a new page, it first searches the list of urls already crawled and if it is already listed the crawler skips to the next url and so on. Once the url has been crawled, it is added to the list.
Currently, I am using binary search to search the url list, but the problem is that once the list grows large, searching becomes very slow. So, my question is that what algorithm can I use in order to search a list of urls (size of list grows to about 20k to 100k daily).
Crawler is currently coded in Python. But I am going to port it to C++ or other better languages. | 0 | python,c++,algorithm,search | 2016-06-23T17:19:00.000 | 0 | 37,998,013 | I think hashing your values before putting them into your binary-searched list would help: it gets rid of the probable bottleneck of string comparisons, swapping them for int equality checks. It also keeps the O(log2(n)) binary search time. You may not get consistent results if you use Python's builtin hash() between runs, however; it is implementation-specific. Within a run, it will be consistent. There's always the option to implement your own hash which can be consistent between sessions as well. | 0 | 361 | false | 0 | 1 | Efficiently searching a large list of URLs | 37,998,279
1 | 1 | 0 | 1 | 0 | 1 | 1.2 | 0 | What is the way to get the name of the package which creates specific dir under /usr/lib/python2.7/dist-packages/ on Ubuntu
For example I am trying to get the package name which installs /usr/lib/python2.7/dist-packages/hdinsight_common/ or /usr/lib/python2.7/dist-packages/hdinsight_common/decrypt.sh
can anyone help me with this ?
Thanks | 0 | python,python-2.7,ubuntu,pip | 2016-06-23T19:39:00.000 | 1 | 38,000,416 | Use dpkg -S <path...> for installed packages, or apt-file search <paths...> for packages that might not be installed. | 0 | 117 | true | 0 | 1 | How to get source packages from dir in /usr/lib/python2.7/dist-packages/ | 38,004,576 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | When I try to import passlib.hash in my python script I get a 502 error
502 - Web server received an invalid response while acting as a gateway or proxy server.
There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server.
The only modules I'm importing are:
import cgi, cgitb
import passlib.hash
passlib.hash works fine when I try in a normal python script or if I try importing in python interactive shell
using python 2.7, iis 8
when I browse on the localhost I get this
HTTP Error 502.2 - Bad Gateway
The specified CGI application misbehaved by not returning a complete set of HTTP headers. The headers it did return are "Traceback (most recent call last): File "C:##path remove##\test.py", line 2, in import passlib.hash ImportError: No module named passlib.hash ". | 0 | python-2.7,cgi,iis-8 | 2016-06-25T01:33:00.000 | 0 | 38,024,133 | I fixed the issue by uninstalling ActivePython, which was installing modules under the user's profile in the AppData folder.
This caused an issue where the anonymous user of the website no longer had access to the installed modules.
I uninstalled ActivePython, returned to the normal Windows Python install, and re-installed the modules using pip.
All scripts are working as expected, happy days. | 0 | 116 | false | 0 | 1 | Importing passlib.hash with CGI | 38,183,797 |
1 | 2 | 0 | 3 | 6 | 0 | 0.291313 | 0 | I'm trying to use AWS Lambda to transfer data from my S3 bucket to Couchbase server, and I'm writing in Python. So I need to import couchbase module in my Python script. Usually if there are external modules used in the script, I need to pip install those modules locally and zip the modules and script together, then upload to Lambda. But this doesn't work this time. The reason is the Python client of couchbase works with the c client of couchbase: libcouchbase. So I'm not clear what I should do. When I simply add in the c client package (with that said, I have 6 package folders in my deployment package, the first 5 are the ones installed when I run "pip install couchbase": couchbase, acouchbase, gcouchbase, txcouchbase, couchbase-2.1.0.dist-info; and the last one is the c client of Couchbase I installed: libcouchbase), lambda doesn't work and said:
"Unable to import module 'lambda_function': libcouchbase.so.2: cannot open shared object file: No such file or directory"
Any idea on how I can get this to work? With a lot of thanks. | 0 | python,couchbase,aws-lambda | 2016-06-25T18:34:00.000 | 0 | 38,031,729 | The following two things worked for me:
Manually copy /usr/lib64/libcouchbase.so.2 into your project folder
and zip it with your code before uploading to AWS Lambda.
Use Python 2.7 as runtime on the AWS Lambda console to connect to couchbase.
Thanks ! | 0 | 348 | false | 0 | 1 | How to create AWS Lambda deployment package that uses Couchbase Python client | 54,285,148 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 1 | My requirement is to communicate socketio with nodejs server to Raspberry Pi running a local Python app. Please help me. I can find ways of communication with web app on google but is there any way to communicate with Python local app with above mentioned requirements. | 0 | python,node.js,socket.io,raspberry-pi | 2016-06-25T20:17:00.000 | 0 | 38,032,608 | It's unclear exactly which part you need help with. To make a socket.io connection work, you do the following:
Run a socket.io server on one of your two computers. Make sure it is listening on a known port (it can share a port with a web server if desired).
On the other computer, get a socket.io client library and use that to make a socket.io connection to the other computer.
Register message handlers on both computers for whatever custom messages you intend to send each way and write the code to process those incoming messages.
Write the code to send messages to the other computer at the appropriate time.
Socket.io client and server libraries exist for both node.js and python, so you can use either type of library on either type of system.
The important thing to understand is that you must have a socket.io server up and running; the other endpoint then connects to that server. Once the connection is up and running, you can send messages from either end to the other.
For example, you could set up a socket.io server on node.js. Then, use a socket.io client library for python to make a socket.io connection to the node.js server. Once the connection is up and running, you are free to send messages from either end to the other and, if you have message handlers listening for those specific messages, they will be received by the other end. | 0 | 885 | true | 0 | 1 | Raspberry Pi python app and nodejs socketio communication | 38,032,700
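On the Python side, a sketch using the python-socketio client package (one of several available client libraries; the server address and event names here are assumptions):

```python
import socketio

sio = socketio.Client()

@sio.on("command")                 # hypothetical event sent by the node server
def on_command(data):
    print("received:", data)

sio.connect("http://node-server.local:3000")   # hypothetical server address
sio.emit("status", {"temp": 21.5})             # hypothetical custom message
sio.wait()
```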
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I have a Python function get_messages() that is able to retrieve messages from another application via a dll. These messages arrive at a rate of about 30hz, and I need to fill a buffer with these messages, while the main Python application is running and doing things with theses messages. I believe the filling of the buffer should occur in a separate thread. My question is: what is the best Pythonic way to retrieve these messages ? (running a loop in a separate thread is probably not the best solution). Is there a module that is dedicated to this sort of tasks? | 0 | python,ipc,dllimport | 2016-06-26T07:36:00.000 | 0 | 38,036,233 | Answer according to Doug Ross: consider the Asyncio module. | 0 | 86 | false | 0 | 1 | python - communicating with other applications at high rate | 38,201,278 |
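A minimal asyncio sketch of that producer/consumer setup (get_messages is the function from the question, handle is a hypothetical processing function, and asyncio.run needs Python 3.7+):

```python
import asyncio

buffer = []

async def poll():
    while True:                       # poll the dll wrapper at ~30 Hz
        buffer.extend(get_messages())
        await asyncio.sleep(1 / 30)

async def consume():
    while True:                       # drain the buffer as messages arrive
        while buffer:
            handle(buffer.pop(0))     # hypothetical processing function
        await asyncio.sleep(0.1)

async def main():
    await asyncio.gather(poll(), consume())

asyncio.run(main())
```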
1 | 1 | 0 | 2 | 3 | 1 | 0.379949 | 0 | I have a binary file that contains several complex numbers of type complex64? (i.e. four bytes of type float for the real part and another four bytes for the imaginary part). The real and imaginary parts are multiplexed so that the real part is stored first and followed by the imaginary part. | 0 | python | 2016-06-26T23:27:00.000 | 0 | 38,044,103 | I was able to reproduce the error you encounter by creating an array of complex64 from [0, 2+j, -3.14-7.99j], saving it to a file and reading it as Python built-in complex type.
The issue is that the built-in complex type stores each component as a C double, which, depending on your platform, is typically 64 bits per component rather than the 32 bits per component of complex64.
You must use numpy.fromfile('file_name', dtype=numpy.complex64) to read your file correctly, i.e. make sure the complex numbers are read as two 32-bit floating point numbers. | 0 | 1,833 | false | 0 | 1 | How to read a binary file of type complex64 values in Python | 38,082,041
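A sketch for reading one block of values when its start offset and length are known (the file name, record_offset, and n_values are placeholders):

```python
import numpy as np

with open("data.bin", "rb") as f:     # hypothetical file
    f.seek(record_offset)             # byte offset where the values start
    rec = np.fromfile(f, dtype=np.complex64, count=n_values)

print(rec.dtype, rec[:3])             # complex64 values, real part then imag
```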
1 | 1 | 0 | 1 | 1 | 0 | 1.2 | 0 | I attempted to change the character encoding to UTF-16 and it changed all of my text in Eclipse's text editor to Chinese. A ctrl-z saved my work, but now the console is stuck in Chinese.
When running an arbitrary python script, the script terminates immediately and gives the following message: "†䙩汥•䌺屄敶屗..." (The string goes on for much longer, but stackoverflow detects it as spam)
What does this mean? I've tried resetting things to default but to no avail. | 0 | python,eclipse,utf-8 | 2016-06-27T07:14:00.000 | 0 | 38,047,915 | Edit -> Set encoding UTF-16 screwed up my text again; another Ctrl-Z and Edit -> Set encoding ASCII fixed it. | 0 | 39 | true | 0 | 1 | unexpected Chinese output from eclipse console | 38,047,976
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | I´ve a "binary" file with variable size record. Each record is composed of an amount of little endians 2 byte-sized integer numbers. I know the start position of each record and it´s size.
What´s the fastest way to read this to a Python array of integers? | 0 | python,file,integer | 2016-06-28T13:26:00.000 | 0 | 38,077,570 | I don't think you can do better than opening the file, seeking to each record, and unpacking the values. Note that for 2-byte little-endian integers the struct format code is '<h' (signed) or '<H' (unsigned), not '<i' (which reads 4 bytes), and that file.read(2) gets you the two bytes of a single integer; you can unpack a whole record at once with struct.unpack('<%dh' % n, buf). | 0 | 82 | false | 0 | 1 | Reading integers from file using Python | 38,079,966
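A sketch of a fast way to pull one record into an array of integers using the array module (offset and size come from the question's known record positions; 'h' assumes signed values, use 'H' for unsigned):

```python
import sys
from array import array

def read_record(f, offset, size_bytes):
    f.seek(offset)
    a = array("h")                    # C short: 2 bytes per item
    a.fromfile(f, size_bytes // 2)    # number of integers in the record
    if sys.byteorder != "little":     # the file is little-endian
        a.byteswap()
    return a
```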
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I would like to use the mcp3008 to drive motors, or switch on led arrays for example, until now i only found how to read analog sensor using raspberry pi gpio.
thanks in advance | 0 | python,arduino,raspberry-pi,spi,raspberry-pi3 | 2016-06-28T16:34:00.000 | 0 | 38,081,695 | No you can't. MCP3008 is an analog-to-digital converter. It is an input device. | 0 | 94 | true | 0 | 1 | could i use a MCP3008 to output? | 38,116,678 |
1 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 1 | I need to send email with same content to 1 million users.
Is there any way to do so by writing script or something?
Email Id's are stored in Excel format. | 0 | python,python-2.7,smtp | 2016-06-29T04:50:00.000 | 0 | 38,090,643 | It is absolutely possible for a bot to be made that creates gmail accounts, in fact many already exist. The main problem is how to solve the captcha that is required for each new account, however there are services already built to handle this. The only problem then is being willing to violate googles terms of services, as I'm sure this does in one way or another. | 0 | 113 | false | 0 | 1 | Automatic email sending from a gmail account using script | 38,090,686 |
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | Im trying to modify the ~/src/mem/cache/ scripts and code to make a region base cache system for the ARM architecture. so far I have managed to change the SConscript so that a copy of cache.cc,cache.hh and Cache.py is built in the scons but I dont know where I should redirect the memory accessees to the region caches. In other words: I want to be able to direct some mem ref.s based on their mem. address to access D-cacheA and the rest to D-cacheB while cache A & B are the same. | 0 | python,c++,caching,memory-management,gem5 | 2016-06-29T08:56:00.000 | 1 | 38,094,764 | IIUC, you are trying to track misses to a phyAddr across cache levels. I think you can do that by modifying appropriate Request/Response in /src/mem/protocol/*-msg.sm | 0 | 228 | false | 0 | 1 | How can I create a region cache in gem5 | 43,447,954 |
1 | 1 | 0 | 1 | 0 | 1 | 0.197375 | 0 | I have some packages installed under my ~/.local/lib/python2.7/site-packages/ subdir, which was for use with system python (/usr/bin/python). Now I have just installed Anaconda python (which is also python 2.7, but minor version 11). The whole idea of Anaconda distro is to have a self-containing python environment, such that EVERY module resides within anaconda install tree.
But what annoys me is that for some reason I cannot disable inclusion of ~/.local/lib/python2.7/site-packages/ from sys.path although I did not have PYTHONPATH environment variable. Is it possible to run python executable (in this case, Anaconda's python executable) without having to implicitly add ~/.local/lib/python2.7/site-packages/ and the eggs underneath it in the python search path?
Why this problem? Unfortunately the ~/.local/lib/python2.7/site-packages/easy-install.pth also contains a reference to /usr/lib/python2.7/dist-packages, which causes this system-wide dist-packages to still be searched for. | 0 | python,anaconda | 2016-06-30T16:47:00.000 | 0 | 38,129,077 | Well, there is a -s flag on the python executable that disables searching the user site directory (~/.local/lib/python2.7/site-packages etc.). That solves the problem above! | 0 | 645 | false | 0 | 1 | How to run python without including ~/.local/lib/pythonX.Y/site-packages in its module search path | 38,129,078
2 | 2 | 0 | 3 | 5 | 0 | 0.291313 | 0 | What is the difference between StringIO and ByteIO? And what sorts of use cases would you use each one for? | 0 | python | 2016-06-30T18:42:00.000 | 0 | 38,130,962 | StringIO is for text. You use it when you have text in memory that you want to treat as coming from or going to a file. BytesIO is for bytes. It's used in similar contexts as StringIO, except with bytes instead of text. | 0 | 2,153 | false | 0 | 1 | What is the difference between StringIO and ByteIO? | 38,131,236 |
2 | 2 | 0 | 5 | 5 | 0 | 1.2 | 0 | What is the difference between StringIO and ByteIO? And what sorts of use cases would you use each one for? | 0 | python | 2016-06-30T18:42:00.000 | 0 | 38,130,962 | As the name says, StringIO works with str data, while BytesIO works with bytes data. bytes are raw data, e.g. 65, while str interprets this data, e.g. using the ASCII encoding 65 is the letter 'A'.
bytes data is preferable when you want to work with data agnostically - i.e. you don't care what is contained in it. For example, sockets only transmit raw bytes data.
str is used when you want to present data to users, or interpret at a higher level. For example, if you know that a file contains text, you can directly interpret the raw bytes as text. | 0 | 2,153 | true | 0 | 1 | What is the difference between StringIO and ByteIO? | 38,131,261 |
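A minimal demonstration of the two types from the io module:

```python
import io

s = io.StringIO()
s.write("hello")                 # str in...
assert s.getvalue() == "hello"   # ...str out

b = io.BytesIO()
b.write(b"\x41\x42")             # bytes in...
assert b.getvalue() == b"AB"     # ...bytes out
```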
1 | 4 | 0 | 0 | 1 | 0 | 0 | 1 | I've been trying to extract the domain names from a list of urls, so that http://supremecosts.com/contact-us/ would become http://supremecosts.com. I'm trying to find a clean way of doing it that will be adaptable to various gtlds and cctlds. | 0 | python | 2016-07-02T07:15:00.000 | 0 | 38,157,567 | Probably a silly, yet valid way of doing this is:
Save the URL in a string and scan it from back to front; as soon as you come across a full stop, scrap everything from three characters ahead of it. Note, though, that URLs can contain full stops after the domain name (e.g. in paths and file names), so this is only a rough heuristic. | 0 | 282 | false | 0 | 1 | Extract domain name only from url, getting rid of the path (Python) | 38,157,658
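For a robust alternative to string scanning, the standard library can do this directly; a sketch (in Python 2 the import is from urlparse import urlparse):

```python
from urllib.parse import urlparse

def base_url(url):
    p = urlparse(url)
    return "%s://%s" % (p.scheme, p.netloc)

print(base_url("http://supremecosts.com/contact-us/"))
# -> http://supremecosts.com
```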