Dataset schema (column: type, observed range):
Q_CreationDate: string (length 23)
Title: string (length 11 to 149)
Question: string (length 25 to 6.53k)
Answer: string (length 15 to 5.1k)
Score: float64 (-1 to 1.2)
Is_accepted: bool (2 classes)
N_answers: int64 (1 to 17)
Q_Id: int64 (0 to 6.76k)
2019-10-18 20:56:03.203
How to write in discord with discord.py without receiving a message?
I need to write some messages in discord with my bot, but I don't know how to do it. It seems that discord.py can't send messages autonomously. Does anyone know how to do it?
I solved it by putting a while loop inside the on_message function. That way I only need to send one message to trigger it, and then my bot can write as many messages as it wants.
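A minimal sketch of that workaround, assuming discord.py 1.x (2.x additionally requires intents); the channel, trigger text and token are placeholders:

```python
import asyncio
import discord

client = discord.Client()

@client.event
async def on_message(message):
    # ignore the bot's own messages so only a user can start the loop
    if message.author == client.user:
        return
    if message.content == "!start":
        # once triggered, keep posting autonomously
        while True:
            await message.channel.send("Hello, I can post on my own now")
            await asyncio.sleep(60)  # wait a minute between messages

client.run("YOUR_BOT_TOKEN")  # hypothetical token
```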
0
false
1
6,357
2019-10-20 03:58:47.513
How can I cancel an active boto3 s3 file_download?
I'm using boto3 to download files from an s3 bucket & I need to support canceling an active file transfer in my client UI - but I can't find how to do it. There is a progress callback that I can use for transfer status, but I can not cancel the transfer from there. I did find that boto3's s3transfer.TransferManager object has a .shutdown() member, but it is buggy (.shutdown() passes the wrong params to ._shutdown() a few lines below it) & crashes. Is there another way to safely cancel an active file_download?
Can you kill the process associated with the file? kill $(ps -ef | grep 'process-name' | awk '{print $2}')
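One way to apply that "kill the process" idea from Python itself is to run the transfer in a separate process that the UI can terminate; a rough sketch (bucket, key and paths are hypothetical):

```python
import multiprocessing
import boto3

def download(bucket, key, filename):
    # each process needs its own client
    s3 = boto3.client("s3")
    s3.download_file(bucket, key, filename)

# start the transfer in its own process so it can be killed from the UI
proc = multiprocessing.Process(
    target=download, args=("my-bucket", "big/file.bin", "/tmp/file.bin")
)
proc.start()

# ... later, when the user clicks "cancel":
proc.terminate()
proc.join()
# clean up the partially written file if needed
```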
0.201295
false
1
6,358
2019-10-20 13:24:41.453
How do I limit the number of times a character appears in a string in python?
I'm a beginner; it's been about 2 months since I started learning Python. I've written a function that takes two strings and outputs the common characters between those 2 strings. The issue with my code is that it returns all common characters that the two inputs share, including repeats. For example: for the input common, moron the output is "oommoon" when ideally it should be "omn". I've tried using the count() function, and then the replace function, but it ended up completely replacing the letters that appeared more than once in the output, as it should. How should I go about this? It's probably an easy problem for most of the people here, but what would the simplest approach be, such that I, a beginner with okay-ish knowledge of the basics, can understand it?
You can try this: ''.join(set(s1).intersection(set(s2)))
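Expanded into a small function (note that set intersection drops duplicates, which is what the question wants, but the result order is not guaranteed):

```python
def common_chars(s1, s2):
    # set() removes duplicates; intersection keeps only shared characters
    return ''.join(set(s1).intersection(set(s2)))

print(common_chars("common", "moron"))  # e.g. "omn" (order may vary)
```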
0
false
1
6,359
2019-10-20 20:53:03.697
How to iterate over a dictionary with tuples?
So, I need to iterate over a dictionary in Python where the keys are tuples and the values are integers. I only need to print out the keys and values. I tried this: for key, value in dict: but it didn't work, because it assigned the first element of the tuple to key and the second to value. So how should I do it?
Just use for key in dict and then access the value with dict[key]
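A small sketch of both ways, using a hypothetical dictionary d:

```python
d = {("a", 1): 10, ("b", 2): 20}

# iterate over keys only, then look up the value
for key in d:
    print(key, d[key])

# or unpack the tuple key explicitly via items()
for (first, second), value in d.items():
    print(first, second, value)
```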
0.101688
false
1
6,360
2019-10-21 08:36:05.530
how to use 1D-convolutional neural network for non-image data
I have a dataset that I have loaded as a data frame in Python. It consists of 21392 rows (the data instances, each row is one sample) and 79 columns (the features). The last column i.e. column 79 has string type labels. I would like to use a CNN to classify the data in this case and predict the target labels using the available features. This is a somewhat unconventional approach though it seems possible. However, I am very confused on how the methodology should be as I could not find any sample code/ pseudo code guiding on using CNN for Classifying non-image data, either in Tensorflow or Keras. Any help in this regard will be highly appreciated. Cheers!
You first have to know whether it is sensible to use a CNN for your dataset. You could use a sliding 1D-CNN if the features are sequential, e.g. ECG, DNA, or audio. However, I doubt that this is the case for you. Using a fully connected neural net would be a better choice.
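A minimal fully connected sketch in tf.keras, assuming the 78 numeric feature columns are in X and the string labels have been encoded to integers in y (layer sizes and the number of classes are arbitrary):

```python
import tensorflow as tf
from tensorflow.keras import layers

num_classes = 10  # hypothetical number of distinct labels

model = tf.keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(78,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X, y, epochs=10, validation_split=0.2)
```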
0.386912
false
1
6,361
2019-10-21 09:25:49.003
Training in Python and Deploying in Spark
Is it possible to train an XGboost model in python and use the saved model to predict in spark environment ? That is, I want to be able to train the XGboost model using sklearn, save the model. Load the saved model in spark and predict in spark. Is this possible ? edit: Thanks all for the answer , but my question is really this. I see the below issues when I train and predict different bindings of XGBoost. During training I would be using XGBoost in python, and when  predicting I would be using XGBoost in mllib. I have to load the saved model from XGBoost python (Eg: XGBoost.model file) to be predicted in spark, would this model be compatible to be used with the predict function in the mllib The data input formats of both XGBoost in python and XGBoost in spark mllib are different. Spark takes vector assembled format but with python, we can feed the dataframe as such. So, how do I feed the data when I am trying to predict in spark with a model trained in python. Can I feed the data without vector assembler ? Would XGboost predict function in spark mllib take non-vector assembled data as input ?
You can load and munge the data using PySpark SQL, then bring the data to the local driver using collect()/toPandas() (a performance bottleneck), then train XGBoost on the local driver. Then prepare the test data as an RDD, broadcast the XGBoost model to each RDD partition, and predict the data in parallel. This can all be in one script that you spark-submit, but to keep things concise I would recommend splitting train and predict into two scripts. Because steps 2-3 happen at the driver level and don't use any cluster resources, your workers are not doing anything during training.
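A rough sketch of the broadcast-and-predict part of that plan; train_sdf/test_sdf are hypothetical Spark DataFrames, feature_cols is a hypothetical list of feature column names, and the "id" column is assumed to exist:

```python
import pandas as pd
import xgboost as xgb

# bring training data to the driver and fit locally
train_pdf = train_sdf.toPandas()
booster = xgb.XGBClassifier().fit(train_pdf[feature_cols], train_pdf["label"])

# broadcast the fitted model to the executors
bc_model = spark.sparkContext.broadcast(booster)

# predict in parallel, one partition at a time
def predict_partition(rows):
    pdf = pd.DataFrame([r.asDict() for r in rows])
    if pdf.empty:
        return iter([])
    preds = bc_model.value.predict(pdf[feature_cols])
    return zip(pdf["id"], preds)

predictions = test_sdf.rdd.mapPartitions(predict_partition).collect()
```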
0
false
2
6,362
2019-10-21 09:25:49.003
Training in Python and Deploying in Spark
Is it possible to train an XGboost model in python and use the saved model to predict in spark environment ? That is, I want to be able to train the XGboost model using sklearn, save the model. Load the saved model in spark and predict in spark. Is this possible ? edit: Thanks all for the answer , but my question is really this. I see the below issues when I train and predict different bindings of XGBoost. During training I would be using XGBoost in python, and when  predicting I would be using XGBoost in mllib. I have to load the saved model from XGBoost python (Eg: XGBoost.model file) to be predicted in spark, would this model be compatible to be used with the predict function in the mllib The data input formats of both XGBoost in python and XGBoost in spark mllib are different. Spark takes vector assembled format but with python, we can feed the dataframe as such. So, how do I feed the data when I am trying to predict in spark with a model trained in python. Can I feed the data without vector assembler ? Would XGboost predict function in spark mllib take non-vector assembled data as input ?
You can run your Python script on Spark using the spark-submit command, so that your Python code runs on Spark and you can then predict the values in Spark.
0
false
2
6,362
2019-10-22 07:03:31.320
Converting the endianness type of an already existing binary file
I have a binary file on my PC that contains data in big-endian. The file is around 121 MB. The problem is that I would like to convert the data into little-endian with a Python script. What is currently giving me headaches is that I don't know how to convert an entire file. If I had a short hex string I could simply use struct.pack to convert it into little-endian, but if I see this correctly I can't give struct.pack a binary file as input. Is there another function/utility that I can use to do that, or what should my approach look like?
We need a document or knowledge of the file's exact structure. Suppose there is a 4-byte file. If this file holds just an int, we need to flip it. But if it is a combination of 4 chars, we should leave it as it is. Above all, you should find the structure; then we can talk about the translation. I don't think there are translation tools that support arbitrary data, so you need to parse that binary file following its structure.
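If the structure turns out to be uniform (say, a plain array of 4-byte unsigned integers, which is purely an assumption here, and a file size that is a multiple of 4), the swap could be done in chunks with struct:

```python
import struct

with open("data_be.bin", "rb") as src, open("data_le.bin", "wb") as dst:
    while True:
        chunk = src.read(4096)      # process the file in 4 KB blocks
        if not chunk:
            break
        n = len(chunk) // 4
        values = struct.unpack(">%dI" % n, chunk)   # read as big-endian uint32
        dst.write(struct.pack("<%dI" % n, *values)) # write as little-endian
```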
0
false
1
6,363
2019-10-24 17:06:41.350
How to solve problem related to BigQueryError "reason": "invalid", "location": "test", "debugInfo": "", "message": "no such field."
Has someone worked before with streaming data into (Google) BigQuery using Google Cloud Functions (insert_rows_from_dataframe())? My problem is that sometimes the table schema does not seem to be updated immediately, and when you try to load some data into the table right after creating a new field in the schema it returns an error: BigQueryError: [{"reason": "invalid", "location": "test", "debugInfo": "", "message": "no such field."}]" However, if I try to load again after a few seconds it all works fine. So my question is whether someone knows the maximum period of time in seconds for this update (on the BigQuery side), and whether it is possible to avoid this situation somehow?
Because the API operation on the BigQuery side is not atomic, you can't avoid this case. You can only mitigate the impact of this behavior: sleep and retry, or wrap insert_rows_from_dataframe() in a try/except and replay it several times (not infinitely, in case of a real problem, but 5 times for example) until it passes. Nothing is magic; if the consistency is not managed on one side, the other side has to handle it!
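A sketch of that retry idea around insert_rows_from_dataframe; whether the schema lag surfaces as returned row errors or as a raised BadRequest may vary, so both are handled here, and the table/dataframe names are placeholders:

```python
import time
from google.api_core.exceptions import BadRequest

def insert_with_retry(client, table, df, retries=5, delay=2):
    for attempt in range(retries):
        try:
            errors = client.insert_rows_from_dataframe(table, df)
            # the client returns one error list per chunk; all empty means success
            if not any(errors):
                return
        except BadRequest:
            pass  # e.g. "no such field" while the new schema propagates
        time.sleep(delay)  # give the schema change time to propagate
    raise RuntimeError("insert still failing after %d retries" % retries)
```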
0.386912
false
1
6,364
2019-10-24 18:04:12.393
Kivy_deps.glew.whl is not a supported wheel on this version
I was trying to install kivy_deps.glew(version).whl with pip install absolute/path/to/file/kivy_deps.glew and I get this error: kivy_deps.glew(version).whl is not a supported wheel on this version. I searched the web and saw some people say the problem is that you should have Python 2.7, and I have Python 3.7; the glew wheel's version tag is cp27. So if this is the problem, how do I install Python 2.7 and 3.7 at the same time and use both of them with pip (i.e. maybe pip2.7 install for Python 2.7 and pip install for Python 3.7)? P.S.: My PC doesn't have an internet connection, which is why I'm installing from a wheel file. I have installed all dependencies except glew and sdl2. If there is any unofficial file for these two for Python 3.7, please link it. I know this question has been asked before on Stack Overflow but I didn't get any solution from it (it had only one answer, though). Update: I uninstalled Python 3.7 and installed Python 2.7, but pip and python weren't recognized commands in cmd because Python 2.7 didn't come with pip, so I reinstalled Python 3.7.
I fixed it. I just changed cp27 to cp37 in the file name.
1.2
true
1
6,365
2019-10-24 18:19:04.017
How to connect ML model which is made in python to react native app
I made an ML model in Python and now I want to use this model in a React Native app. That means the frontend will be based on React Native while the model is made in Python. How can I connect the two with each other?
Create a REST API in Flask/Django to deploy your model on a server. Create endpoints for the separate functions, then call those endpoints in your React Native app. That's how it works.
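A minimal sketch of such an endpoint with Flask; the pickled model, feature layout and port are hypothetical, and the React Native app would POST JSON to /predict:

```python
from flask import Flask, request, jsonify
import pickle

app = Flask(__name__)

with open("model.pkl", "rb") as f:   # hypothetical pickled sklearn-style model
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    data = request.get_json()                      # e.g. {"features": [1.2, 3.4, 5.6]}
    prediction = model.predict([data["features"]])
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```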
0.135221
false
2
6,366
2019-10-24 18:19:04.017
How to connect ML model which is made in python to react native app
I made an ML model in Python and now I want to use this model in a React Native app. That means the frontend will be based on React Native while the model is made in Python. How can I connect the two with each other?
You can look into the CoreML library for a React Native application if you are developing for the iOS platform; otherwise creating a REST API is a good option. (Some developers say that latency is an issue, but it also depends on what kind of model and dataset you are using.)
0
false
2
6,366
2019-10-25 15:15:17.720
can pandas autocorr handle irregularly sample timeseries data?
I have a dataframe with datetime index, where the data was sampled irregularly (the datetime index has gaps, and even where there aren't gaps the spacing between samples varies). If I do: df['my column'].autocorr(my_lag) will this work? Does autocorr know how to handle irregularly sampled datetime data?
This is not quite a programming question. Ideally, your measure of autocorrelation would use data measured at the same frequency/same time interval between observations. Any autocorr function in any programming package will simply measure the correlation between the series and whatever lag you want; it will not correct for irregular frequencies. You would have to fix this yourself by 1) setting up a series with a regular frequency, 2) mapping the actual values you have onto that date structure, 3) interpolating values where you have gaps/NaN, and then 4) running your autocorr. Long story short, autocorr will not do all this work for you. If I have misunderstood the problem you are worried about, let me know. It would be helpful to know a little more about the sampling frequencies; I have had to deal with things like this a lot.
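A sketch of those four steps with pandas, assuming the DatetimeIndex from the question; the column name, resampling frequency and lag are placeholders:

```python
import pandas as pd

s = df["my column"]

# 1-2) put the data on a regular frequency (here: daily averages)
regular = s.resample("D").mean()

# 3) fill the gaps that resampling creates
regular = regular.interpolate()

# 4) autocorrelation at the lag you care about
print(regular.autocorr(lag=1))
```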
0
false
1
6,367
2019-10-25 16:46:06.047
Should modules always contain a class?
I'm writing a module which only contains functions. Is it good practice to put these inside a class, even if there are no class arguments and the __init__ function is pointless? And if so how should I write it?
It is good to build modules that contain a class for better organization and manipulation, depending on how big the code is and how it will be used, but yes, it is good to get used to building classes with methods in them. Can you post your code?
0
false
1
6,368
2019-10-26 03:37:29.547
Internet checksum -- Adding hex numbers together for checksum
I came across the following example of creating an Internet checksum: Take the example IP header 45 00 00 54 41 e0 40 00 40 01 00 00 0a 00 00 04 0a 00 00 05. Adding the fields together yields the two's complement sum 01 1b 3e. Then, to convert it to one's complement, the carry-over bits are added to the first 16 bits: 1b 3e + 01 = 1b 3f. Finally, the one's complement of the sum is taken, resulting in the checksum value e4c0. I was wondering how the IP header is added together to get 01 1b 3e?
The IP header is added together with carry in hexadecimal numbers of 4 digits. i.e. the first 3 numbers that are added are 0x4500 + 0x0054 + 0x41e0 +...
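A small sketch of that whole calculation in Python, using the header bytes from the question:

```python
header = bytes.fromhex("45 00 00 54 41 e0 40 00 40 01 00 00 0a 00 00 04 0a 00 00 05")

# sum the header as 16-bit big-endian words
total = sum(int.from_bytes(header[i:i + 2], "big") for i in range(0, len(header), 2))
print(hex(total))            # 0x11b3e  -> the "01 1b 3e" from the example

# fold the carry back into the low 16 bits (one's complement addition)
while total > 0xFFFF:
    total = (total & 0xFFFF) + (total >> 16)
print(hex(total))            # 0x1b3f

# final checksum is the one's complement of the folded sum
checksum = ~total & 0xFFFF
print(hex(checksum))         # 0xe4c0
```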
0.201295
false
1
6,369
2019-10-27 18:36:01.290
Access to data in external hdd from jupyter notebook
I am a Python 3 beginner, and I've been stuck on how to use my data in my scripts. My data is stored on an external HDD and I am looking for a way to retrieve the data for use in a program in a Jupyter notebook. Does anyone know how to access an external HDD?
Hard to say what the issue is without seeing any code. In general make sure your external hard drive is connected to your machine, and when loading your data (depends on what kind of data you want to use) specify the full path to your data.
1.2
true
1
6,370
2019-10-28 17:00:37.963
Scheduling Emails with Django?
I want to schedule emails using Django. Example ---> I want to send registered users their shopping cart information everyday at 5:00 P.M. How would I do this using Django? I have read a lot of articles on this problem but none of them have a clear and definite solution. I don't want to implement a workaround. Whats the proper way of implementing this? Can this be done within my Django project or do I have to use some third-party service? If possible, please share some code. Otherwise, details on how I can implement this will do.
There's no built-in way to do what you're asking. What you could do, though, is write a management command that sends the emails off and then have a crontab entry that calls that command at 5PM (this assumes your users are in the same timezone as your server). Another alternative is using celery and celery-beat to create scheduled tasks, but that would require more work to set up.
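A sketch of that management-command approach; the app, user query and email contents are hypothetical, and the crontab line at the bottom would run it daily at 17:00:

```python
# yourapp/management/commands/send_cart_emails.py
from django.contrib.auth import get_user_model
from django.core.mail import send_mail
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Email every registered user their cart summary"

    def handle(self, *args, **options):
        for user in get_user_model().objects.filter(is_active=True):
            send_mail(
                "Your cart today",
                "...cart contents here...",     # build from your cart model
                "shop@example.com",
                [user.email],
            )

# crontab entry (crontab -e):
# 0 17 * * * /path/to/venv/bin/python /path/to/manage.py send_cart_emails
```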
0.386912
false
1
6,371
2019-10-28 17:46:00.680
Storing multiple values in one column
I am designing a web application that has users becoming friends with other users. I am storing the users' info in a database using sqlite3. I am brainstorming how to keep track of who is friends with whom. What I am thinking so far is to make a column in my database called Friendships where I store the various user_ids (integers) of the user's friends. I would have to store multiple integers in one column... how would I do that? Is it possible to store a Python list in a column? I am also open to other ideas on how to store the friendship network information in my database. The application runs through Flask.
It is possible to store a list as a string into an sql column. However, you should instead be looking at creating a Friendships table with primary keys being the user and the friend. So that you can call the friendships table to pull up the list of friends. Otherwise, I would suggest looking into a Graph Database, which handles this kind of things well too.
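A sketch of that separate Friendships table with sqlite3 (table and column names are just examples):

```python
import sqlite3

conn = sqlite3.connect("app.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS friendships (
        user_id   INTEGER NOT NULL,
        friend_id INTEGER NOT NULL,
        PRIMARY KEY (user_id, friend_id)
    )
""")

# record that user 1 is friends with user 42
conn.execute("INSERT OR IGNORE INTO friendships VALUES (?, ?)", (1, 42))
conn.commit()

# list all friends of user 1
friends = conn.execute(
    "SELECT friend_id FROM friendships WHERE user_id = ?", (1,)
).fetchall()
```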
0.135221
false
2
6,372
2019-10-28 17:46:00.680
Storing multiple values in one column
I am designing a web application that has users becoming friends with other users. I am storing the users' info in a database using sqlite3. I am brainstorming how to keep track of who is friends with whom. What I am thinking so far is to make a column in my database called Friendships where I store the various user_ids (integers) of the user's friends. I would have to store multiple integers in one column... how would I do that? Is it possible to store a Python list in a column? I am also open to other ideas on how to store the friendship network information in my database. The application runs through Flask.
What you are trying to do here is called a "many-to-many" relationship. Rather than making a "Friendships" column, you can make a "Friendship" table with two columns: user1 and user2. Entries in this table indicate that user1 has friended user2.
0.135221
false
2
6,372
2019-10-30 12:29:29.603
Display two animations at the same time with Manim
Manim noobie here. I am trying to run two animations at the same time, notably, I'm trying to display a dot transitioning from above ending up between two letters. Those two letters should create some space in between in the meantime. Any advice on how to do so? Warm thanks in advance.
To apply two transformations at the same time, you can do self.play(Transformation1, Transformation2). This way, since the two Transformations are in the same play statement, they will run simultaneously.
1.2
true
1
6,373
2019-10-31 10:58:42.647
Bloomberg API how to get only the latest quote to a given time specified by the user in Python?
I need to query from the BBG API the nearest quote to 14:00 o'clock for a number of FX currency pairs. I read the developers guide and I can see that reference data request provides you with the latest quote available for a currency however if I run the request at 14.15 it will give me the nearest quote to that time not 14.00. Historical and intraday data output too many values as I need only the latest quote to a given time. Would you be able to advise me if there is a type of request which will give me what I am looking for.
Further to previous suggestions, you can start a subscription to the //blp/mktdata service before 14:00 for each instrument to receive a stream of real-time ticks. Cache the last tick; when hitting 14:00, mark the cache as pre-14:00, then mark the first tick after as post-14:00, and select the one nearest to 14:00 from the two.
0
false
1
6,374
2019-11-01 03:45:32.480
In a Python bot, how to run a function only once a day?
I have a Python bot running PRAW for Reddit. It is open source and thus users could schedule this bot to run at any frequency (e.g. using cron). It could run every 10 minutes, or every 6 hours. I have a specific function (let's call it check_logs) in this bot that should not run every execution of this bot, but rather only once a day. The bot does not have a database. Is there a way to accomplish this in Python without external databases/files?
Generally speaking, it's better (and easier) to use an external database or file. But if you absolutely need to avoid that, you could also: 1) modify the script itself, e.g. store the date of the last run in a commented-out last line of the script; 2) store the date of the last run on the web, for example in a Reddit post, a Google Doc, a draft email, or a site like Pastebin; or 3) change the "modified date" of the script itself and use that as a reference.
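A sketch of the third idea, using the script's own modification time as the "last run" marker (no database or extra file needed, though it does rewrite the file's mtime):

```python
import os
import time

def should_run_daily_task(marker=__file__):
    # run only if the marker file was last touched more than a day ago
    last_run = os.path.getmtime(marker)
    if time.time() - last_run >= 24 * 60 * 60:
        os.utime(marker, None)   # stamp "now" as the new last-run time
        return True
    return False

if should_run_daily_task():
    check_logs()   # the once-a-day function from the question
```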
0
false
1
6,375
2019-11-01 12:00:44.503
how to solve fbs error 'Can not find path ./libshiboken2.abi3.5.13.dylib'?
I have been able to freeze a Python/PySide2 script with fbs on macOS, and the app seems to work. However, I got some errors from the freeze process stating: Can not find path ./libshiboken2.abi3.5.13.dylib. Does anyone know how to fix that?
Try using --runtime-tmpdir, because the generated exe needs the file libshiboken2.abi3.5.13.dylib at run time and is unable to hook it. Solution: add --add-data and --runtime-tmpdir to the pyinstaller command line: pyinstaller -F --add-data "path/libshiboken2.abi3.5.13.dylib":"PATH" --runtime-tmpdir temp_dir_name your_program.py where PATH is the directory the file is looked for in, and -F means one file.
0
false
1
6,376
2019-11-02 21:15:06.757
How to get to the first 4 numbers of an int number ? and also the 5th and 6th numbers for example
I have a function that checks whether a date (an int) written in the format "YYYYMMDD" is valid or not. My question is how do I get the first 4 digits, for example (the year)? And likewise the month (the 5th and 6th digits) and the days. Thanks.
Probably the easiest way would be to convert it to a string and use substrings or regular expressions. If you need performance, use a combination of modulo and division by powers of 10 to extract the desired parts.
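Both approaches in a few lines, for a date given as the integer 20191102:

```python
date = 20191102

# string slicing
s = str(date)
year, month, day = int(s[:4]), int(s[4:6]), int(s[6:8])

# pure arithmetic with integer division and modulo
year2 = date // 10000          # 2019
month2 = (date // 100) % 100   # 11
day2 = date % 100              # 2
```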
0.201295
false
1
6,377
2019-11-02 21:21:16.990
Can aubio be used to detect rhythm-only segments?
Does aubio have a way to detect sections of a piece of audio that lack tonal elements -- rhythm only? I tested a piece of music that has 16 seconds of rhythm at the start, but all the aubiopitch and aubionotes algorithms seemed to detect tonality during the rhythmic section. Could it be tuned somehow to distinguish tonal from non-tonal onsets? Or is there a related library that can do this?
Use a spectrum analyser to detect sections with high amplitude. If you program, you could take each section and make an average of the frequencies (and amplitudes) present to give you an idea of the instrument(s) involved in creating that amplitude peak. Hope that helps; if you're using Python I could give you some pointers on how to program this. Regards, Tony
0
false
1
6,378
2019-11-03 23:03:00.293
Project organization with Tensorflow.keras. Should one subclass tf.keras.Model?
I'm using Tensorflow 1.14 and the tf.keras API to build a number (>10) of different neural networks. (I'm also interested in the answers to this question using Tensorflow 2). I'm wondering how I should organize my project. I convert the keras models into estimators using tf.keras.estimator.model_to_estimator and Tensorboard for visualization. I'm also sometimes using model.summary(). Each of my models has a number (>20) of hyperparameters and takes as input one of three types of input data. I sometimes use hyperparameter optimization, such that I often manually delete models and use tf.keras.backend.clear_session() before trying the next set of hyperparameters. Currently I'm using functions that take hyperparameters as arguments and return the respective compiled keras model to be turned into an estimator. I use three different "Main_Datatype.py" scripts to train models for the three different input data types. All data is loaded from .tfrecord files and there is an input function for each data type, which is used by all estimators taking that type of data as input. I switch between models (i.e. functions returning a model) in the Main scripts. I also have some building blocks that are part of more than one model, for which I use helper functions returning them, piecing together the final result using the Keras functional API. The slight incompatibilities of the different models are beginning to confuse me and I've decided to organize the project using classes. I'm planning to make a class for each model that keeps track of hyperparameters and correct naming of each model and its model directory. However, I'm wondering if there are established or recommended ways to do this in Tensorflow. Question: Should I be subclassing tf.keras.Model instead of using functions to build models or python classes that encapsulate them? Would subclassing keras.Model break (or require much work to enable) any of the functionality that I use with keras estimators and tensorboard? I've seen many issues people have with using custom Model classes and am somewhat reluctant to put in the work only to find that it doesn't work for me. Do you have other suggestions on how to better organize my project? Thank you very much in advance.
Subclass only if you absolutely need to. I personally prefer the following order of implementation; if the complexity of the model you are designing cannot be achieved using the first two options, then of course subclassing is the only option left: 1) tf.keras Sequential API, 2) tf.keras Functional API, 3) subclassing tf.keras.Model.
1.2
true
1
6,379
2019-11-04 08:48:35.323
How to automate any application variable directly without GUI with Python?
I need to automate some workflows to control some Mac applications. I have found a way to do this with the PyAutoGUI module, but I don't want to simulate keyboard or mouse actions anymore. I think it would be better if I could get at the variables behind the GUI elements and program with them directly. How can I do this?
This is not possible unless the application has some kind of api. For Web GUIs you can use Selenium and directly select the DOM elements.
0.386912
false
1
6,380
2019-11-04 20:54:43.360
Is it possible to use socketCAN protocol on MacOS
I am looking to connect to a car wirelessly using the socketCAN protocol on macOS, using the python-can module on Python 3. I don't know how to install the socketCAN protocol on macOS. Please help.
SocketCAN is implemented only for the Linux kernel. So it is not available on other operating systems. But as long as your CAN adapter is supported by python-can, you don't need SocketCAN.
0
false
1
6,381
2019-11-05 07:32:04.260
Is there any built-in functionality in django to call a method on a specific day of the month?
Brief intro of the app: I'm working on an MLM web app and want to make payments on the 15th and the last day of every month. The calculation affects every user whenever a new user comes into the system. What I did [research]: the django-crontab extension and Celery. My questions are: regarding the database insert/update queries, on the 15th hundreds of rows are generated with income calculations for users, so is there a better option for doing that? And how do I observe missed and failed query transactions? Please guide me on how to do this with Django. Thanks to everyone!
For your 1st question, I don't think there will be any issue if you're using Celery and Celery Beat for scheduling this task. Assuming your production server has 2 cores (so 4 threads, hopefully), you can configure your Celery worker (not the Beat scheduler) to run with one worker process using one or two threads. On the 15th of a month, Beat will see that a task is due and will call your Celery worker to accomplish it. While doing this your worker will be using one thread and the other threads will remain free (so your server won't go down). There are different ways to configure your Celery worker depending on your use case (e.g. using gevent rather than regular threads), but the basic config should be fine. As for the second question, I think you should keep a column in your table to track which rows were successfully handled by your code and which failed. Celery dashboards will only show whether the task as a whole succeeded or not, and won't give any further insight. Hope this helps!
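A sketch of what the Beat schedule for the 15th and the end of the month might look like; the task path and the Celery app object name are placeholders:

```python
from celery.schedules import crontab

app.conf.beat_schedule = {
    "payout-mid-month": {
        "task": "payments.tasks.run_payouts",
        "schedule": crontab(minute=0, hour=0, day_of_month="15"),
    },
    "payout-end-of-month": {
        # 28-31 covers every month; the task itself should check that
        # "today" really is the last day before paying out
        "task": "payments.tasks.run_payouts",
        "schedule": crontab(minute=0, hour=0, day_of_month="28-31"),
    },
}
```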
1.2
true
1
6,382
2019-11-05 12:45:32.743
Cluster identification with NN
I have a dataframe containing the coordinates of millions of particles which I want to use to train a neural network. These particles build individual clusters which are already identified and labeled; meaning that every particle is already assigned to its correct cluster (this assignment is done by a density estimation, but for my purpose that is not that relevant). The challenge is now to build a network which does this clustering after learning from the huge data. There are also a few more features in the dataframe, like cluster size, amount of particles in a cluster, etc. Since this is not a classification problem but more of a cluster-identification challenge, what kind of neural network should I use? I also have problems building this network: for example, a CNN which classifies whether there is a dog or cat in a picture has an output that is obviously binary, so the last layer just consists of two outputs which represent the probability of being 1 or 0. But how can I implement the last layer when I want to identify clusters? During my research I heard about self-organizing maps. Would these networks do the job? Thank you.
These particles build individual clusters which are already identified and labeled; meaning that every particle is already assigned to its correct cluster (this assignment is done by a density estimation but for my purpose not that relevant). the challenge is now to build a network which does this clustering after learning from the huge data. Sounds pretty much like a classification problem to me. Images themselves can build clusters in their image space (e.g. a vector space of dimension width * height * RGB). since this is not a classification problem but more a identification of clusters-challenge what kind of neural network should i use? You have data of coordinates, you have labels. Start with a simple fully connected single/multi-layer-perceptron i.e. vanilla NN, with as many outputs as number of clusters and softmax-activation function. There are tons of blogs and tutorials for Deep Learning libraries like keras out there in the internet.
0
false
2
6,383
2019-11-05 12:45:32.743
Cluster identification with NN
I have a dataframe containing the coordinates of millions of particles which I want to use to train a neural network. These particles build individual clusters which are already identified and labeled; meaning that every particle is already assigned to its correct cluster (this assignment is done by a density estimation, but for my purpose that is not that relevant). The challenge is now to build a network which does this clustering after learning from the huge data. There are also a few more features in the dataframe, like cluster size, amount of particles in a cluster, etc. Since this is not a classification problem but more of a cluster-identification challenge, what kind of neural network should I use? I also have problems building this network: for example, a CNN which classifies whether there is a dog or cat in a picture has an output that is obviously binary, so the last layer just consists of two outputs which represent the probability of being 1 or 0. But how can I implement the last layer when I want to identify clusters? During my research I heard about self-organizing maps. Would these networks do the job? Thank you.
If you want to treat clustering as a classification problem, then you can try to train the network to predict whether two points belong to the same clusters or to different clusters. This does not ultimately solve your problems, though - to cluster the data, this labeling needs to be transitive (which it likely will not be) and you have to label n² pairs, which is expensive. Furthermore, because your clustering is density-based, your network may need to know about further data points to judge which ones should be connected...
0.201295
false
2
6,383
2019-11-05 20:27:30.517
Implementing a built in GUI with pymunk and pygame in Python?
I am looking to make a python program in which I can have a sidebar GUI along with an interactive 2d pymunk workspace to the right of it, which is to be docked within the same frame. Does anyone know how I might implement this?
My recommendation is to use pygame as your display. If an object is chosen, you can add it to the pymunk space at the same time as using pymunk to get each body's space and draw it onto the display. This is how I've written my games.
0
false
1
6,384
2019-11-06 17:07:27.240
Set PYTHONPATH for local Jupyter Notebook in VS Code
I'm using Visual Studio Code 1.39.2 on Windows 10. I'm very happy that you can run Jupyter Notebook natively through VS Code as of October this year (2019), but one thing I don't get right is how to set my PYTHONPATH prior to booting up a local Jupyter server. What I want is to be able to import a certain module which is located in another folder (because the module is compiled from C++ code). When I run a normal Python debugging session, I found out that I can set environment variables of the integrated terminal via the setting terminal.integrated.env.linux. Thus, I set my PYTHONPATH through this option when debugging as normal. But when running a Jupyter Notebook, the local Jupyter server doesn't seem to run in the integrated terminal (at least not from what I can see), so it doesn't have the PYTHONPATH set. My question is then, how can I automatically have the PYTHONPATH set for my local Jupyter Notebook servers in VS Code?
I'm a developer on this extension. If you have a specific path for module resolution we provide a setting for the Jupyter features called: Python->Data Science: Run Startup Commands That setting will run a series of python instructions in any Jupyter session context when starting up. In that setting you could just append that path that you need to sys.path directly and then it will run and add that path every time you start up a notebook or an Interactive Window session.
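For example, the value of that setting could be something along these lines (the path is hypothetical):

```python
import sys
sys.path.append("/path/to/my/compiled/module")
```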
0.545705
false
1
6,385
2019-11-07 02:14:50.033
Script that opens cmd from spyder
I am working on a text adventure with Python, and the issue I am having is getting Spyder to open an interactive cmd window. So far I have tried os.systems('cmd / k') to open it, which did open a window, but I could not get any code to run and kept getting an "app could not run this file" error. My current code runs off an import module that pulls the actual adventure from another source code file. How can I make it so that only one file runs and opens the cmd window to play the text adventure?
(Spyder maintainer here) Cmd windows are hidden by default because there are some packages that open lot of them while running code (e.g. pyomo). To change this behavior, you need to go to Tools > Preferences > IPython console > Advanced settings > Windows adjustments and deactivate the option called Hide command line output windows generated by the subprocess module.
0.201295
false
1
6,386
2019-11-07 08:11:10.283
Retrieving information from a POST without forms in Django
I'm developing something like an API (more like a communications server? Idk what to call it!) to receive data from a POST message from an external app. Basically this other app will encounter an error, then it sends an error ID in a post message to my API, then I send off an email to the affected account. My question is how do I handle this in Django without any form of UI or forms? I want this to pretty much be done quietly in the background. At most a confirmation screen that the email is sent. I'm using a LAMP stack with Python/Django instead of PHP.
A Django view doesn't have to use a form. Everything that was POSTed is there in request.POST which you may access directly. (I commonly do this to see which of multiple submit buttons was clicked). Forms are a good framework for validating the data that was POSTed, but you don't have to use their abilities to generate content for rendering. If the data is validated in the front-end, you can use the form validation framework to check against front-end coding errors and malicious POSTs not from your web page, and simply process the cleaned_data if form.is_valid() and do "Something went wrong" if it didn't (which you believe to be impossible, modulo front-end bugs or malice).
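A bare-bones sketch of such a view; the field names, sender address and email body are made up. It reads request.POST directly and returns a short confirmation:

```python
from django.core.mail import send_mail
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt                      # the external app won't have a CSRF token
def report_error(request):
    if request.method != "POST":
        return HttpResponse(status=405)
    error_id = request.POST.get("error_id", "")
    account_email = request.POST.get("email", "")
    send_mail("An error occurred", f"Error ID: {error_id}",
              "noreply@example.com", [account_email])
    return HttpResponse("Email sent")
```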
1.2
true
1
6,387
2019-11-07 08:18:43.410
PyFPDF can't add page while specifying the size
In the pyfpdf documentation it is said that it is possible to specify a format while adding a page (fpdf.add_page(orientation = '', format = '', same = False)), but it gives me an error when specifying a format. Error: pdf.add_page(format = (1000,100)) TypeError: add_page() got an unexpected keyword argument 'format'. I've installed pyfpdf via pip install and setup.py install but it doesn't work either way. How can I solve this?
Your problem is that two packages provide the fpdf module: fpdf and fpdf2. They both use from fpdf import FPDF, but only fpdf2 also has a format= keyword in the add_page() method. So you need to install the fpdf2 package.
0.201295
false
1
6,388
2019-11-07 11:18:08.470
Sharing variables between Python subprocesses
I have a Python program named read.py which reads data from serial communication every second, and another Python program called calculate.py which has to take the real-time values from read.py. Using subprocess.Popen('read.py', shell=True) I am able to run read.py from calculate.py. May I know how to read or use the value from read.py in calculate.py? Since the value changes every second I am confused about how to proceed: saving the value in registers, a producer/consumer setup, etc. For example, with datetime, whenever strftime('%s') is used the current seconds value is given; how do I use the same kind of technique to use a variable from another script?
I can suggest writing values to a .txt file for later reading
0.386912
false
1
6,389
2019-11-07 18:11:15.747
Redeploying a Flask app in Google App Engine
How do I redeploy an updated version of the Flask web app in Google App Engine. For example, I have running web app and now there are new features added into it and needs redeployment. How can I do that? Also how to remove the previous version.
Add --no-promote to your deploy command if you want to deploy without routing traffic to the newly deployed version.
0
false
1
6,390
2019-11-08 13:21:30.283
Select a "mature" curve that best matches the slope of a new "immature" curve
I have a multitude of mature curves (days are plotted on the X axis and the data is >= 90 days old, so the curve is well developed). Once a week I get a new set of data that is anywhere between 0 and 14 days old. All of the data (old and new), when plotted, follows a log curve (in shape) but with different slopes. So some weeks have a higher slope and the curve goes higher; some a smaller slope and the curve is lower. At 90 days all curves flatten. From the set of "mature curves" I need to select the one whose slope best matches the slope of my newly received data. Also, from that mature curve I then select the Y-value at 90 days and associate it with my "immature"/new curve. Any suggestions on how to do this? I can't seem to find any info. Thanks much!
This seems more like a mathematical problem than a coding problem, but I do have a solution. If you want to find how similar two curves are, you can use box-differences or just differences. You calculate or take the y-values of the two curves for each x value shared by both the curves (or, if they share no x-values because, say, one has even and the other odd values, you can interpolate those values). Then you take the difference of the two y-values for every x-value. Then you sum up those differences for all x-values. The resulting number represents how different the two curves are. Optionally, you can square all the values before summing up, but that depends on what definition of "likeness" you are using.
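A sketch of that comparison with NumPy, assuming each mature curve and the new curve have already been evaluated on the same day grid (here, the first N days the new curve covers); the data structures are hypothetical:

```python
import numpy as np

def best_matching_curve(new_curve, mature_curves):
    """new_curve: array of the first N days; mature_curves: dict name -> 90-day array."""
    n = len(new_curve)
    scores = {}
    for name, curve in mature_curves.items():
        diff = curve[:n] - new_curve          # compare only the overlapping days
        scores[name] = np.sum(diff ** 2)      # sum of squared differences
    best = min(scores, key=scores.get)        # smallest difference = closest slope
    return best, mature_curves[best][89]      # Y-value at day 90 of the best match

# best_name, y_at_90 = best_matching_curve(new_14_days, curves_by_week)
```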
0
false
1
6,391
2019-11-09 22:53:51.393
Get N random non-overlapping substrings of length K
Let's say we have a string R of length 20000 (or another arbitrary length). I want to get 8 random non-overlapping sub strings of length k from string R. I tried to partition string R into 8 equal length partitions and get the [:k] of each partition but that's not random enough to be used in my application, and the condition of the method to work can not easily be met. I wonder if I could use the built-in random package to accomplish the job, but I can't think of a way to do it, how can I do it?
You could simply run a loop, and inside the loop use the random package to pick a starting index, and extract the substring starting at that index. Keep track of the starting indices that you have used so that you can check that each substring is non-overlapping. As long as k isn't too large, this should work quickly and easily. The reason I mention the size of k is because if it is large enough, then it could be possible to select substrings that don't allow you to find 8 non-overlapping ones. But that only needs to be considered if k is quite large with respect to the length of the original string.
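A sketch of that loop with the random module; it keeps drawing start indices until it has 8 that don't overlap (two substrings of length k don't overlap when their starts are at least k apart):

```python
import random

def random_substrings(r, k, n=8, max_tries=10000):
    starts = []
    for _ in range(max_tries):
        if len(starts) == n:
            break
        s = random.randrange(len(r) - k + 1)
        # accept only if it doesn't overlap any previously chosen start
        if all(abs(s - other) >= k for other in starts):
            starts.append(s)
    if len(starts) < n:
        raise ValueError("could not place %d non-overlapping substrings" % n)
    return [r[s:s + k] for s in starts]

# pieces = random_substrings(R, k=50)
```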
0
false
1
6,392
2019-11-10 10:49:15.520
How to run previously created Django Project
I am new to Django. I am using Python 3.7 with Django 2.2.6. My Django development environment is as the below. I am using Microsoft Visual Studio Code on a Windows 8.1 computer To give the commands I am using 'DOS Command Prompt' & 'Terminal window' in VS Code. Created a virtual environment named myDjango Created a project in the virtual environment named firstProject Created an app named firstApp. At the first time I could run the project using >python manage.py runserver Then I had to restart my computer. I was able to go inside the previously created virtual environment using workon myDjango command. But my problem is I don't know how to go inside the previously created project 'firstProject' and app 'firstApp' using the 'Command prompt' or using the 'VSCode Terminal window' Thanks and regards, Chiranthaka
Simply navigate to the folder containing the app you want to manage using the command prompt. The first cd should contain your parent folder. The second cd should contain the folder that has your current project. The third cd should be the specific app you want to work on. After that, you can use the python manage.py to keep working on your app.
1.2
true
1
6,393
2019-11-10 15:41:49.253
python baserequesthandler client address is real established ip?
I have a question for you. I'm using a UDP socket server built on BaseRequestHandler in Python, and I want to protect the server against spoofing (source address changes). Is client_address the actual IP address of the client that sent the datagram to the server? If not, how do I get the actual address?
Authenticate the packets so that you know that every message in session X from source address Y is from the same client. By establishing a shared session key which is then used along with a sequence number to produce a hash of the packet keyed by the (sequence, session_key) pair. Which is then included in every packet. This can be done in both directions protecting both the client and server. When you receive a packet you use its source address and the session number to look up the session, then you compute HMAC((sequence, session_key), packet) and check if the MAC field in the message matches. If it doesn't discard the message. This might not be a correct protocol but it is close enough to demonstrate the principle.
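A sketch of computing and checking that per-packet MAC with the standard hmac module; the 8-byte sequence field and SHA-256 digest size are arbitrary choices, and session-key establishment is out of scope here:

```python
import hashlib
import hmac
import struct

def sign(session_key, sequence, payload):
    msg = struct.pack("!Q", sequence) + payload
    mac = hmac.new(session_key, msg, hashlib.sha256).digest()
    return struct.pack("!Q", sequence) + mac + payload   # what goes on the wire

def verify(session_key, packet):
    sequence = struct.unpack("!Q", packet[:8])[0]
    mac, payload = packet[8:40], packet[40:]
    expected = hmac.new(session_key, packet[:8] + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        return None          # discard spoofed or tampered packets
    return sequence, payload
```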
0
false
1
6,394
2019-11-10 19:22:04.930
How to use PostgreSQL in Python Pyramid without ORM
Do I need SQLAlchemy if I want to use PostgreSQL with Python Pyramid, but I do not want to use the ORM? Or can I just use the psycopg2 directly? And how to do that?
Even if you do not want to use ORM, you can still use SQLAlchemy's query language. If you do not want to use SQLAlchemy, you can certainly use psycopg2 directly. Look into Pyramid cookbook - MongoDB and Pyramid or CouchDB and Pyramid for inspiration.
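A minimal sketch of using psycopg2 directly from a Pyramid view; the connection settings, route and query are placeholders, and a real app would manage connections per request or via a pool rather than connecting inside the view:

```python
import psycopg2
from pyramid.view import view_config

@view_config(route_name="users", renderer="json")
def list_users(request):
    conn = psycopg2.connect(
        host="localhost", dbname="mydb", user="me", password="secret"
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT id, name FROM users WHERE active = %s", (True,))
            rows = cur.fetchall()
    finally:
        conn.close()
    return [{"id": r[0], "name": r[1]} for r in rows]
```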
0
false
1
6,395
2019-11-10 23:16:41.710
Python 3.7.3 Inadvertently Installed on Mac OS 10.15.1 - Included in Xcode Developer Tools 11.2 Now?
I decided yesterday to do a clean install of Mac OS (as in, erase my entire disk and reinstall the OS). I am on a Macbook Air 2018. I did a clean install of Mac OS 10.15.1. I did this clean install due my previous Python environment being very messy. It was my hope that I could get everything reigned in and installed properly. I've started reinstalling my old applications, and took care to make sure nothing was installed in a weird location. However, when I started setting up VS Code, I noticed that my options for Python interpreters showed 4 options. They are as follows: Python 2.7.16 64-bit, located in /usr/bin/python Python 2.7.16 64-bit, located in /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python Python 3.7.3 64-bit, located in /user/bin/python Python 3.7.3 64-bit, located in /Library/Developer/CommandLineTools/usr/bin/python3 In terminal, if I enter where python python3 it returns /usr/bin/python /usr/bin/python3. How in the world did python3 get there? My only idea is that it now is included in the Xcode Developer Tools 11.2 package, as I did install that. I cannot find any documentation of this inclusion. Any ideas how this got here? More importantly, how do I remove it? I want to use Homebrew for all of my installation needs. Also, why does VS Code show 4 options? Thanks!
The command line tool to run the python 2.7 environment is at /usr/bin/python, but the framework and dependencies for it are in /System. This includes the Python.app bundle, which is just a wrapper for scripts that need to run using the Mac's UI environment. Although these files are separate executables, it's likely that they point to the same environment. Every MacOS has these. Catalina does indeed also include /usr/bin/python3 by default. The first time you run it, the OS will want to download Xcode or the Command line tools to install the 'complete' python3. So these pair are also the same environment. I don't think you can easily remove these, due to the security restrictions on system files in Catalina. Interestingly, Big Sur still comes with python2 !
1.2
true
1
6,396
2019-11-11 00:39:28.400
How to access MySQL database that is on another machine, located in a different locations (NOT LOCAL) with python
I am finished with my project and now I want to put it on my website where people could download it and use it. My project is connected to my MySQL and it works on my machine. On my machine, I can read, and modify my database with python. It obviously, will not work if a person from another country tries to access it. How can I make it so a person from another town, city, or country could access my database and be able to read it? I tried using SSH but I feel like it only works on a local network. I have not written a single line of code on this matter because I have no clue how to get started. I probably know how to read my database on a local network but I have no clue how to access it from anywhere else. Any help, tips, or solutions would be great and appreciated. Thank you!
If I'm understanding correctly, you want to run a MySQL server from your home PC and allow others to connect and access data? Well, you would need to make sure the correct port is forwarded in your router and firewall, default is TCP 3306. Then simply provide the user with your current IP address (could change). Determine the correct MySQL Server port being listened on. Allow port forwarding on the TCP protocol and the port you determined, default is 3306. Allow incoming connections on this port from software firewall if any. Provide the user with your current IP Address, Port, and Database name. If you set login credentials, make sure the user has this as well. That's it. The user should be able to connect with the IP Address, Port, Database Name, Username, and Password.
0
false
1
6,397
2019-11-12 04:54:01.377
Is there a dynamic scheduling system for better implementation of a subscription based payment system?
I was making a subscription payment system from scratch in python in Django. I am using celery-beat for a scheduled task with RabbitMQ as a queue broker. django_celery_beat uses DatabaseScheduler which is causing problems. Takes a long time to dispatch simple-task to the broker. I was using it to expire users. For some expiration tasks, it took around 60 secs - 150secs. But normally it used to take 100ms to 500ms. Another problem is that, while I re-schedule some task, while it is being written into the database it blocks the scheduler for some bizarre reason and multiple tasks are missed because of that. I have been looking into Apache Airflow because it is marketed as an industry-standard scheduling solution. But I don't think, it is applicable and feasible for my small project. If you have worked and played with a subscription payment system, can you advise me how to go forward with this?
I have a long, winding solution that I implemented for a similar project. First, I save the schedule as a model in the database. Next, I have a cron job that gets the entries that need to be run for that day. Then I schedule those jobs as normal Celery jobs by setting the ETA based on the time set in the schedule model. This way Celery just runs off the messages from Redis in my case. Try it if you don't get a direct answer.
0
false
1
6,398
2019-11-12 12:52:29.447
How do I customise the flask-user registration and login functions?
I want to customise the functions that process the results of completing the flask-user registration and login forms. I know how to customise the html forms themselves, but I want to change how flask-user performs the registration process. For example, I want to prevent the flask-user login and registration process from creating flash messages and I want registration to process a referral code. I understand how to add an _after_registration_hook to perform actions after the registration function has completed, but, this doesn't allow me to remove the flash messages that are created in the login and registration processes. My custom login and registration processes would build on the existing flask-user login and registration functions with functionality added or removed.
You seem to be asking about the flask-user package - however you tagged this with flask-security (which is a different package but offers similar functionality). I can answer for flask-security-too (my fork of the original flask-security) - if you are/want to use flask-user - it might be useful to change your tags. In a nutshell - for flask-security - you can turn off ALL flashes with a config variable. For registration - you can easily override the form and add a referral code - and validate/process that as part of form validation.
0
false
1
6,399
2019-11-12 13:47:52.050
How I can debug two-language program?
I use Python as a high-level wrapper and a loaded C++ kernel in the form of a binary library to perform calculations. I debug high level Python code in IDE Eclipse in the usual way, but how do I debug C++ code? Thank you in advance for your help.
Try using gdb's "attach <pid>" command (or the "gdb -p <pid>" command-line option) to attach to the Python process that has the C++ kernel library loaded.
0.386912
false
1
6,400
2019-11-13 08:29:13.933
Why my Python command doesn't work in Windows 10 CMD?
I have added the C:\Users\Admin\Anaconda3\python.exe path to my system environment variable PATH, but still when I run the python command it opens the Windows app store! Why does this happen and how do I fix it?
the PATH variable should contain C:\Users\Admin\Anaconda3 not C:\Users\Admin\Anaconda3\python.exe
1.2
true
1
6,401
2019-11-13 11:59:54.057
Automate downloading of certain csv files from a website
I am trying to automate downloading of certain csv files from a website. This is how I manually do it: I log in to the website. Click on the button export as csv. The file gets downloaded. The problem is the button does not have any link to it so I was not able to automate it using wget or requests.
You can use selenium in python. There is an option to click using "link text" or "partial link text". It is quite easy and efficient. driver.findElement(By.linkText("click here")).click() It kind of looks like this.
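The Python flavour of that call looks slightly different from the Java-style snippet quoted above; a rough sketch, where the URL, login steps and link text are placeholders:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")
# ... fill in the login form here ...

# click the export button by its (partial) link text, then let the download finish
driver.find_element(By.PARTIAL_LINK_TEXT, "export as csv").click()
```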
0
false
1
6,402
2019-11-13 12:45:22.860
Python does not work after installing fastai and boto3. Permission denied // Python was not found
Last thing I did was pip install boto3 and fastai through git bash yesterday. I can't imagine if anything else could have had any influence. I have been using python for a few months now, but today it stopped running. I opened my Sublime Text - and after running some simple code I got: "Python was not found but can be installed from the Microsoft Store". GIT bash: $ python --version bash: /c/Users/.../AppData/Local/Microsoft/WindowsApps/python: Permission denied But if I open up a file of python 3 in this link: C:\Users...\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Python 3.7 My Python works. I think I have to redirect my main python file from the first link directory to the second and have no clue how to do this, that my Git and Sublime would be able to pick on this.
So, I gave up and just installed Python from the recommended Microsoft Store link. So now I possibly have 4 Pythons with 2 different versions in 3 locations, but hey... it works :) Regarding a comment below my first question: when I run ls -l $(which python) in Git Bash, it gives: -rwxr-xr-x 1 ... 197121 97296 Mar 25 2019 /c/Users/.../AppData/Local/Programs/Python/Python37-32/python* (/.../ is just my user name). Yesterday I tried that as well; the output started the same, although I can't really remember the path or whether it was the same.
0
false
1
6,403
2019-11-13 13:05:07.793
Is there a way to change TCP settings for Django project?
I have been working on a project built with Django. When I ran the profiler due to the slowness of a page in the project, this was one line of the result: 10 0.503 0.050 0.503 0.050 {method 'recv_into' of '_socket.socket' objects} which says almost 99% of the elapsed time was spent in the method recv_into(). After some research, I learned the reason is Nagle's algorithm, which aims to send packets only when the buffer is full or there are no more packets to transmit. I know I have to disable this algorithm and use TCP_NODELAY, but I don't know how; also, it should only affect this Django project. Any help would be much appreciated.
Are you using cache settings in the settings.py file? Please check whether you have tcp_nodelay enabled there; if so, remove it, or try clearing the browser cache.
-0.101688
false
1
6,404
2019-11-13 14:01:57.823
How does this Fibonacci Lambda function work?
Am a beginner on Python (self studying) and got introduced to Lambda (nameless) function but I am unable to deduce the below expression for Fibonacci series (got from Google) but no explanation available online (Google) as to how this is evaluated (step by step). Having a lot of brain power here, I thought somebody can help me with that. Can you help evaluate this step by step and explain ? lambda n: reduce(lambda x, _: x+[x[-1]+x[-2]],range(n-2), [0, 1]) Thanks in advance. (Thanks xnkr, for the suggestion on a reduce function explained and yes, am able to understand that and it was part of the self training I did but what I do not understand is how this works for lambda x, _ : x+[x[-1]+x[-2]],range(n-2), [0, 1]. It is not a question just about reduce but about the whole construct - there are two lambdas, one reduce and I do not know how the expression evaluates to. What does underscore stand for, how does it work, etc) Can somebody take the 2 minutes that can explain the whole construct here ?
Break it down piece by piece: lambda n: - defines a function that takes 1 argument (n); equivalent to an anonymous version of: def somefunc(n): reduce() - we'll come back to what it does later; as per docs, this is a function that operates on another function, an iterable, and optionally some initial value, in that order. These are: A) lambda x, _: - again, defines a function. This time, it's a function of two arguments, and the underscore as the identifier is just a convention to signal we're not gonna use it. B) X + [ <stuff> ] - prepend some list of stuff with the value of the first arg. We already know from the fact we're using reduce that the arg is some list. C) The <stuff> is x[-1] + x[-2] - meaning the list we're prepending our X to is, in this case, the sum of the last two items already in X, before we do anything to X in this iteration. range(n-2) is the iterable we're working on; so, a list of numbers from 1 to N-2. The -2 is here because the initial value (in 3) already has the first two numbers covered. Speaking of which, [0, 1] is our predefined first two starting values for X[-2], X[-1]. And now we're executing. reduce() takes the function from (1) and keeps applying it to each argument supplied by the range() in (2) and appending the values to a list initialized as [0, 1] in (3). So, we call I1: [0, 1] + lambda 0, [0, 1], then I2: I1 + lambda 1, I1, then I3: I2 + lambda 2, I2 and so on.
1.2
true
1
6,405
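To make the walkthrough above concrete, here is a minimal, runnable sketch of the same expression (Python 3, where reduce lives in functools); the output shown in the comment is what the accumulated list looks like after all iterations:

    from functools import reduce

    # outer lambda takes n; inner lambda takes (accumulator, ignored_index)
    fib = lambda n: reduce(lambda x, _: x + [x[-1] + x[-2]], range(n - 2), [0, 1])

    print(fib(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34] - the first 10 Fibonacci numbers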
2019-11-13 16:38:11.820
Printing top few lines of a large JSON file in Python
I have a JSON file whose size is about 5GB. I neither know how the JSON file is structured nor the names of the roots in the file. I'm not able to load the file on my local machine because of its size, so I'll be working on high-computation servers. I need to load the file in Python and print the first 'N' lines to understand the structure and proceed further with data extraction. Is there a way in which we can load and print the first few lines of JSON in Python?
You can use the head command to display the first N lines of the file, to get a sample of the JSON and see how it is structured, and then use this sample to work on your data extraction. A pure-Python alternative is sketched below. Best regards
-0.201295
false
1
6,406
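If you want to stay inside Python rather than the shell, a minimal sketch that prints only the first N lines without ever loading the whole 5GB file (the file name is a placeholder):

    from itertools import islice

    N = 20
    with open("data.json", "r") as f:      # "data.json" is a hypothetical path
        for line in islice(f, N):          # reads lazily, never loads the full file
            print(line.rstrip())
    # note: if the whole JSON happens to be on a single line, read a fixed number
    # of characters instead, e.g. print(open("data.json").read(5000))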
2019-11-14 00:50:56.010
get json data from host that requires headers={'user-agent', 'cookie', x-xsrf-token'}
There is a server that contains a JSON dataset that I need. I can manually use Chrome to log in to the URL and use the Chrome developer tools to read the request headers for said JSON data. I determined that the minimum required headers that should be sent to the JSON endpoint are ['cookie', 'x-xsrf-token', 'user-agent']. I don't know how I can get these values so that I can automate fetching this data. I would like to use the requests module to get the data. I tried using Selenium to navigate to the webpage that exposes these header values, but I cannot get said header values (not sure if Selenium supports this). Is there a way for me to use the requests module to inch towards getting these header values... by following the request header "bread crumbs", so to speak? Is there an alternative module that excels at this? To note, I have used Selenium to get the required datapoints successfully, but Selenium is resource heavy and prone to crash; using the requests module with header values greatly simplifies the workflow and makes my script reliable
Based on pguardiario's comment: session cookies and the csrf-token are provided by the host when a request is made against the origin URL. These values are needed to make subsequent requests against the endpoint that serves the JSON payload. By using requests.Session() against the origin URL, and then updating the headers when calling the session's get(url, headers=...), I was able to access the JSON data. A rough sketch of this flow is below.
1.2
true
1
6,407
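A rough sketch of that flow with the requests library. The cookie name 'XSRF-TOKEN', the URLs and the user-agent string are assumptions for illustration only - check the actual names in your browser's developer tools:

    import requests

    origin_url = "https://example.com/login-page"    # hypothetical origin URL
    api_url = "https://example.com/api/data"         # hypothetical JSON endpoint

    s = requests.Session()
    s.headers.update({"user-agent": "Mozilla/5.0"})  # placeholder UA string

    s.get(origin_url)                                # server sets session cookies on the Session
    token = s.cookies.get("XSRF-TOKEN")              # assumed cookie name holding the xsrf token

    r = s.get(api_url, headers={"x-xsrf-token": token})
    print(r.json())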
2019-11-15 04:01:44.763
Regex Match for Non Hyphenated Words - Python
I am trying to create a regex expression in Python for non-hyphenated words, but I am unable to figure out the right syntax. The requirements for the regex are: it should not contain hyphens AND it should contain at least 1 number. The expressions that I tried are: ^(?!.*-) This matches all non-hyphenated words, but I am not able to figure out how to additionally add the second condition. ^(?!.*-(?=/d{1,})) I tried using a double lookahead but I am not sure about the syntax to use for it. This matches ID101 but also matches STACKOVERFLOW. Sample words which should match: 1DRIVE , ID100 , W1RELESS Sample words which should not match: basically any non-numeric string (like STACK , OVERFLOW) or any hyphenated words (Test-11 , 24-hours) Additional info: I am using the library re, compiling the regex patterns, and using re.search for matching. Any assistance would be very helpful as I am new to regex matching and have been stuck on this for quite a few hours.
I came up with ^[^-]*\d[^-]*$ - so we need at LEAST one digit (\d). We need the rest of the string to contain anything BUT a - ([^-]). We can have an unlimited number of those characters, so [^-]* - but putting them together like [^-]*\d would fail on aaa3-, because the - comes after a valid match. Let's make sure no dashes can sneak in before or after our match: ^[^-]*\d$ --- unfortunately that means that aaa555D fails. So we actually need to add the first group again: ^[^-]*\d[^-]*$ --- which says start - any number of chars that aren't dashes - a digit - any number of chars that aren't dashes - end. Depending on style, we could also do ^([^-]*\d)+$ - since the order of digits and non-digits doesn't matter, we can have as many of those groups as we want. However, finally... this is how I would ACTUALLY solve this particular problem, since regexes may be powerful, but they tend to make the code harder to understand... if ("-" not in text) and re.search("\d", text): Both versions are compared below.
0.386912
false
1
6,408
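A quick, runnable check of both approaches from the answer above against the sample words from the question:

    import re

    pattern = re.compile(r"^[^-]*\d[^-]*$")   # no hyphens anywhere, at least one digit

    samples = ["1DRIVE", "ID100", "W1RELESS", "STACK", "OVERFLOW", "Test-11", "24-hours"]

    for text in samples:
        regex_ok = bool(pattern.search(text))
        simple_ok = ("-" not in text) and bool(re.search(r"\d", text))
        print(text, regex_ok, simple_ok)      # True only for the first three samples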
2019-11-15 07:07:23.633
How do I link to a specific page of a PDF document inside a cell in Excel?
I am writing Python code which writes a hyperlink into an Excel file. This hyperlink should open at a specific page in a PDF document. I am trying something like Worksheet.write_url('A1',"C:/Users/...../mypdf#page=3") but this doesn't work. Please let me know how this can be done.
Are you able to open the PDF file directly to a specific page even without xlsxwriter? I can not. From Adobe's official site: To target an HTML link to a specific page in a PDF file, add #page=[page number] to the end of the link's URL. For example, an HTML tag of the form <a href="http://www.example.com/myfile.pdf#page=4"> opens page 4 of a PDF file named myfile.pdf. Note: If you use UNC server locations (\\servername\folder) in a link, set the link to open to a set destination using the procedure in the following section. If you use URLs containing local hard drive addresses (c:\folder), you cannot link to page numbers or set destinations.
0.386912
false
1
6,409
2019-11-16 04:57:26.760
Using import in Python
So I’m a new programmer and starting to use Python 3, and I see some videos of people teaching the language and use “import”. My question is how they know what to import and where you can see all the things you can import. I used import math in one example that I followed along with, but I see other videos of people using import JSON or import random, and I’m curious how they find what they can import and how they know what it will do.
In all programming languages, whenever you actually need a library, you import it. For example, if you need to generate a random number, search for that functionality in your chosen programming language, find the appropriate library, and import it into your code. A small sketch of how to explore a module once it is imported follows below.
0.135221
false
1
6,410
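For discovering what a module offers once you have imported it, the built-in dir() and help() functions are a quick way to explore; a tiny sketch with the random module:

    import random

    print(dir(random))            # lists the names the module exposes (randint, choice, ...)
    help(random.randint)          # prints the documentation for one function
    print(random.randint(1, 6))   # e.g. simulate a die roll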
2019-11-16 05:22:32.853
Spyder Editor - How to Disable Auto-Closing Brackets
Does anyone know how to make Spyder stop automatically inserting closing brackets? It often results in complete mess when you have multiple levels of different brackets. I had a look around and could only find posts about auto-closing quotes, but I'm not really interested in these. But those brackets are making me slightly miserable. I had a look in Preferences but the closest I could find is 'Automatic code completion'. But I certainly don't want all of it off especially when working with classes.
In Spyder 4 and Spyder 5 go to: Tools - Preferences - Editor - Source code and deselect the following items: Automatic insertion of parentheses, braces and brackets; Automatic insertion of closing quotes (since it's the same nuisance as with brackets)
1.2
true
1
6,411
2019-11-17 00:01:23.980
Get all keys in a hash-table that satisfy certain arithmetic property
Let's say I have a hash-table where each key is a tuple of 4 integers (A, B, C, D), the integers represent quantities of certain attributes, and the corresponding value is a tuple of gears that satisfy (A, B, C, D). I want to write a program that does the following: given any attribute tuple (x, y, z, w), find all the keys satisfying (|A - x| + |B - y| + |C - z| + |D - w|) / 4 <= i where i is a user-defined threshold; return the values of these keys if they exist and do some further calculation. (|A - x| means the absolute value of A - x) In my experience, this kind of thing can be done better with Answer Set Programming, Haskell, Prolog and other logic programming languages, but I'm forced to use Python since this is a Python project... I can hard-code a particular "i", but I really have no idea how to do this for arbitrary integers. Please tell me how I can do this in pure Python, thank you very much!!!!
Just write a function that loops over all keys in the table and checks them one by one against the condition. The function will take the table, the query tuple and i as arguments. A minimal sketch is below.
1.2
true
1
6,412
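A minimal sketch of such a function, following the condition stated in the question (the variable names and the sample table are placeholders):

    def find_matches(gear_table, query, i):
        """Return the values whose 4-integer key is within average distance i of query."""
        x, y, z, w = query
        results = []
        for (a, b, c, d), gears in gear_table.items():
            if (abs(a - x) + abs(b - y) + abs(c - z) + abs(d - w)) / 4 <= i:
                results.append(gears)
        return results

    # usage sketch
    table = {(1, 2, 3, 4): ("gear_a",), (10, 10, 10, 10): ("gear_b",)}
    print(find_matches(table, (2, 2, 3, 4), 1))   # -> [('gear_a',)]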
2019-11-18 00:40:56.233
How to create a different workflow depending on result of last run?
I am trying to accomplish the following task using Airflow. I have an address and I want to run 3 different tasks taskA, taskB, taskC. Each task returns True if the address was detected. Store the times when each of the functions detected the address. I want to accomplish the below logic. Run all three tasks to start off with. If any of them return True, store the current time. Wait for 1 minute and rerun only the tasks that did not return True. If all have returned True end the job. I am not sure how I can accomplish selectively running only those tasks that returned False from the last run. I have so far looked at the BranchPythonOperator but I still haven't been able to accomplish the desired result.
You can get the last run's status value from the Airflow metadata database.
0
false
1
6,413
2019-11-18 18:32:02.663
How to allow my computer to download .py files from an email
I am unable to download a .py file from an email. I get the error "file not supported." The file was saved from a Jupyter-notebook script. I have Python 3.6.6 and Jupyter downloaded on my Windows 10 laptop and tried to access the file through Chrome and through my computer's email app, but this didn't resolve the problem. Any ideas on how to make the file compatible with my computer? EDIT: I had to have the .ipynb file sent rather than the .py file.
Generally, email providers block anything that could possibly execute on the client's machine. The best options are to send the file via email renamed to .py.txt, or to share it through a cloud drive.
0
false
1
6,414
2019-11-19 12:15:40.287
Use of views in jam.py framework
for an academic project, I am currently using the python framework jam.py 5.4.83 to develop a back office for a new company. I would like to use views instead of tables for reporting but I don't find how to do it, I can only import data from tables. So if someone already used this framework, I would be very thankful. Regards, Yoan
The use of database views is not supported in Jam.py. However, you can import tables as read-only if they are used for reporting. Then you can build Reports as you would. Good luck.
0
false
1
6,415
2019-11-19 15:26:28.493
How to find the intersecting area of two sub images using OpenCv?
Let's say there are two sub images of a large image. I am trying to detect the overlapping area of two sub images. I know that template matching can help to find the templates. But i'm not sure how to find the intersected area and remove them in either one of the sub images. Please help me out.
matchTemplate returns the most probable position of a template inside a picture. You could do the following steps: find the (x, y) origin, width and height of each picture inside the larger one; save them as rectangles with that data (cv::Rect r1, cv::Rect r2); using the & operator, find the overlap area between both rectangles (r1 & r2). A rough Python equivalent is sketched below.
0.201295
false
1
6,416
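A rough Python sketch of those steps. It assumes you locate each sub-image in the large image with cv2.matchTemplate and then intersect the two rectangles by hand (in Python there is no cv::Rect & operator, so the intersection is computed explicitly); the file names are placeholders:

    import cv2

    def locate(big, sub):
        """Return (x, y, w, h) of the best match of sub inside big."""
        res = cv2.matchTemplate(big, sub, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(res)
        h, w = sub.shape[:2]
        return (max_loc[0], max_loc[1], w, h)

    def intersect(r1, r2):
        """Overlap of two (x, y, w, h) rectangles, or None if they do not overlap."""
        x = max(r1[0], r2[0]); y = max(r1[1], r2[1])
        x2 = min(r1[0] + r1[2], r2[0] + r2[2]); y2 = min(r1[1] + r1[3], r2[1] + r2[3])
        if x2 <= x or y2 <= y:
            return None
        return (x, y, x2 - x, y2 - y)

    big = cv2.imread("large.png")                    # hypothetical file names
    r1 = locate(big, cv2.imread("sub1.png"))
    r2 = locate(big, cv2.imread("sub2.png"))
    print(intersect(r1, r2))                         # overlap in large-image coordinates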
2019-11-19 19:57:43.280
How to scale numpy matrix in Python?
I have this numpy matrix: x = np.random.randn(700,2) What I wanna do is scale the values of the first column between that range: 1.5 and 11 and the values of the second column between -0.5 and 5.0. Does anyone have an idea how I could achieve this? Thanks in advance
1) Subtract each column's minimum from itself. 2) For each column of the result, divide by its maximum. 3) For column 0 of that result, multiply by 11 - 1.5. 4) For column 1 of that result, multiply by 5 - (-0.5). 5) Add 1.5 to column zero of that result. 6) Add -0.5 to column one of that result. You could probably combine some of those steps, as in the sketch below.
0.386912
false
1
6,417
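A compact sketch of those steps using NumPy broadcasting; the target ranges below are the ones from the question:

    import numpy as np

    x = np.random.randn(700, 2)

    lo = np.array([1.5, -0.5])    # target minimum of column 0 and column 1
    hi = np.array([11.0, 5.0])    # target maximum of column 0 and column 1

    col_min = x.min(axis=0)
    col_max = x.max(axis=0)
    scaled = (x - col_min) / (col_max - col_min) * (hi - lo) + lo

    print(scaled.min(axis=0), scaled.max(axis=0))   # ~[1.5 -0.5] and ~[11. 5.]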
2019-11-20 13:02:15.587
zsh: command not found: import
I'm using MAC OS Catalina Version 10.15.1 and I'm working on a python project. Every time I use the command "import OS" on the command line Version 2.10 (433), I get this message: zsh: command not found: import. I looked up and followed many of the solutions listed for this problem but none of them have worked. The command worked prior to upgrading my MAC OS. Any suggestion on how to fix it?
Don't capitalize it. import os
0
false
2
6,418
2019-11-20 13:02:15.587
zsh: command not found: import
I'm using MAC OS Catalina Version 10.15.1 and I'm working on a python project. Every time I use the command "import OS" on the command line Version 2.10 (433), I get this message: zsh: command not found: import. I looked up and followed many of the solutions listed for this problem but none of them have worked. The command worked prior to upgrading my MAC OS. Any suggestion on how to fix it?
The file is being interpreted as zsh, not Python. I suggest you add this as the first line: #!/usr/bin/env python
0.201295
false
2
6,418
2019-11-20 16:21:07.560
Starting conda prompt from cmd
I want to start the conda Prompt from cmd, because I want to use the promt as a terminal in Atom.io. There is no Conda.exe and the path to conda uses cmd to jump into the prompt. But how do I start it inside of cmd?
I guess what you want is to switch to the Anaconda shell from cmd. You can find the address of your Anaconda installation and run the following in your cmd: %windir%\System32\cmd.exe "/K" "Address"\anaconda3 Or, you can find your Anaconda Prompt shortcut, right-click on it, and open its Properties window. In the Properties window, find Target, then copy the whole thing in Target and paste it into your cmd.
0.201295
false
1
6,419
2019-11-20 19:03:29.667
Prevent internal POST methods to be called by third parties
I'm worried about the security of my web app, I'm using Django and sometimes I use AJAX to call a Django url that will execute code and then return an HttpResponse with a message according the result, the user never notice this as it's happening in background. However, if the url I'm calling with AJAX is, for example, "runcode/", and the user somehow track this and try to send a request to my domain with that url (something like "www.example.com/runcode/"), it will not run the code as Django expects the csrf token to be send too, so here goes the question. It is possible that the user can obtain the csrf token and send the POST?, I feel the answer for that will be "yes", so anyone can help me with a hint on how to deny these calls if they are made without the intended objective?
This is not specific to Django; the behavior is common to all frameworks. You can only apply 2 solutions: 1) Apply CORS and allow only your domain, to block other domains from accessing data from your API response; however, this will not be effective if a user calls your API endpoint directly. 2) As lain said in a comment, if the data is sensitive or personal to the user, add authentication to the API. Thanks
0
false
1
6,420
2019-11-21 15:43:05.370
Looping through webelements with selenium Python
I am currently trying to automate a process using Selenium with python, but I have hit a roadblock with it. The list is part of a list which is under a tree. I have identified the base of the tree with the following xpath item = driver.find_element_by_xpath("//*[@id='filter']/ul/li[1]//ul//li") items = item.find_elements_by_tag_name("li") I am trying to Loop through the "items" section but need and click on anything with an "input" tag for k in items: WebDriverWait(driver, 10).until(EC.element_to_be_clickable((k.find_element(By.TAG_NAME, "input")))).click() When execute the above I get the following error: "TypeError: find_element() argument after * must be an iterable, not WebElement" For some reason .click() will not work if I use something like the below. k.find_element_by_tag_name("input").click() it only works if i use the webdriverwait. I have had to use the web driver wait method anytime i needed to click something on the page. My question is: What is the syntax to replicate items = item.find_elements_by_tag_name("li") for WebDriverWait(driver, 10).until(EC.element_to_be_clickable((k.find_element(By.TAG_NAME, "input")))).click() i.e how do I use a base path and append to the using the private methods find_elements(By.TAG_NAME) Thanks in advance
I have managed to find a workaround and get Selenium to do what I need. I had to call JavaScript execution directly, so instead of trying to get WebDriverWait(driver, 10).until(EC.element_to_be_clickable((k.find_element(By.TAG_NAME, "input")))).click() to work, I just used driver.execute_script("arguments[0].click();", k.find_element_by_tag_name("input")) It's doing exactly what I needed it to do. The full loop is sketched below.
1.2
true
1
6,421
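Putting the workaround from the answer back into the original loop gives roughly this; it assumes driver is the WebDriver instance already created in the question's script, and uses the same Selenium 3-style element methods and XPath as the question:

    item = driver.find_element_by_xpath("//*[@id='filter']/ul/li[1]//ul//li")
    items = item.find_elements_by_tag_name("li")

    for k in items:
        # click via JavaScript instead of WebDriverWait/element_to_be_clickable
        checkbox = k.find_element_by_tag_name("input")
        driver.execute_script("arguments[0].click();", checkbox)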
2019-11-23 19:16:37.407
How can I update the version of SQLite in my Flask/SQLAlchemy App?
I wish to use the latest version of SQLite3 (3.30.1) because of its new capability to handle SQL 'ORDER BY ... ASC NULLS LAST' syntax as generated by the SQLAlchemy nullslast() function. My application folder env\Scripts contains the existing (old) version of sqlite3.dll (3.24), however when I replace it, there is no effect. In fact, if I rename that DLL, the application still works fine with DB accesses. So, how do I update the SQLite version for an application? My environment: Windows 10, 64-bit (I downloaded a 64-bit SQlite3 DLL version). I am running with pyCharm, using a virtual env.
I have found that the applicable sqlite3.dll is determined first by a Windows OS defined lookup. It first goes through the PATH variable, finding and choosing the first version it finds in any of those paths. In this case, probably true for all pyCharm/VirtualEnv setups, a version found in my user AppData\Local\Programs\Python\Python37\DLLs folder was selected. When I moved that out of the way, it was able to find the version in my env\Scripts folder, so that the upgraded DLL was used, and the sQLAlchemy nullslast() function did its work.
1.2
true
1
6,422
2019-11-25 12:59:29.660
Python and Telethon: how to handle sw distribution
I developed a program to interact between Telegram and other 3rd party Software. It's written in Python and I used the Telethon library. Everything works fine, but since it uses my personal configuration including API ID, API hash, phone number and username, I would like to know how to handle all of this if I wanted to distribute the software to other people. Of course they can't use my data, so should they login into Telegram development page and get all the info? Or, is there a more user-friendly way to do it?
Since the API ID and the API Hash in Telegram are supposed to be distributed with your client, all you need to do is prompt the user for their phone number. You could do this using a GUI library (for example PySide2's QInputDialog) or, if it is a command-line application, using input(). Keep in mind that the user will also need a way to enter the code they receive from Telegram and their 2FA password if set. A minimal sketch is below.
1.2
true
1
6,423
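A small sketch of that idea with Telethon: the api_id/api_hash you registered are shipped with the program, and only the phone number (plus the login code / 2FA password, which Telethon prompts for itself) comes from the end user. Treat the exact start() arguments as an assumption to verify against the Telethon docs for your version:

    from telethon.sync import TelegramClient

    API_ID = 123456                  # your app's api_id (shipped with the program)
    API_HASH = "0123456789abcdef"    # your app's api_hash (shipped with the program)

    phone = input("Enter your phone number: ")   # the only thing the end user supplies

    client = TelegramClient("user_session", API_ID, API_HASH)
    client.start(phone=phone)        # Telethon asks for the login code / 2FA password itself
    print(client.get_me().username)
    client.disconnect()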
2019-11-25 18:14:02.303
Pyarmor Pack Python File Check Restrict Mode Failed
So I am trying to pack my Python script with pyarmor pack; however, when I pack the script it does not work, it throws "check restrict mode failed". If I obfuscate the script normally with pyarmor obfuscate instead of pack, it works fine and is obfuscated fine; this version runs with no problem. I am wondering how I can get pack to work, as I want my Python file in an exe. I have tried to compile the obfuscated script with pyinstaller, but this does not work either. Wondering what else I can try?
I had this problem, fixed by adding --restrict=0 For example: pyarmor obfuscate --restrict=0 app.py
0.386912
false
1
6,424
2019-11-26 14:11:58.723
how to find cosine similarity in a pre-computed matrix with a new vector?
I have a dataframe with 5000 items (rows) and 2048 features (columns), so the shape of my dataframe is (5000, 2048). When I calculate the cosine matrix using pairwise distance in sklearn, I get a (5000, 5000) matrix, where I can compare each item with every other. But now, if I have a new vector of shape (1, 2048), how can I find the cosine similarity of this item with my earlier dataframe, using the (5000, 5000) cosine matrix which I have already calculated? EDIT PS: I can append this new vector to my dataframe and calculate the cosine similarity again, but for a large amount of data that gets slow. Or is there any other fast and accurate distance metric?
Since cosine similarity is symmetric, you can compute the similarity measure between the new sample (1, 2048) and the old matrix (5000, 2048); this gives you a vector of shape (5000, 1). You can append this vector along the column dimension of the pre-computed cosine matrix, making it (5000, 5001). Since you also know the cosine similarity of the new sample with itself, you can append that value to the vector to form a row of size (1, 5001) and append it along the row dimension, which makes the matrix (5001, 5001). A sketch with scikit-learn is below.
0
false
2
6,425
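A sketch of that update with scikit-learn and NumPy; X stands in for your existing (5000, 2048) data and sim for the precomputed (5000, 5000) matrix:

    import numpy as np
    from sklearn.metrics.pairwise import cosine_similarity

    X = np.random.rand(5000, 2048)           # stand-in for your existing data
    sim = cosine_similarity(X)               # the (5000, 5000) matrix you already have

    new = np.random.rand(1, 2048)            # the new item
    col = cosine_similarity(X, new)          # (5000, 1): new item vs every old item

    # grow the matrix to (5001, 5001): add the new column, then the new row
    sim = np.hstack([sim, col])
    new_row = np.hstack([col.T, [[1.0]]])    # similarity of the new item with itself is 1
    sim = np.vstack([sim, new_row])
    print(sim.shape)                         # (5001, 5001)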
2019-11-26 14:11:58.723
how to find cosine similarity in a pre-computed matrix with a new vector?
I have a dataframe with 5000 items (rows) and 2048 features (columns), so the shape of my dataframe is (5000, 2048). When I calculate the cosine matrix using pairwise distance in sklearn, I get a (5000, 5000) matrix, where I can compare each item with every other. But now, if I have a new vector of shape (1, 2048), how can I find the cosine similarity of this item with my earlier dataframe, using the (5000, 5000) cosine matrix which I have already calculated? EDIT PS: I can append this new vector to my dataframe and calculate the cosine similarity again, but for a large amount of data that gets slow. Or is there any other fast and accurate distance metric?
The initial (5000,5000) matrix encodes the similarity values of all your 5000 items in pairs (i.e. symmetric matrix). To have the similarities in case of a new item, concatenate and make a (5001, 2048) matrix and then estimate similarity again to get (5001,5001) In other words, you can not directly use the (5000,5000) precomputed matrix to get the similarity with the new (1,2048) vector.
0
false
2
6,425
2019-11-27 09:52:59.383
SQLITE3 / Python - Database disk image malformed but integrity_check ok
My actual problem is that python sqlite3 module throws database disk image malformed. Now there must be a million possible reasons for that. However, I can provide a number of clues: I am using python multiprocessing to spawn a number of workers that all read (not write) from this DB The problem definitely has to do with multiple processes accessing the DB, which fails on the remote setup but not on the local one. If I use only one worker on the remote setup, it works The same 6GB database works perfectly well on my local machine. I copied it with git and later again with scp to remote. There the same script with the copy of the original DB gives the error Now if I do PRAGMA integrity_check on the remote, it returns ok after a while - even after the problem occurred Here are the versions (OS are both Ubuntu): local: sqlite3.version >>> 2.6.0, sqlite3.sqlite_version >>> 3.22.0 remote: sqlite3.version >>> 2.6.0, sqlite3.sqlite_version >>> 3.28.0 Do you have some ideas how to allow for save "parallel" SELECT?
The problem was the following (and it had happened to me before): when using multiprocessing with sqlite3, make sure to create a separate connection for each worker! Apparently this causes problems with some setups and sometimes doesn't. A minimal sketch of the fix is below.
0
false
1
6,426
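A minimal sketch of the fix: open the connection inside the worker function, so each process in the pool gets its own handle (the file name, table and query are placeholders):

    import sqlite3
    from multiprocessing import Pool

    DB_PATH = "big.db"   # hypothetical path

    def worker(arg):
        # a fresh connection per process/call - never share one connection across processes
        con = sqlite3.connect(DB_PATH)
        try:
            cur = con.execute("SELECT count(*) FROM some_table WHERE col = ?", (arg,))
            return cur.fetchone()[0]
        finally:
            con.close()

    if __name__ == "__main__":
        with Pool(4) as pool:
            print(pool.map(worker, range(10)))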
2019-11-28 11:25:12.167
Fail build if coverage lowers
I have GitHub Actions that build and test my Python application. I am also using pytest-cov to generate a code coverage report. This report is being uploaded to codecov.io. I know that codecov.io can't fail your build if the coverage lowers, so how do I go about with GitHub Actions to fail the build if the coverage drops? Do I have to check the previous values and compare with the new "manually" (having to write a script)? Or is there an existing solution for this?
There is nothing built-in; instead you should use one of the many integrations, like SonarQube, if you don't want to write a custom script.
0
false
1
6,427
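If a fixed minimum threshold (rather than "lower than the previous run") is acceptable, pytest-cov itself can fail the step; note this is a simpler policy than a relative drop check, and the package name and percentage below are placeholders:

    pytest --cov=your_package --cov-fail-under=80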
2019-12-01 04:16:36.060
what's command to list all the virtual environments in venv?
I know in conda I can use conda env list to get a list of all conda virtual environments, what's the corresponding command in python venv that can list all the virtual environments in a given venv? also, is there any way I can print/check the directory of current venv? somehow I have many projects that have same name .venv for their virtual environment and I'd like to find a way to verify which venv I'm in. Thanks
Virtual environments are simple a set of files in a directory on your system. You can find them the same way you would find images or documents with a certain name. For example, if you are using Linux or macOS, you could run find / | grep bin/activate in terminal. Not too sure about Windows but I suspect you can search for something similar in Windows Explorer.
0
false
2
6,428
2019-12-01 04:16:36.060
what's command to list all the virtual environments in venv?
I know in conda I can use conda env list to get a list of all conda virtual environments, what's the corresponding command in python venv that can list all the virtual environments in a given venv? also, is there any way I can print/check the directory of current venv? somehow I have many projects that have same name .venv for their virtual environment and I'd like to find a way to verify which venv I'm in. Thanks
I'm relatively new to python venv as well. I have found that it helps if you created your virtual environment with python -m venv <yourvenvname> inside a project folder. I'm using Windows and cmd. For example, say you have a Dash folder located at C:\Dash; when you create a venv called testenv with python -m venv testenv, you can activate the virtual environment by entering C:\Dash\testenv\Scripts\activate, and then deactivate it by just typing deactivate. If you want to list the venvs that you have, go to the C:\Dash folder and type dir in cmd; it will list the virtual envs you have, similar to conda env list. If you want to delete a virtual env, simply do rm -rf testenv. You can list the packages installed within that venv by doing pip freeze. I hope this helps; please correct me if I'm wrong.
0.201295
false
2
6,428
2019-12-02 14:13:17.757
Localization with RPLider (Python)
we are currently messing around with an Slamtec RPLidar A1. We have a robot with the lidar mounted on it. Our aim is to retrieve a x and y position in the room. (it's a closed room, could have more than 4 corners but the whole room should be recognized once (it does not matter where the RPlidar stands). Anyway the floormap is not given. We got so far a x and y position with BreezySLAM but we recognized, wherever the RPlidar stands, it always sees itself as center, so we do not really know how to retrieve correct x and y from this information. We are new to this topic and maybe someone can give us a good hint or link to find a simple solution. PS: We are not intending to track the movement of the robot.
Any sensor sees itself at the center of the environment. The idea of recording a map is a good one. If not, you can presume that any of the corners is your zero point; if your room is not square, you can measure the length of the walls and narrow it down to 2 candidate points. Unfortunately, if you don't have any additional markers in the environment and you can't create a map before actual use, I'm afraid there is no way for the robot to correctly understand where the desired (0,0) point is.
0
false
1
6,429
2019-12-02 17:35:56.597
A function possible inputs in PYTHON
How can I quickly check what are the possible inputs to a specific function? For example, I want to plot a histogram for a data frame: df.hist(). I know I can change the bin size, so I know probably there is a way to give the desired bin size as an input to the hist() function. If instead of bins = 10 I use df.hist(bin = 10), Python obviously gives me an error and says hist does not have property bin. I wonder how I can quickly check what are the possible inputs to a function.
Since your question tags contain jupyter notebook, I am assuming you are working in it. In Jupyter Notebook 2.0, pressing Shift+Tab inside the call will show you the function's arguments.
0.296905
false
1
6,430
2019-12-03 02:24:09.220
How to send data from Python script to JavaScript and vice-versa?
I am trying to make a calculator (with matrix calculation also). I want to make interface in JavaScript and calculation stuff in Python. But I don't know how to send parameters from python to JavaScript and from JavaScript to python. Edit: I want to send data via JSON (if possible).
You would essentially have to set both of them up as APIs and access them via endpoints. For JavaScript, you can use Node to set up your API endpoint, and for Python use Flask. A minimal Flask sketch is below.
0.201295
false
1
6,431
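A minimal sketch of the Python side with Flask: the endpoint accepts JSON, does the calculation, and returns JSON that the JavaScript interface can fetch with a POST request. The route name and payload shape are made up for illustration:

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/multiply", methods=["POST"])
    def multiply():
        data = request.get_json()   # e.g. {"a": [[1, 2], [3, 4]], "b": [[5, 6], [7, 8]]}
        a, b = data["a"], data["b"]
        # naive matrix multiplication, just to have something to return
        result = [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]
        return jsonify(result=result)

    if __name__ == "__main__":
        app.run(port=5000)

From the JavaScript side you would POST the JSON body to http://localhost:5000/multiply and read the JSON response.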
2019-12-03 05:29:29.123
Is there any operator in Python to check and compare the type and value?
I know that other languages like Javascript have a == and === operator also they have a != and !== operator does Python also has a === and !== (ie. a single operator that checks the type and compares the value at the same time like === operator) and if not how can we implement it.
No, and you can't really implement it yourself either. You can check the type of an object with type, but if you just write a function that checks type(x) is type(y) and x == y, then you get results like [1] and [1.0] showing up as equivalent. While that may fulfill the requirements you stated, I've never seen a case where this wasn't an oversight in the requirements. You can try to implement your own deep type-checking comparison, but that requires you to know how to dig into every type you might have to deal with to perform the comparison. That can be done for the built-in container types, but there's no way to make it general. As an aside, is looks vaguely like what you want if you don't know what is does, but it's actually something entirely different. is checks object identity, not type and value, leading to results like x = 1000; x + 1 is not 1001.
0.081452
false
1
6,432
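A short sketch of the points in the answer above: a naive "strict equality" helper, the container case where it behaves unexpectedly, and the is pitfall:

    def strict_eq(x, y):
        # naive "===": same type and equal value
        return type(x) is type(y) and x == y

    print(strict_eq(1, 1.0))        # False (int vs float)
    print(strict_eq([1], [1.0]))    # True - both are lists, and [1] == [1.0] inside
    print(strict_eq(True, 1))       # False (bool vs int), even though True == 1

    a = 1000 + 1
    b = 1001
    print(a is b)                   # typically False: "is" compares identity, not value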
2019-12-04 11:32:33.307
Using python and spacy text summarization
Basically i am trying to do text summarize using spacy and nltk in python. Now i want to summarize the normal 6-7 lines text and show the summarized text on the localhost:xxxx so whenever i run that python file it will show on the localhost. Can anyone tell is it possible or not and if it is possible how to do this. Since there would be no databse involved.
You have to create a RESTful API using Flask or Django with some UI elements and call your model from it. Also, you can use displaCy (spaCy's built-in visualizer) directly on your system.
1.2
true
1
6,433
2019-12-05 06:08:12.167
Find the maximum result after collapsing an array with subtractions
Given an array of integers, I need to reduce it to a single number by repeatedly replacing any two numbers with their difference, to produce the maximum possible result. Example1 - If I have array of [0,-1,-1,-1] then performing (0-(-1)) then (1-(-1)) and then (2-(-1)) will give 3 as maximum possible output Example2- [3,2,1,1] we can get maximum output as 5 { first (1-1) then (0-2) then (3-(-2)} Can someone tell me how to solve this question?
The other answers are fine, but here's another way to think about it: if you expand the result into individual terms, you want all the positive numbers to end up as additive terms, and all the negative numbers to end up as subtractive terms. If you have both signs available, then this is easy: subtract all but one of the positive numbers from a negative number, then subtract all of the negative numbers from the remaining positive number. If all your numbers have the same sign, then pick the one with the smallest absolute value and treat it as having the opposite sign in the above procedure. That works out to: if you have only negative numbers, subtract them all from the least negative one; or if you have only positive numbers, subtract all but one from the smallest, and then subtract the result from the remaining one. A small implementation of this rule is sketched below.
0
false
1
6,434
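A small sketch of the rule from the answer above, using the two examples from the question as a sanity check:

    def max_after_differences(nums):
        if len(nums) == 1:
            return nums[0]
        total = sum(abs(n) for n in nums)
        if all(n > 0 for n in nums) or all(n < 0 for n in nums):
            # all terms share a sign: the smallest magnitude must end up on the wrong side
            return total - 2 * min(abs(n) for n in nums)
        return total   # both signs (or a zero) available: every magnitude can be added

    print(max_after_differences([0, -1, -1, -1]))   # 3
    print(max_after_differences([3, 2, 1, 1]))      # 5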
2019-12-05 17:41:59.977
Combining logistic and continuous regression with scikit-learn
In my dataset X I have two continuous variables a, b and two boolean variables c, d, making a total of 4 columns. I have a multidimensional target y consisting of two continuous variables A, B and one boolean variable C. I would like to train a model on the columns of X to predict the columns of y. However, having tried LinearRegression on X it didn't perform so well (my variables vary several orders of magnitude and I have to apply suitable transforms to get the logarithms, I won't go into too much detail here). I think I need to use LogisticRegression on the boolean columns. What I'd really like to do is combine both LinearRegression on the continuous variables and LogisticRegression on the boolean variables into a single pipeline. Note that all the columns of y depend on all the columns of X, so I can't simply train the continuous and boolean variables independently. Is this even possible, and if so how do I do it?
If your target data Y has multiple columns you need to use a multi-task learning approach. Scikit-learn contains some multi-task learning algorithms for regression, like multi-task elastic-net, but you cannot combine logistic regression with linear regression because these algorithms optimize different loss functions. Also, you may try neural networks for your problem.
0
false
2
6,435
2019-12-05 17:41:59.977
Combining logistic and continuous regression with scikit-learn
In my dataset X I have two continuous variables a, b and two boolean variables c, d, making a total of 4 columns. I have a multidimensional target y consisting of two continuous variables A, B and one boolean variable C. I would like to train a model on the columns of X to predict the columns of y. However, having tried LinearRegression on X it didn't perform so well (my variables vary several orders of magnitude and I have to apply suitable transforms to get the logarithms, I won't go into too much detail here). I think I need to use LogisticRegression on the boolean columns. What I'd really like to do is combine both LinearRegression on the continuous variables and LogisticRegression on the boolean variables into a single pipeline. Note that all the columns of y depend on all the columns of X, so I can't simply train the continuous and boolean variables independently. Is this even possible, and if so how do I do it?
What I understand you want to do is to train a single model that predicts both a continuous variable and a class. You would need to combine both losses into one single loss to be able to do that, which I don't think is possible in scikit-learn. I suggest you use a deep learning framework (TensorFlow, PyTorch, etc.) to implement your own model with the required properties, which would be more flexible. In addition, you can also tinker with solving the above problem using neural networks, which might improve your results.
0
false
2
6,435
2019-12-06 14:08:01.030
How to build whl package for pandas?
Hi I have a built up Python 2.7 environment with Ubuntu 19.10. I would like to build a whl package for pandas. I pip installed the pandas but do not know how to pack it into whl package. May I ask what I should do to pack it. Thanks
You cannot pack back an installed wheel. Either you download a ready-made wheel with pip download or build from sources: python setup.py bdist_wheel (need to download the sources first).
1.2
true
1
6,436
2019-12-07 11:29:54.817
Walls logic in Pygame
I'm making a game with Pygame, and now I stuck on how to process collision between player and wall. This is 2D RPG with cells, where some of them are walls. You look on world from top, like in Pacman. So, I know that i can get list of collisions by pygame.spritecollide() and it will return me list of objects that player collides. I can get "collide rectangle" by player.rect.clip(wall.rect), but how I can get player back from the wall? So, I had many ideas. The first was push player back in opposite direction, but if player goes, as example, both right and bottom directions and collide with vertical wall right of itself, player stucks, because it is needed to push only left, but not up. The second idea was implement diagonally moving like one left and one bottom. But in this way we don't now, how move first: left or bottom, and order becomes the most important factor. So, I don't know what algorithm I should use.
If you know the location of the centre of the cell and the location of the player you can calculate the x distance and the y distance from the wall at that point in time. Would it be possible at that point to take the absolute value of each distance and then take the largest value as the direction to push the player in. e.g. The player collides with the right of the wall so the distance from the centre of the wall in the y direction should be less than the distance in x. Therefore you know that the player collided with the left or the right of the wall and not the top, this means the push should be to the right or the left. If the player's movement is stored as in the form [x, y] then knowing whether to push left or right isn't important since flipping the direction of movement in the x axis gives the correct result. The push should therefore be in the x direction in this example e.g. player.vel_x = -player.vel_x. This would leave the movement in the y axis unchanged so hopefully wouldn't result in the problem you mentioned. Does that help?
1.2
true
1
6,437
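A rough sketch of that comparison inside the collision handling; player.rect, wall.rect and the velocity attribute names are assumptions about how the game objects are stored, so adapt them to your own classes:

    def resolve_collision(player, wall):
        dx = player.rect.centerx - wall.rect.centerx
        dy = player.rect.centery - wall.rect.centery
        if abs(dx) > abs(dy):
            # hit the left/right side of the wall: push back along x
            player.vel_x = -player.vel_x
        else:
            # hit the top/bottom of the wall: push back along y
            player.vel_y = -player.vel_y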
2019-12-07 17:17:25.127
How to load values related to selected multiple options in Django
Thank you all for always willing to help. I have a Django app with countries and state choices fields. However, I have no idea whatsoever on how to load the related states for each country. What I mean here is, if I choose "Nigeria" in the list of countries, how can I make all Nigerian states to automatically load in the state choice field?
You have to create a many-to-many relation to a State table; then you can select multiple states per country. This feature is available in the django-countries or django-cities packages.
0
false
1
6,438
2019-12-08 17:44:54.277
Find how similar a text is - One Class Classifier (NLP)
I have a big dataset containing almost 0.5 billions of tweets. I'm doing some research about how firms are engaged in activism and so far, I have labelled tweets which can be clustered in an activism category according to the presence of certain hashtags within the tweets. Now, let's suppose firms are tweeting about an activism topic without inserting any hashtag in the tweet. My code won't categorized it and my idea was to run a SVM classifier with only one class. This lead to the following question: Is this solution data-scientifically feasible? Does exists any other one-class classifier? (Most important of all) Are there any other ways to find if a tweet is similar to the ensable of tweets containing activism hashtags? Thanks in advance for your help!
Sam H has a great answer about using your dataset as-is, but I would strongly recommend annotating data so you have a few hundred negative examples, which should take less than an hour. Depending on how broad your definition of "activism" is that should be plenty to make a good classifier using standard methods.
0.201295
false
1
6,439
2019-12-09 10:34:14.533
Existing Tensorflow model to use GPU
I made a TensorFlow model without using CUDA, but it is very slow. Fortunately, I gained access to a Linux server (Ubuntu 18.04.3 LTS), which has a Geforce 1060, also the necessary components are installed - I could test it, the CUDA acceleration is working. The tensorflow-gpu package is installed (only 1.14.0 is working due to my code) in my virtual environment. My code does not contain any CUDA-related snippets. I was assuming that if I run it in a pc with CUDA-enabled environment, it will automatically use it. I tried the with tf.device('/GPU:0'): then reorganizing my code below it, didn't work. I got a strange error, which said only XLA_CPU, CPU and XLA_GPU is there. I tried it with XLA_GPU but didn't work. Is there any guide about how to change existing code to take advantage of CUDA?
Not enough information to give an exact answer. Have you installed tensorflow-gpu separately? Check using pip list. Initially you were using tensorflow (the CPU-only default); once you want to use the Nvidia GPU, make sure to install tensorflow-gpu. Sometimes I had problems having both installed at the same time: it would always go for the CPU. But once I deleted the CPU build using "pip uninstall tensorflow" and kept only the GPU version, it worked for me.
0
false
1
6,440
2019-12-09 12:23:35.760
how to select the metric to optimize in sklearn's fit function?
When using tensorflow to train a neural network I can set the loss function arbitrarily. Is there a way to do the same in sklearn when training a SVM? Let's say I want my classifier to only optimize sensitivity (regardless of the sense of it), how would I do that?
This is not possible with Support Vector Machines, as far as I know. With other models you might either change the loss that is optimized, or change the classification threshold on the predicted probability. SVMs however minimize the hinge loss, and they do not model the probability of classes but rather their separating hyperplane, so there is not much room for manual adjustements. If you need to focus on Sensitivity or Specificity, use a different model that allows maximizing that function directly, or that allows predicting the class probabilities (thinking Logistic Regressions, Tree based methods, for example)
1.2
true
1
6,441
2019-12-09 16:29:01.433
conda environment: does each new conda environment needs a new kernel to work? How can I have specific libraries for all my environments?
I use ubuntu (through Windows Subsystem For Linux) and I created a new conda environment, I activated it and I installed a library in it (opencv). However, I couldn't import opencv in Jupyter lab till I created a new kernel that it uses the path of my new conda environment. So, my questions are: Do I need to create a new kernel every time I create a new conda environment in order for it to work? I read that in general we should use kernels for using different versions of python, but if this is the case, then how can I use a specific conda environment in jupyter lab? Note that browsing from Jupyter lab to my new env folder or using os.chdir to set up the directory didn't work. Using the new kernel that it's connected to the path of my new environment, I couldn't import matplotlib and I had to activate the new env and install there again the matplotlib. However, matplotlib could be imported when I was using the default kernel Python3. Is it possible to have some standard libraries to use them with all my conda environments (i.e. install some libraries out of my conda environments, like matplotlib and use them in all my enviroments) and then have specific libraries in each of my environments? I have installed some libraries through the base environment in ubuntu but I can't import these in my new conda environment. Thanks in advance!
To the best of my understanding: you need ipykernel in each of the environments so that Jupyter can run a kernel from that environment (and therefore import the libraries installed there). In my case, I have a new environment called tensorflow; I activate it, install ipykernel, and then add it to the Jupyter kernelspec. Finally I can access it in Jupyter whether or not the environment is activated. The concrete commands are sketched below.
0.201295
false
1
6,442
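The concrete commands for registering an environment's kernel, run with that environment activated (the environment name and display name below are placeholders; use conda install ipykernel instead of pip if you prefer):

    pip install ipykernel
    python -m ipykernel install --user --name my_env --display-name "Python (my_env)"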
2019-12-11 15:38:28.387
JupyterLab - how to find out which python venv is my session running on?
I am running venv based kernel and I am getting trouble in returning a proper answer from which python statement from my JupyterLab notebook. When running this command from terminal where I have my venv activated it works (it returns a proper venv path ~/venvs/my_venv/bin/python), but it does not work in the notebook. !which python returns the host path: /usr/bin/python I have already tried with os.system() and subprocess, but with no luck. Does anyone know how to execute this command from the Jupyter notebook?
It sounds like you are starting the virtual environment inside the notebook, so that process's PATH doesn't reflect the modifications made by the venv. Instead, you want the path of the kernel that's actually running: that's sys.executable.
1.2
true
2
6,443
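A tiny illustration of the sys.executable suggestion above; run inside a notebook cell, it shows which interpreter (and hence which venv) the kernel is using:

    import sys
    print(sys.executable)   # path of the python binary running this kernel
    print(sys.prefix)       # root directory of the active (virtual) environment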
2019-12-11 15:38:28.387
JupyterLab - how to find out which python venv is my session running on?
I am running venv based kernel and I am getting trouble in returning a proper answer from which python statement from my JupyterLab notebook. When running this command from terminal where I have my venv activated it works (it returns a proper venv path ~/venvs/my_venv/bin/python), but it does not work in the notebook. !which python returns the host path: /usr/bin/python I have already tried with os.system() and subprocess, but with no luck. Does anyone know how to execute this command from the Jupyter notebook?
Maybe it's because you are trying to run the command outside the venv. Try source /path/to/venv/bin/activate first and then try which python.
-0.201295
false
2
6,443
2019-12-12 05:29:34.497
Can a db handle be passed from a perl script to a python script?
I've been trying to look for ways to call my python script from my perl script and pass the database handle from there while calling it. I don't want to establish another connection in my python script and just use the db handle which is being used by the perl script. Is it even possible and if yes then how?
The answer is that almost all databases (Oracle, MySQL, PostgreSQL) will NOT allow you to pass open DB connections between processes (even parent/child). This is a limit of the database connection, which will usually be associated with a lot of state information. If it were possible to 'share' such a connection, it would be a challenge for the system to know where to ship the results of queries sent to the database (will the result go to the parent, or to the child?). Even if it were somehow possible to forward a connection between processes, trying to pass a complex object (a database connection is much more than the socket) between Perl (usually DBI) and Python is close to impossible. The 'proper' solution is to pass the database connection string, username, and password to the Python process, so that it can establish its own connection.
1.2
true
1
6,444
2019-12-12 10:03:34.553
Error when installing Tensorflow - Python 3.8
I'm new to programming and following a course where I must install Tensorflow. The issue is that I'm using Python 3.8 which I understand isn't supported by Tensorflow. I've downloaded Python 3.6 but I don't know how to switch this as my default version of python. Would it be best to set up a venv using python 3.6 for my program and install Tensorflow in this venv? Also, I using Windows and Powershell.
If you don't want to use Anaconda or virtualenv, then actually multiple Python versions can live side by side. I use Python38 as my default and Python35 for TensorFlow until they release it for Python38. If you wish to use the "non-default" Python, just invoke with the full path of the python.exe (or create a shortcut/batch file for it). Python then will take care of using the correct Python libs for that version.
0
false
2
6,445
2019-12-12 10:03:34.553
Error when installing Tensorflow - Python 3.8
I'm new to programming and following a course where I must install Tensorflow. The issue is that I'm using Python 3.8 which I understand isn't supported by Tensorflow. I've downloaded Python 3.6 but I don't know how to switch this as my default version of python. Would it be best to set up a venv using python 3.6 for my program and install Tensorflow in this venv? Also, I using Windows and Powershell.
It would have been nice if you had shared the error screenshot. As far as I have seen, TensorFlow works in both 3.8 and 3.6; you just have to check that you have the 64-bit version, not the 32-bit one. You can access both versions from their respective folders, so there is no need to set up a venv.
0
false
2
6,445
2019-12-12 14:51:24.720
How to search pattern in big binary files efficiently
I have several binary files, which are mostly bigger than 10GB. In this files, I want to find patterns with Python, i.e. data between the pattern 0x01 0x02 0x03 and 0xF1 0xF2 0xF3. My problem: I know how to handle binary data or how I use search algorithms, but due to the size of the files it is very inefficient to read the file completely first. That's why I thought it would be smart to read the file blockwise and search for the pattern inside a block. My goal: I would like to have Python determine the positions (start and stop) of a found pattern. Is there a special algorithm or maybe even a Python library that I could use to solve the problem?
The common way when searching a pattern in a large file is to read the file by chunks into a buffer that has the size of the read buffer + the size of the pattern - 1. On first read, you only search the pattern in the read buffer, then you repeatedly copy size_of_pattern-1 chars from the end of the buffer to the beginning, read a new chunk after that and search in the whole buffer. That way, you are sure to find any occurence of the pattern, even if it starts in one chunk and ends in next.
0.995055
false
1
6,446
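A minimal sketch of that buffer-with-overlap approach in Python; it yields the absolute start offsets of a byte pattern without ever holding the whole file in memory (each match then spans [offset, offset + len(pattern))). The file name is a placeholder:

    def find_pattern(path, pattern, chunk_size=1 << 20):
        """Yield absolute offsets of every occurrence of `pattern` in the file at `path`."""
        overlap = len(pattern) - 1
        offset = 0              # absolute file offset where `buf` currently starts
        buf = b""
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                buf += chunk
                pos = buf.find(pattern)
                while pos != -1:
                    yield offset + pos
                    pos = buf.find(pattern, pos + 1)
                # keep only the tail that could contain the start of a split match
                if len(buf) > overlap:
                    offset += len(buf) - overlap
                    buf = buf[-overlap:] if overlap else b""

    # usage sketch: find the start marker from the question
    # for off in find_pattern("big.bin", b"\x01\x02\x03"):
    #     print(off)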