type (class label, 1 class) | id (string, length 7) | subreddit.id (string, 1 value) | subreddit.name (string, 1 value) | subreddit.nsfw (bool) | created_utc (unknown) | permalink (string, length 61-109) | body (large_string, length 0-9.98k) | sentiment (float32, -1 to 1, ⌀ = null) | score (int32, -65 to 195) |
---|---|---|---|---|---|---|---|---|---|
comment
| hxt3lrw | 2r97t | datasets | false | "2022-02-21T07:37:05Z" | https://old.reddit.com/r/datasets/comments/sx2d6e/the_open_industrial_data_project_oil_gas_industry/hxt3lrw/ | Thanks for this! Saving it for later | 0.4926 | 1 |
comment
| hxt3ckj | 2r97t | datasets | false | "2022-02-21T07:33:50Z" | https://old.reddit.com/r/datasets/comments/svjt73/request_for_dataset_of_images_of_sea_sponges/hxt3ckj/ | Hi, did you find any such image datasets? I only found textual datasets | 0 | 1 |
comment
| hxt36mg | 2r97t | datasets | false | "2022-02-21T07:31:44Z" | https://old.reddit.com/r/datasets/comments/sxnp83/explaining_what_epochs_batches_datasets_and_loss/hxt36mg/ | Hey SwitchArtistic2709,
Sorry, I am removing this post because Youtube and associated domains have been restricted on this subreddit.
Please consider using a different format than video for your post.
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/datasets) if you have any questions or concerns.* | 0.1779 | 1 |
comment
| hxt2mks | 2r97t | datasets | false | "2022-02-21T07:24:39Z" | https://old.reddit.com/r/datasets/comments/sxapzp/how_can_i_get_this_dataset_to_show_the_proper/hxt2mks/ | So I was able to figure it out, but I don't think I did it in the most efficient way possible. What I did was make a whole other data frame using the same URL request as the initial one, however this time instead of 'features', I used 'fieldAliases'. I then took that row from the second dataframe and assigned it as the column names for the initial dataframe. Is there a faster approach to this where I don't have to make 2 dataframes? | 0.6297 | 1 |
comment
| hxsxqn5 | 2r97t | datasets | false | "2022-02-21T06:27:54Z" | https://old.reddit.com/r/datasets/comments/sxapzp/how_can_i_get_this_dataset_to_show_the_proper/hxsxqn5/ | Hmm does this not just remove the word "attribute" from the column name? When I do this I get a ValueError that says:
Length mismatch: Expected axis has 36 elements, new values have 17 elements.
Is there a way to assign the column names as something directly from the JSON file? For example, I don't want it to say "P01000", I want it to say "Arsenic Dissolved as As (µg/L)". I'm assuming I have to somehow access the fieldAliases dictionary but I can't figure it out | 0.3069 | 1 |
comment
| hxsi1mn | 2r97t | datasets | false | "2022-02-21T03:56:12Z" | https://old.reddit.com/r/datasets/comments/sx2d6e/the_open_industrial_data_project_oil_gas_industry/hxsi1mn/ | Wow, thanks ! | null | 1 |
comment
| hxsa31q | 2r97t | datasets | false | "2022-02-21T02:50:41Z" | https://old.reddit.com/r/datasets/comments/mnf0ip/looking_for_a_job_postings_dataset_please_help/hxsa31q/ | Hey, were you able to locate this dataset? I'm working on a similar project and this would help a lot! | 0.4574 | 1 |
comment
| hxs2v8p | 2r97t | datasets | false | "2022-02-21T01:53:39Z" | https://old.reddit.com/r/datasets/comments/sx2d6e/the_open_industrial_data_project_oil_gas_industry/hxs2v8p/ | Saving for later… | 0 | 1 |
comment
| hxs2sig | 2r97t | datasets | false | "2022-02-21T01:53:04Z" | https://old.reddit.com/r/datasets/comments/qohgbh/looking_for_a_jokesperminute_dataset_for_comedy/hxs2sig/ | You're welcome 😁 | 0.7184 | 2 |
comment
| hxs2qum | 2r97t | datasets | false | "2022-02-21T01:52:41Z" | https://old.reddit.com/r/datasets/comments/qohgbh/looking_for_a_jokesperminute_dataset_for_comedy/hxs2qum/ | Thank you! | null | 1 |
comment
| hxre4qf | 2r97t | datasets | false | "2022-02-20T22:44:05Z" | https://old.reddit.com/r/datasets/comments/rkgnl4/selfpromotion_sp_500_stock_and_company_data_daily/hxre4qf/ | Hey, Can I know how you wrote the scraper for this? Like what did you use and what is the website that you are scraping it from? | 0.4998 | 1 |
comment
| hxrba8j | 2r97t | datasets | false | "2022-02-20T22:23:59Z" | https://old.reddit.com/r/datasets/comments/sxapzp/how_can_i_get_this_dataset_to_show_the_proper/hxrba8j/ | df.columns = [col.replace("attributes.", "") for col in df.columns] | 0 | 1 |
comment
| hxr9w71 | 2r97t | datasets | false | "2022-02-20T22:14:14Z" | https://old.reddit.com/r/datasets/comments/sxapzp/how_can_i_get_this_dataset_to_show_the_proper/hxr9w71/ | After more putzing around I was able to figure out that CSV is the absolute incorrect choice here haha. I was then able to stumble my way into the best results I've gotten so far thanks to a post on stack exchange which basically just let me copy paste the code here:
import requests
import json
import pandas as pd
data= requests.get("URL here")
json_data = data.json()
pd.json_normalize(json_data["features"])
However when I do this, the column names in the dataframe are still incorrect and show things like attributes.COUNTY instead of County. I think what I want it to display is the values in fieldAliases in the JSON file here
https://gisdata-njdep.opendata.arcgis.com/datasets/ambient-metals-of-new-jersey/api
But I'm not able to figure out how to do that. Any guidance? | 0.8146 | 1 |
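A minimal sketch of one way to do that rename in a single pass, assuming the top-level fieldAliases key this thread describes ("URL here" stays a placeholder, as above):

import requests
import pandas as pd

json_data = requests.get("URL here").json()  # placeholder URL, as in the thread
df = pd.json_normalize(json_data["features"])
# json_normalize prefixes nested keys, e.g. "attributes.P01000"
aliases = json_data["fieldAliases"]  # assumed top-level key, per the thread's description
# rename silently skips columns that don't appear, so one pass is enough
df = df.rename(columns={f"attributes.{field}": alias for field, alias in aliases.items()})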
comment
| hxqv93d | 2r97t | datasets | false | "2022-02-20T20:30:45Z" | https://old.reddit.com/r/datasets/comments/swo5h4/looking_for_interesting_automotive_datasets/hxqv93d/ | Wow that is great, thank you! | 0.8932 | 1 |
comment
| hxqjthg | 2r97t | datasets | false | "2022-02-20T19:10:29Z" | https://old.reddit.com/r/datasets/comments/sx4bar/what_are_some_usecases_of_a_mobile_app_screenshot/hxqjthg/ | Would be interesting to pull apps that are top rated / have the highest frequency of downloads and see if you can train a model to spot which features or UI elements they share. Just a thought, but a niche and nice project to set out on — maybe you’ll stumble upon what features make an app great! | 0.9168 | 1 |
comment
| hxq8g01 | 2r97t | datasets | false | "2022-02-20T17:53:08Z" | https://old.reddit.com/r/datasets/comments/swo5h4/looking_for_interesting_automotive_datasets/hxq8g01/ | [The Vehicle Energy Dataset](https://github.com/gsoh/VED) might be interesting | 0.5859 | 1 |
comment
| hxq0wkb | 2r97t | datasets | false | "2022-02-20T17:02:12Z" | https://old.reddit.com/r/datasets/comments/swxon0/aircraft_accidents_failures_hijacks_dataset/hxq0wkb/ | This reminds me of my first data science project. I took the bird strike data from across the United States, the type where a bird hits a plane or gets sucked through an engine.
I found out that JFK kills the most birds in the winter. | -0.6369 | 2 |
comment
| hxpvnip | 2r97t | datasets | false | "2022-02-20T16:26:52Z" | https://old.reddit.com/r/datasets/comments/sx4bar/what_are_some_usecases_of_a_mobile_app_screenshot/hxpvnip/ | There is a Twitter bot on Reddit which recognizes that a screenshot is a tweet and then finds the actual tweet. Not sure if that's something that you'd classify as a valid test case for you. You can possibly expand on the idea for similar apps? | 0.0869 | 2 |
comment
| hxov839 | 2r97t | datasets | false | "2022-02-20T10:45:21Z" | https://old.reddit.com/r/datasets/comments/swyhms/looking_for_a_spatial_telemetry_dataset/hxov839/ | So there's data from bike hire places: picked up here, dropped off there. But that is not an exact a->b route.
bikes in general [https://www.reddit.com/r/datasets/search/?q=bike&restrict\_sr=1&sr\_nsfw=](https://www.reddit.com/r/datasets/search/?q=bike&restrict_sr=1&sr_nsfw=)
helsinki [https://www.reddit.com/r/datasets/comments/et9iof/dataset\_helsinki\_bike\_trips/](https://www.reddit.com/r/datasets/comments/et9iof/dataset_helsinki_bike_trips/)
dublin [https://data.gov.ie/dataset/dublinbikes-api](https://data.gov.ie/dataset/dublinbikes-api)
Dublin tram gps data [https://www.reddit.com/r/datasets/comments/s01asw/historical\_data\_set\_of\_dublins\_tram\_real\_time/](https://www.reddit.com/r/datasets/comments/s01asw/historical_data_set_of_dublins_tram_real_time/)
Big list of bike and scooter apis here [https://www.reddit.com/r/datasets/comments/cn92f4/documentation\_of\_scooter\_bike\_sharing\_apis/](https://www.reddit.com/r/datasets/comments/cn92f4/documentation_of_scooter_bike_sharing_apis/) | 0 | 1 |
comment
| hxnzgso | 2r97t | datasets | false | "2022-02-20T04:24:00Z" | https://old.reddit.com/r/datasets/comments/swh25t/looking_for_copypasta_or_4chan_post_dataset/hxnzgso/ | I don’t know about datasets but most boards have archives, some are available for download. Just go there and ask. | 0 | 1 |
comment
| hxnn7ut | 2r97t | datasets | false | "2022-02-20T02:37:42Z" | https://old.reddit.com/r/datasets/comments/swfw3j/looking_for_normally_distributed_biological_data/hxnn7ut/ | you can check out PhysioNet | 0 | 1 |
comment
| hxnikaq | 2r97t | datasets | false | "2022-02-20T01:59:02Z" | https://old.reddit.com/r/datasets/comments/swh25t/looking_for_copypasta_or_4chan_post_dataset/hxnikaq/ | That's technically an option but I'd much rather use an already existing dataset because ideally i want posts from a long time ago. | 0.631 | 1 |
comment
| hxn9a7q | 2r97t | datasets | false | "2022-02-20T00:42:58Z" | https://old.reddit.com/r/datasets/comments/swfw3j/looking_for_normally_distributed_biological_data/hxn9a7q/ | Height and weight are available from NHANES: https://wwwn.cdc.gov/Nchs/Nhanes/2015-2016/BMX_I.htm | 0 | 3 |
comment
| hxn7m7f | 2r97t | datasets | false | "2022-02-20T00:29:40Z" | https://old.reddit.com/r/datasets/comments/swo5h4/looking_for_interesting_automotive_datasets/hxn7m7f/ | Hey charbo6,
I believe a `request` flair might be more appropriate for such post. Please re-consider and change the post flair if needed.
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/datasets) if you have any questions or concerns.* | 0.5574 | 1 |
comment
| hxn6vkl | 2r97t | datasets | false | "2022-02-20T00:23:45Z" | https://old.reddit.com/r/datasets/comments/swh25t/looking_for_copypasta_or_4chan_post_dataset/hxn6vkl/ | Just crawl it using Beautiful Soup or something comparable. | 0.5994 | 3 |
comment
| hxn07ny | 2r97t | datasets | false | "2022-02-19T23:31:33Z" | https://old.reddit.com/r/datasets/comments/sr3ph1/lean_six_sigma_dataset_for_process_improvement/hxn07ny/ | Thanks a ton! | null | 1 |
comment
| hxmx9wa | 2r97t | datasets | false | "2022-02-19T23:08:51Z" | https://old.reddit.com/r/datasets/comments/swfw3j/looking_for_normally_distributed_biological_data/hxmx9wa/ | One user already mentioned height and weight. IQ scores are also normally distributed, but by design. | 0 | 0 |
comment
| hxmx67n | 2r97t | datasets | false | "2022-02-19T23:08:02Z" | https://old.reddit.com/r/datasets/comments/swfw3j/looking_for_normally_distributed_biological_data/hxmx67n/ | [deleted] | null | 1 |
comment
| hxmsv5h | 2r97t | datasets | false | "2022-02-19T22:35:32Z" | https://old.reddit.com/r/datasets/comments/swltc7/please_take_part_in_survey_our_worries/hxmsv5h/ | All survey's have to be verified by the moderators for compliance with the rules. Your survey must include a publicly accessible resource to view responses and must NOT collect personal information. We verify each survey posted for compliance which may take a few days. Once approved you will be free to re-post your survey if desired. If this post is NOT a survey and removed in error we apologize. The Automod config looks for the word "survey" and auto removes the post for moderation, we will get to approving your post as soon as possible.
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/datasets) if you have any questions or concerns.* | 0.802 | 1 |
comment
| hxmb84c | 2r97t | datasets | false | "2022-02-19T20:26:03Z" | https://old.reddit.com/r/datasets/comments/sr3ph1/lean_six_sigma_dataset_for_process_improvement/hxmb84c/ | Bosch's production line performance dataset from Kaggle competition might be useful to work on.
https://www.kaggle.com/c/bosch-production-line-performance/overview | 0.4404 | 1 |
comment
| hxm98dc | 2r97t | datasets | false | "2022-02-19T20:11:36Z" | https://old.reddit.com/r/datasets/comments/swfw3j/looking_for_normally_distributed_biological_data/hxm98dc/ | If you don't care about what data you're working with, height and weight data is normally distributed. You can google height and weight dataset and I'm sure you'll find one you like? | 0.2896 | 8 |
comment
| hxm5lkd | 2r97t | datasets | false | "2022-02-19T19:46:07Z" | https://old.reddit.com/r/datasets/comments/sw5sg4/need_thesis_topic_for_data_science_particularly/hxm5lkd/ | Prediction of what is going to get him a high grade on his thesis 😂 | 0.4404 | 1 |
comment
| hxkfixc | 2r97t | datasets | false | "2022-02-19T11:36:26Z" | https://old.reddit.com/r/datasets/comments/q0qsh9/a_free_financial_news_dataset_going_back_from/hxkfixc/ | Those two could be useful, altough it's really hard to find good datasets for free: [https://www.kaggle.com/miguelaenlle/massive-stock-news-analysis-db-for-nlpbacktests](https://www.kaggle.com/miguelaenlle/massive-stock-news-analysis-db-for-nlpbacktests) and [https://www.kaggle.com/gennadiyr/us-equities-news-data](https://www.kaggle.com/gennadiyr/us-equities-news-data). Also [this](https://github.com/philipperemy/financial-news-dataset/network/members) github repo contains the entire reuters news dataset. It has been removed for copyright reasons but forks still exist.
If you want to have the most complete dataset, look at the [historical news endpoint at iex](https://iexcloud.io/docs/api/#historical-news). It only allows for 5000 free news articles per month though and is paid after that. | 0.8626 | 1 |
comment
| hxker3a | 2r97t | datasets | false | "2022-02-19T11:25:49Z" | https://old.reddit.com/r/datasets/comments/lhls5s/looking_for_dataset_with_all_10000_cryptopunks_by/hxker3a/ | Thanks!! | null | 1 |
comment
| hxk536j | 2r97t | datasets | false | "2022-02-19T09:11:07Z" | https://old.reddit.com/r/datasets/comments/sw5sg4/need_thesis_topic_for_data_science_particularly/hxk536j/ | Segmentation and prediction of what?
As in segmenting images so that people in them are separated out.
Or predicting who is going to win the next world cup? | 0.6322 | 5 |
comment
| hxk52g1 | 2r97t | datasets | false | "2022-02-19T09:10:49Z" | https://old.reddit.com/r/datasets/comments/sw5sg4/need_thesis_topic_for_data_science_particularly/hxk52g1/ | We're not doing your coursework for you...
You must have some interests less vague than two entire subject areas. | 0.2247 | 4 |
comment
| hxisrfj | 2r97t | datasets | false | "2022-02-19T01:05:07Z" | https://old.reddit.com/r/datasets/comments/svkrt8/crisiseventsorg_international_crisis_behaviors/hxisrfj/ | This is super cool! | 0.7574 | 2 |
comment
| hxhaq66 | 2r97t | datasets | false | "2022-02-18T18:50:40Z" | https://old.reddit.com/r/datasets/comments/svjt73/request_for_dataset_of_images_of_sea_sponges/hxhaq66/ | With friends like you who needs anemones | 0.6808 | 9 |
comment
| hxgyoe1 | 2r97t | datasets | false | "2022-02-18T17:34:04Z" | https://old.reddit.com/r/datasets/comments/svjt73/request_for_dataset_of_images_of_sea_sponges/hxgyoe1/ | Assuming you have seen this as it is one of the first results on Google, but if not this dataset from [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Demospongiae) might be of use. Otherwise there seemed to be some coral datasets on Kaggle, but none for sea sponges.
I also found [SponGIS](https://spongis.org/) who seem to be mapping sea sponge data, you could contact them and see if they share their data for educational purposes. I'm not sure they'll have an image dataset though. | 0.0918 | 7 |
comment
| hxgfxw7 | 2r97t | datasets | false | "2022-02-18T15:34:51Z" | https://old.reddit.com/r/datasets/comments/svkrt8/crisiseventsorg_international_crisis_behaviors/hxgfxw7/ | Corresponding author here if anyone has any questions. | 0 | 5 |
comment
| hxgfvsz | 2r97t | datasets | false | "2022-02-18T15:34:29Z" | https://old.reddit.com/r/datasets/comments/svkrt8/crisiseventsorg_international_crisis_behaviors/hxgfvsz/ | Hey locallyoptimal,
I believe a `request` flair might be more appropriate for such post. Please re-consider and change the post flair if needed.
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/datasets) if you have any questions or concerns.* | 0.5574 | 1 |
comment
| hxfmjb0 | 2r97t | datasets | false | "2022-02-18T11:24:02Z" | https://old.reddit.com/r/datasets/comments/eyc3iw/oc_359_annotated_6sided_dice_faces_160_images_in/hxfmjb0/ | [removed] | null | 1 |
comment
| hxffd4j | 2r97t | datasets | false | "2022-02-18T09:48:29Z" | https://old.reddit.com/r/datasets/comments/qohgbh/looking_for_a_jokesperminute_dataset_for_comedy/hxffd4j/ | Drew Gooden on Youtube did a really wonderful couple of these.
The one I'm linking is his second & sums up his first one and then ramps it up by a ton. It's pretty hecking great.
https://youtu.be/0SIIRGgWVb8 | 0.9061 | 2 |
comment
| hxf9ixb | 2r97t | datasets | false | "2022-02-18T08:29:29Z" | https://old.reddit.com/r/datasets/comments/n777o8/is_there_any_image_dataset_on_vegetables_and/hxf9ixb/ | [removed] | null | 1 |
comment
| hxf8e8i | 2r97t | datasets | false | "2022-02-18T08:14:18Z" | https://old.reddit.com/r/datasets/comments/o2h9ca/looking_for_sales_related_email_dataset/hxf8e8i/ | Have you found any yet? I need the same. | 0 | 1 |
comment
| hxf2vur | 2r97t | datasets | false | "2022-02-18T07:05:37Z" | https://old.reddit.com/r/datasets/comments/sut3x7/this_data_set_contains_everything_necessary_to/hxf2vur/ | Oh man this would be awesome as a VR environment | 0.6249 | 1 |
comment
| hxf0s96 | 2r97t | datasets | false | "2022-02-18T06:41:03Z" | https://old.reddit.com/r/datasets/comments/sv8mpf/help_on_dataset_tables_with_blank_fields/hxf0s96/ | Ahh yes! This is what I'm looking for thank you for the response | 0.6696 | 1 |
comment
| hxew396 | 2r97t | datasets | false | "2022-02-18T05:49:57Z" | https://old.reddit.com/r/datasets/comments/sv8mpf/help_on_dataset_tables_with_blank_fields/hxew396/ | Are you asking how to forward-fill? With pandas (Python) it's only one line of code, but it's pretty easy to find implementations in other languages. With pandas you can use .fillna(method='ffill') and it'll forwardfill the whole dataframe.
As a quick example:
import numpy as np
import pandas as pd
df = pd.DataFrame({'Brand':['Brand1',np.NaN, np.NaN, 'Brand2', np.NaN, np.NaN]})
df['SKU'] = ['a', 'b', 'c', 'd', 'e', 'f']
df['Product'] = ['Description1', np.NaN, np.NaN, 'Description2', np.NaN, np.NaN]
print(df)
gives:
|Brand|SKU|Product|
|:-|:-|:-|
|Brand1|a|Description1|
||b||
||c||
|Brand2|d|Description2|
||e||
||f||
Then:
df = df.fillna(method = 'ffill')
print(df)
gives:
|Brand|SKU|Product|
|:-|:-|:-|
|Brand1|a|Description1|
|Brand1|b|Description1|
|Brand1|c|Description1|
|Brand2|d|Description2|
|Brand2|e|Description2|
|Brand2|f|Description2| | 0.9052 | 5 |
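A minimal follow-up sketch, assuming a reasonably recent pandas where DataFrame.ffill() is the shorter spelling of the same forward-fill:

import numpy as np
import pandas as pd

df = pd.DataFrame({'Brand': ['Brand1', np.nan, np.nan, 'Brand2', np.nan, np.nan]})
df = df.ffill()  # same behavior as fillna(method='ffill')
print(df)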
comment
| hxdrrov | 2r97t | datasets | false | "2022-02-18T00:24:31Z" | https://old.reddit.com/r/datasets/comments/sut3x7/this_data_set_contains_everything_necessary_to/hxdrrov/ | I just saw this posted on the aswf slack today! | 0 | 1 |
comment
| hxdkljo | 2r97t | datasets | false | "2022-02-17T23:33:08Z" | https://old.reddit.com/r/datasets/comments/ssfy1q/statistics_on_lottery_numbers_picked_by_hand/hxdkljo/ | Sorry, I mean in a row in the sense of 33,34,35, which looks not random to people even though it is just as random as 2,17,34 | -0.0772 | 1 |
comment
| hxctwr3 | 2r97t | datasets | false | "2022-02-17T20:39:55Z" | https://old.reddit.com/r/datasets/comments/ssfy1q/statistics_on_lottery_numbers_picked_by_hand/hxctwr3/ | > They dont pick two or three numbers in a row as these dont look random.
If I recall correctly, [a drawn number is eliminated from the pool of next numbers](https://www.youtube.com/watch?v=CdJT_cVkApo) as it is not tossed back into the mix.
So it would be foolish to pick the same number due to the impossibility of occurrence? | -0.2168 | 1 |
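A quick sketch of the probability point being debated; the 40-ball pool and 3-number draw are illustrative assumptions, not details from the thread:

from math import comb

# Any specific unordered draw of 3 distinct numbers from a 40-ball pool
# has the same probability, whether it looks consecutive or not.
p = 1 / comb(40, 3)
print(p)  # identical for (33, 34, 35) and (2, 17, 34)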
comment
| hxcphlo | 2r97t | datasets | false | "2022-02-17T20:11:24Z" | https://old.reddit.com/r/datasets/comments/sq7jw2/recommendation_for_venture_capital_datasets/hxcphlo/ | Yup, sorry about it. Some schools have access to VC/PE databases, but very few at that. Good luck with your project.
FWIW I think using firm characteristics to get at VC outcomes is a very interesting idea if you can ever find a data source! | 0.9183 | 1 |
comment
| hxcjj66 | 2r97t | datasets | false | "2022-02-17T19:34:00Z" | https://old.reddit.com/r/datasets/comments/sus2wp/nba_individual_player_stats_for_every_game/hxcjj66/ | https://github.com/saiemgilani/hoopR | null | 1 |
comment
| hxccu6b | 2r97t | datasets | false | "2022-02-17T18:51:59Z" | https://old.reddit.com/r/datasets/comments/sq7jw2/recommendation_for_venture_capital_datasets/hxccu6b/ | You are right, I have had to change my project as I couldn't afford to spend my annual student loan to get access to it 😝 | 0 | 1 |
comment
| hxbi1er | 2r97t | datasets | false | "2022-02-17T15:40:33Z" | https://old.reddit.com/r/datasets/comments/sty9kr/how_to_add_a_space_if_2_conditions_are_met_from_a/hxbi1er/ | So you definitely need the leading "\" before the $.
Based on what you're showing, I might also change it to
\$([A-Z]{1,4})(\w)
Which will only match capital letters for the ticker symbol. The second group was also capturing more letters than required by your use case so you don't need the {1,3}. | 0.4549 | 1 |
comment
| hxbhfrn | 2r97t | datasets | false | "2022-02-17T15:36:32Z" | https://old.reddit.com/r/datasets/comments/pp3v25/looking_for_public_datasets_on_baseball/hxbhfrn/ | Go look in the mirror freak | -0.4404 | 0 |
comment
| hxbdqd0 | 2r97t | datasets | false | "2022-02-17T15:11:46Z" | https://old.reddit.com/r/datasets/comments/sty9kr/how_to_add_a_space_if_2_conditions_are_met_from_a/hxbdqd0/ | Thanks for your help boss, looks like this fixed it:
"\\$...." and "$0 " | 0.7964 | 1 |
comment
| hxbddg3 | 2r97t | datasets | false | "2022-02-17T15:09:21Z" | https://old.reddit.com/r/datasets/comments/sty9kr/how_to_add_a_space_if_2_conditions_are_met_from_a/hxbddg3/ | Okay, so a couple of things: 1) when I tried to put $(\[A-z\]{1,4})(\\w{0,3}) in the find box and put \\1 \\2 in the replace box, while making sure regular expression is checked, here is what it showed:
[https://i.imgur.com/ezBXzB7.png](https://i.imgur.com/ezBXzB7.png)
Guessing I'm probably doing something wrong?
Second, here is a sample of my data below so you can have an idea
EX 1: $UVXY cld move nicely! RSI/ADX looks primed! Chart cld see $20 quick! GL ???? $OSCI $OWUV $IGEX $ICOA $NICH $APSI116DB@DBTradePicks·22hThis is the same thing I said today ??
Here is another:
EX 2: WildRhino·10h $APSIQuote TweetTHE TRADING JOKER™
As you see, it's the ticker $APSI, and in both instances it's stuck to another word or digit. So I want to be able to look at every line, and anything that starts with "$" should skip 4 characters to preserve the ticker symbol, and then a space should be added to separate the ticker from the word or digit it's stuck to, if that makes sense.
Edit: So looks like this got it done using notepad++ and regex $\\$...." and "$0 " | 0.8875 | 1 |
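A sketch of that accepted fix translated to Python (an assumption; the thread itself works in Notepad++). The lookahead only adds the space when another word character is glued onto the four-character ticker, so already-clean tickers are left alone:

import re

sample = "$UVXY cld move nicely! $OSCI $APSI116DB@DBTradePicks and $APSIQuote"
# Like find "\$...." / replace "$0 ": take "$" plus the next four word
# characters, adding a space only when another word character follows.
fixed = re.sub(r"(\$\w{4})(?=\w)", r"\1 ", sample)
print(fixed)  # $UVXY cld move nicely! $OSCI $APSI 116DB@DBTradePicks and $APSI Quote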
comment
| hxbb123 | 2r97t | datasets | false | "2022-02-17T14:53:13Z" | https://old.reddit.com/r/datasets/comments/sui9ls/do_covid_deaths_correlate_to_party_affiliation/hxbb123/ | I do wonder, since the Census was done during 2020 (when vaccines were not widely available), whether the redistricting will be done with data from before COVID deaths began to be correlated with party affiliation. | 0.4019 | 2 |
comment
| hxb9sxw | 2r97t | datasets | false | "2022-02-17T14:44:50Z" | https://old.reddit.com/r/datasets/comments/sui9ls/do_covid_deaths_correlate_to_party_affiliation/hxb9sxw/ | I doubt it will have an effect. Any demographic changes can be countered easily with gerrymandering. | -0.0258 | 3 |
comment
| hxb9i7u | 2r97t | datasets | false | "2022-02-17T14:42:45Z" | https://old.reddit.com/r/datasets/comments/st5n7i/need_a_dataset_with_at_least_20_pairs_of_data_for/hxb9i7u/ | No problem, always happy to help :) | 0.8924 | 1 |
comment
| hxb1wen | 2r97t | datasets | false | "2022-02-17T13:46:07Z" | https://old.reddit.com/r/datasets/comments/st5n7i/need_a_dataset_with_at_least_20_pairs_of_data_for/hxb1wen/ | i know i already replied but i was literally able to finish my project because of u thank u sm | 0.5023 | 1 |
comment
| hxayi7y | 2r97t | datasets | false | "2022-02-17T13:18:14Z" | https://old.reddit.com/r/datasets/comments/sui9ls/do_covid_deaths_correlate_to_party_affiliation/hxayi7y/ | Quick responses here:
NPR recently did something similar. I would suggest starting there. Article is titled “Pro-Trump counties now have far higher COVID death rates. Misinformation is to blame”
Second, I would check out CDC Wonder. You’ll need to do some research into the MCOD and UCOD codes, and there is an issue with small counts being suppressed (<10), but it could be used for larger counties. I don’t think it goes down to the census tract and if it did, you’d likely run into the suppression issue. | -0.4767 | 7 |
comment
| hxad74c | 2r97t | datasets | false | "2022-02-17T09:11:32Z" | https://old.reddit.com/r/datasets/comments/sqx538/nfl_playerteam_statistics_datasheets_galore/hxad74c/ | This is really neat! You may be interested in checking out nflfastr.com they do some pretty impressive work on stuff related to this | 0.915 | 1 |
comment
| hxa2l8z | 2r97t | datasets | false | "2022-02-17T06:53:23Z" | https://old.reddit.com/r/datasets/comments/sui9ls/do_covid_deaths_correlate_to_party_affiliation/hxa2l8z/ | Johns Hopkins is an obvious source. They have broken data down to at least county level. Census tract would be better. You’ll probably need to combine with an additional source to correlate with congressional district and party affiliation.
Problems: reporting probably is mostly location of treatment rather than by residence location. Fudging for % of party affiliation unless you limit to nearly one-party locales. Reporting delays, ambiguity over actual dates though probably not a big issue for this. The general poor Balkanized state of reporting with every locale following their own standards cause ‘Murica.
This is the map entry point, but if you explore you can get the raw data sets.
https://coronavirus.jhu.edu/map.html | -0.0772 | 11 |
comment
| hx9ydw3 | 2r97t | datasets | false | "2022-02-17T06:05:54Z" | https://old.reddit.com/r/datasets/comments/sty9kr/how_to_add_a_space_if_2_conditions_are_met_from_a/hx9ydw3/ | Actually, is it always 24H after?
You could get away with find/replace "24H" to " 24H" and just avoid the regex altogether. If there's only a handful of suffixes that you need to add the space to, then this will be quite quick | -0.296 | 1 |
comment
| hx9u456 | 2r97t | datasets | false | "2022-02-17T05:22:39Z" | https://old.reddit.com/r/datasets/comments/sty9kr/how_to_add_a_space_if_2_conditions_are_met_from_a/hx9u456/ | Their pattern first captures something that starts with “$” and has a string of between 1 and 4 letters, upper or lower case (the ticker); then it captures every word character (letter or number) after that into capture group 2. Then you replace with \1”space”\2 to get what you’re asking for. You also need to make sure regex is enabled in your search in Notepad++ or it won’t work. Do an internet search for regex and Notepad++. | 0.1027 | 1 |
comment
| hx9trvp | 2r97t | datasets | false | "2022-02-17T05:19:12Z" | https://old.reddit.com/r/datasets/comments/sty9kr/how_to_add_a_space_if_2_conditions_are_met_from_a/hx9trvp/ | \$([A-z]{1,4})(\w{0,3})
Should be in the find box. Don't forget the leading \ it's there for a reason.
The replace will be something like "\1 \2" without quotes.
I really don't know what your data look like and you're going to probably need some changes, so you'll have to learn at least a little regex to figure this out.
You probably don't need to match new lines but it shouldn't really affect this regex | 0.4278 | 1 |
comment
| hx9trmn | 2r97t | datasets | false | "2022-02-17T05:19:08Z" | https://old.reddit.com/r/datasets/comments/sty9kr/how_to_add_a_space_if_2_conditions_are_met_from_a/hx9trmn/ | Use their find and my replacement above. Should work. | 0 | 1 |
comment
| hx9tn3v | 2r97t | datasets | false | "2022-02-17T05:17:50Z" | https://old.reddit.com/r/datasets/comments/sty9kr/how_to_add_a_space_if_2_conditions_are_met_from_a/hx9tn3v/ | Thanks for the help boss
Someone below posted this: $(\[A-z\]{1,4})(\\w). It looks kinda similar to what you wrote, but I'm still confused about what to put in the "find what" and "replace with" fields; here is a screenshot below to show:
https://i.imgur.com/nlq2DWl.png | -0.0387 | 1 |
comment
| hx9smpy | 2r97t | datasets | false | "2022-02-17T05:07:12Z" | https://old.reddit.com/r/datasets/comments/sty9kr/how_to_add_a_space_if_2_conditions_are_met_from_a/hx9smpy/ | Should I check ". Matches newline" ?
Also should $(\[A-z\]{1,4})(\\w) be inserted in the find what or replace with? | 0 | 1 |
comment
| hx9rp4b | 2r97t | datasets | false | "2022-02-17T04:56:59Z" | https://old.reddit.com/r/datasets/comments/sty9kr/how_to_add_a_space_if_2_conditions_are_met_from_a/hx9rp4b/ | I’m doing this on my phone so can’t be particularly helpful. Basically you need to capture the $symbol and the trailing (>=1) letter characters. The $ can be tricky because it needs to be escaped. Probably need something like find: “(\$\w+)(.+)” replace: “\1 \2”
Again, I haven’t tried this, but it should be something similar. The parentheses say “capture this”, and the captured thing can be referenced by position I.e \1. The \$ searches for a literal “$” since $ is a special character. I can’t remember if \w matches only letters or letters and numbers. | 0.7224 | 1 |
comment
| hx9r1up | 2r97t | datasets | false | "2022-02-17T04:50:10Z" | https://old.reddit.com/r/datasets/comments/sty9kr/how_to_add_a_space_if_2_conditions_are_met_from_a/hx9r1up/ | \$([A-z]{1,4})(\w)
It's going to be something along these lines. Regex101.com can help you refine it to your specific needs.
This page seems like it'll help with using the capture groups in your replace function:
https://softhints.com/notepad-regex-replace-wildcard-capture-group/ | 0.7845 | 1 |
comment
| hx9qz82 | 2r97t | datasets | false | "2022-02-17T04:49:25Z" | https://old.reddit.com/r/datasets/comments/sty9kr/how_to_add_a_space_if_2_conditions_are_met_from_a/hx9qz82/ | So I have notepad++ and looks like I need to use its regex function and as you said use the find and replace
Is it simple code for the find and replace? Any chance you can hook me up with it, or is it easy enough for me to watch a couple of vids and learn it? | 0.6966 | 1 |
comment
| hx9oint | 2r97t | datasets | false | "2022-02-17T04:24:39Z" | https://old.reddit.com/r/datasets/comments/sty9kr/how_to_add_a_space_if_2_conditions_are_met_from_a/hx9oint/ | Yeah, looks like regex on Notepad++ should do it, but I am not sure what to put in the "find what" section and what to put in the "replace with"; maybe you can tell me? If not, is there a source you can point me to? | -0.1162 | 1 |
comment
| hx8nm9t | 2r97t | datasets | false | "2022-02-16T23:37:11Z" | https://old.reddit.com/r/datasets/comments/c24ak4/dataset_image_classification_dataset_for_porn/hx8nm9t/ | I can get you around 25k images I scraped from some nsfw subreddits if you could still use them | 0 | 1 |
comment
| hx8jpvg | 2r97t | datasets | false | "2022-02-16T23:09:44Z" | https://old.reddit.com/r/datasets/comments/st39oo/dataset_for_yearly_global_cases_of_covid/hx8jpvg/ | JHU has been tracking this for a long time. These dashboards [https://www.inetsoft.com/info/us\_covid\_test\_tracker\_dashboard/](https://www.inetsoft.com/info/us_covid_test_tracker_dashboard/) are based on that data. Take a look. if that's what you want, you can go to JHU to get it. | 0.0772 | 1 |
comment
| hx8ewu2 | 2r97t | datasets | false | "2022-02-16T22:36:40Z" | https://old.reddit.com/r/datasets/comments/su44je/deidentified_patient_reports_datasets/hx8ewu2/ | What's a patient report? [Physionet](https://physionet.org/about/database/) has a good repo of healthcare/EHR datasets. [MIMIC](https://mimic.mit.edu/docs/) 2/3/4 have free text doctors notes if that's what you're looking for, they should have close to that number of examples. | 0.7579 | 4 |
comment
| hx8723d | 2r97t | datasets | false | "2022-02-16T21:44:18Z" | https://old.reddit.com/r/datasets/comments/stjw48/help_finding_data_set_that_has_a_normal/hx8723d/ | I believe the CDC has datasets available that track that kind of stuff. | 0 | 1 |
comment
| hx7zo8s | 2r97t | datasets | false | "2022-02-16T20:56:30Z" | https://old.reddit.com/r/datasets/comments/sty9kr/how_to_add_a_space_if_2_conditions_are_met_from_a/hx7zo8s/ | It seems that your comment contains 1 or more links that are hard to tap for mobile users.
I will extend those so they're easier for our sausage fingers to click!
[Here is link number 1 - Previous text "sed"](https://www.howtogeek.com/666395/how-to-use-the-sed-command-on-linux/)
----
^Please ^PM ^[\/u\/eganwall](http://reddit.com/user/eganwall) ^with ^issues ^or ^feedback! ^| ^[Code](https://github.com/eganwall/FatFingerHelperBot) ^| ^[Delete](https://reddit.com/message/compose/?to=FatFingerHelperBot&subject=delete&message=delete%20hx7zo8s) | 0.7624 | 1 |
comment
| hx7zmte | 2r97t | datasets | false | "2022-02-16T20:56:15Z" | https://old.reddit.com/r/datasets/comments/sty9kr/how_to_add_a_space_if_2_conditions_are_met_from_a/hx7zmte/ | I’d use Sed + regex.
([sed](https://www.howtogeek.com/666395/how-to-use-the-sed-command-on-linux/) is a command line util on Linux/macOS ) | 0 | 2 |
comment
| hx785pt | 2r97t | datasets | false | "2022-02-16T17:54:42Z" | https://old.reddit.com/r/datasets/comments/stga82/what_zillow_dataset_backs_their_home_values_page/hx785pt/ | I hadn't figured this out. Thank you this saves me a lot of time. I will try to confirm later. | 0.3612 | 2 |
comment
| hx75ql2 | 2r97t | datasets | false | "2022-02-16T17:38:53Z" | https://old.reddit.com/r/datasets/comments/sty9kr/how_to_add_a_space_if_2_conditions_are_met_from_a/hx75ql2/ | I think you missed my point.
Ticker symbols in some foreign markets DO have digits.
So relying on the appearance of a digit wouldn’t work in those markets.
Hard to help when you’ve given only a single example, with no context. At least post sample tweets.
Struggling to imagine why anyone would tweet $AAPL24H
Also don’t understand how you are scraping, “getting sentiment on it” and word counts without SOME kind of programming.
The assumption that stock tickers are 4 characters is way wrong.
I’d create a list of the tickers you are actually interested in (which could be as large a set as e.g. “all US equities”) and match against that.
Python, Ruby, Perl, or TCL would be great languages that could do this in a few lines.
You’re really asking in the wrong place though as this has nothing to do with datasets. | -0.296 | 2 |
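A minimal sketch of that list-matching idea; the ticker set and helper name are illustrative, not from the thread:

import re

KNOWN = {"AAPL", "UVXY", "APSI", "OSCI"}  # stand-in for a real ticker universe

def split_glued(text: str) -> str:
    # After "$", take the longest known ticker prefix and insert a space
    # before whatever is glued onto it; leave unknown symbols untouched.
    def fix(match):
        tail = match.group(1)
        for n in range(min(5, len(tail)), 0, -1):
            if tail[:n].upper() in KNOWN and len(tail) > n:
                return f"${tail[:n]} {tail[n:]}"
        return match.group(0)
    return re.sub(r"\$(\w+)", fix, text)

print(split_glued("$AAPL24H and $APSIQuote"))  # $AAPL 24H and $APSI Quote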
comment
| hx74vb9 | 2r97t | datasets | false | "2022-02-16T17:33:24Z" | https://old.reddit.com/r/datasets/comments/sty9kr/how_to_add_a_space_if_2_conditions_are_met_from_a/hx74vb9/ | I know that, but I scrape data from Twitter, and sometimes the tickers come clumped with other words, which makes it hard to get sentiment on them and word counts | -0.1531 | 1 |
comment
| hx723iv | 2r97t | datasets | false | "2022-02-16T17:15:57Z" | https://old.reddit.com/r/datasets/comments/sty9kr/how_to_add_a_space_if_2_conditions_are_met_from_a/hx723iv/ | US ticker symbols don’t have digits. | 0 | 1 |
comment
| hx6nb2e | 2r97t | datasets | false | "2022-02-16T15:42:33Z" | https://old.reddit.com/r/datasets/comments/sty9kr/how_to_add_a_space_if_2_conditions_are_met_from_a/hx6nb2e/ | You can do this in excel with the “Text to Columns” feature, it’s pretty intuitive. Problem is a simple 4-spaces rule will not work for every ticker in the market, I suppose it’s possible for a subset, though. Another option is to use regex with find and replace, since ticker symbols don’t have numbers. Some regex key words for you to learn are “capture groups” and “escape characters”. Plenty of tutorials online. You can then implement this in a text editor that supports regex find-and-replace such as Notepad++. | 0.7184 | 4 |
comment
| hx6aj49 | 2r97t | datasets | false | "2022-02-16T14:14:48Z" | https://old.reddit.com/r/datasets/comments/stjw48/help_finding_data_set_that_has_a_normal/hx6aj49/ | If you work in R you can use scales() (from the dplyr package I think?) to turn any other dataset into its normally distributed variant
Edit: it's the scales package and the function is called rescale(), I'm just an idiot | -0.5106 | 1 |
comment
| hx5w6ga | 2r97t | datasets | false | "2022-02-16T12:11:05Z" | https://old.reddit.com/r/datasets/comments/stjw48/help_finding_data_set_that_has_a_normal/hx5w6ga/ | although the spirit of the answer was good:
the central limit theorem guarantees that sums of randomly sampled numbers tend toward normal distributions
try to imagine some kind of phenomenon where the number measured is a sum of various random contributions
for example, I would look into total millimeters of water provided by rain in a month or year, because the total is the sum of all the rain that fell during the period, and each time it rains the contribution is kind of random
other examples: scores of students in a standard test: the score is the sum of correct answers, and the students will have a distribution over "correct" or "incorrect".
money won at a casino: the total is what players win over numerous plays, each win being at random. | 0.9517 | 1 |
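A small illustration of that argument (a generic sketch, not from the thread): totals built as sums of 30 uniform random contributions already pile up into a bell shape.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
totals = rng.uniform(0, 1, size=(10_000, 30)).sum(axis=1)  # 10,000 sums of 30 contributions
plt.hist(totals, bins=50)
plt.show()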
comment
| hx5vpeo | 2r97t | datasets | false | "2022-02-16T12:06:07Z" | https://old.reddit.com/r/datasets/comments/osjecd/spss_data_analysis_help_for_masters_dissertation/hx5vpeo/ | Hey there,
As I can see, you are facing some problems with the Likert scale. One of my friends faced the same problem, but he got a solution after consulting with professionals. Silver Lake Consulting and VB analytic are two consulting firms with teams of experts who will surely help you out. They provide every solution related to [**SPSS Data analysis**](https://silverlakeconsult.com/spss-data-analysis/) and with every kind of research. In my opinion, you should consult with them and explain your problem; they will surely give you a satisfying outcome. | 0.9513 | 1 |
comment
| hx5ujg8 | 2r97t | datasets | false | "2022-02-16T11:53:34Z" | https://old.reddit.com/r/datasets/comments/stjw48/help_finding_data_set_that_has_a_normal/hx5ujg8/ | You should be quite successful using google.
If you use R, you might also have a look at the built-in datasets package:
[https://stat.ethz.ch/R-manual/R-patched/library/datasets/html/00Index.html](https://stat.ethz.ch/R-manual/R-patched/library/datasets/html/00Index.html)
They provide a lot of datasets that are often used in teaching or examples. | 0.7346 | 1 |
comment
| hx5tcsj | 2r97t | datasets | false | "2022-02-16T11:40:26Z" | https://old.reddit.com/r/datasets/comments/stjw48/help_finding_data_set_that_has_a_normal/hx5tcsj/ | This is the whey | 0 | 1 |
comment
| hx5ng27 | 2r97t | datasets | false | "2022-02-16T10:26:50Z" | https://old.reddit.com/r/datasets/comments/stjw48/help_finding_data_set_that_has_a_normal/hx5ng27/ | You are right, I probably should have clarified I was looking for real life normally distributed data sets. Thanks for the link, hopefully some of these contain a data set which I can use myself, much appreciated. | 0.836 | 1 |
comment
| hx5nbba | 2r97t | datasets | false | "2022-02-16T10:24:58Z" | https://old.reddit.com/r/datasets/comments/stjw48/help_finding_data_set_that_has_a_normal/hx5nbba/ | I will see if I can find some. Are there special search engines to find this sort of data, or will I achieve success just through google search? | 0.7506 | 1 |
comment
| hx5n91x | 2r97t | datasets | false | "2022-02-16T10:24:06Z" | https://old.reddit.com/r/datasets/comments/stjw48/help_finding_data_set_that_has_a_normal/hx5n91x/ | Appreciate you sharing the data set. I probably should have specified that I am looking for data sets taken from real life, hopefully I can try and find some real height and weight data. | 0.802 | 1 |
comment
| hx5ityq | 2r97t | datasets | false | "2022-02-16T09:23:57Z" | https://old.reddit.com/r/datasets/comments/stjw48/help_finding_data_set_that_has_a_normal/hx5ityq/ | IQ scores are designed to be normally distributed.
Quite easy to find as well. | 0.6478 | 2 |
comment
| hx5ayvf | 2r97t | datasets | false | "2022-02-16T07:39:39Z" | https://old.reddit.com/r/datasets/comments/stjw48/help_finding_data_set_that_has_a_normal/hx5ayvf/ | 1.) How do you know what distribution the random number generator in excel uses? If I’m not mistaken, it follows a uniform distribution which means every number is equally likely, which is to say definitely not normal.
2.) Taking the log of some random distribution and assuming it’s now normal is not advisable. The data would need to be exponentially distributed first for this to be applicable. | 0.8131 | 4 |
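A quick numeric sketch of when the log transform discussed in this thread does and does not yield a normal shape (a generic illustration, not from the thread):

import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
log_lognormal = np.log(rng.lognormal(size=100_000))  # N(0, 1) by construction
log_uniform = np.log(rng.uniform(size=100_000))      # a negated exponential, still skewed
print(skew(log_lognormal), skew(log_uniform))        # roughly 0 versus roughly -2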
comment
| hx59jcb | 2r97t | datasets | false | "2022-02-16T07:21:56Z" | https://old.reddit.com/r/datasets/comments/stga82/what_zillow_dataset_backs_their_home_values_page/hx59jcb/ | You may have already figured this out -- it appears the Homes Values Page uses the "City" geography rather than the "Metro & U.S." geography. Select "City" under the Geography drop-down to download a dataset similar to what is shown in the Homes Values Page.
I wasn't able to read and understand the entire [methodology](https://www.zillow.com/research/zhvi-methodology/), but their formula may have slightly different assumptions in calculating city vs county vs state vs metro & U.S., etc. geographies; someone more knowledgeable can correct me here. | 0.4019 | 4 |
comment
| hx4yf9l | 2r97t | datasets | false | "2022-02-16T05:18:56Z" | https://old.reddit.com/r/datasets/comments/stnasx/how_do_i_add_a_space_before_a_specific_symbol/hx4yf9l/ | https://pypi.org/project/reticker/ | null | 1 |
comment
| hx4uijo | 2r97t | datasets | false | "2022-02-16T04:42:32Z" | https://old.reddit.com/r/datasets/comments/stjw48/help_finding_data_set_that_has_a_normal/hx4uijo/ | Technically speaking, [the code in this jupyter notebook](https://colab.research.google.com/drive/1HizPk8bClbFsFn5hat-UszKnqNTAf_2C?usp=sharing) provides a normally distributed data set. The data is generated by the numpy function *np.random.normal()*, and the web address is included in this comment. The relevant code is:
import numpy as np
import matplotlib.pyplot as plt
mean = 0
standardDeviation = 1
numberOfSamples = 1000
data = np.random.normal(mean, standardDeviation, numberOfSamples)
plt.hist(data)
plt.show() | 0.34 | 3 |
comment
| hx4u9wc | 2r97t | datasets | false | "2022-02-16T04:40:23Z" | https://old.reddit.com/r/datasets/comments/stjw48/help_finding_data_set_that_has_a_normal/hx4u9wc/ | You could just create a bunch of random numbers in Excel. Also, you can take the natural logarithm of many datasets and that will make them close to a normal distribution. Doing this is called log transformation, so technically you’d say your data is log-normally distributed, but that’s just details. | 0.5106 | 0 |