Reddit Popular Dataset
Dataset of 10,000 posts that appeared on /r/popular on Reddit.
Dataset Details
The Reddit API limits how many posts one can retrieve from a specific subreddit to 1,000. By scraping repeatedly over time (see Data Collection below), this dataset instead contains data for almost all posts that appeared on /r/popular from Saturday, July 27, 2024 9:23:51 PM GMT to Saturday, August 24, 2024 9:48:19 PM GMT.
Additional data such as comments, scores, and media was obtained by Friday, November 15, 2024 5:00:00 AM GMT.
The Media Directory
This is a dump of all media in the dataset. It contains only PNGs.
ID Files
This dataset contains two ID files: main.csv and media.csv.
main.csv Fields
main.csv includes metadata and text data about each post (a short inspection sketch follows the field list):
- post_id: int - A unique, dataset-specific identifier for each post.
- create_utc: int - The time the post was created, as a Unix epoch timestamp in seconds.
- post_url: string - The URL of the post. This can be used to collect further data depending on your purposes.
- title: string - Title of the post.
- comment[1-3]: string|nan - The text of the i-th top-scoring comment (columns comment1, comment2, comment3), or nan if absent.
- comment[1-3]_score: int|nan - The score of the i-th top-scoring comment, or nan if absent.
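For example, a minimal sketch, assuming main.csv has already been loaded as df_main as in the Usage Guide below, that converts create_utc into readable UTC timestamps:

import pandas as pd

# Convert epoch seconds to timezone-aware datetimes and inspect a few posts.
df_main["created_at"] = pd.to_datetime(df_main["create_utc"], unit="s", utc=True)
print(df_main[["post_id", "created_at", "title"]].head())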
media.csv Fields
media.csv includes identifiers for media (a path-resolution sketch follows the list):
- post_id: int - Identifies the post the media is associated with. Refers to post_id in main.csv.
- media_path: string - Locates the file containing the media. This path is relative to the directory containing media.csv.
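Because media_path is relative to media.csv's location, here is a minimal sketch for resolving it, assuming media.csv has been loaded as df_media as in the Usage Guide below, and with DATASET_DIR as a hypothetical variable naming the directory that holds media.csv:

from pathlib import Path

DATASET_DIR = Path(".")  # hypothetical: set to the directory containing media.csv

# Resolve every media_path against the dataset directory.
df_media["full_path"] = df_media["media_path"].map(lambda p: DATASET_DIR / p)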
Data Collection
From about July 27, 2024 to August 24, 2024, a routine scraped 200 posts from /r/popular through the Reddit API every 2 hours and saved the URL of every post to a database.
The script collect_all_reddit.py then created the dataset on November 15, 2024.
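The collection code itself is not reproduced in this card. Purely as illustration, here is a hedged sketch of what such a routine might look like, assuming the PRAW library and Reddit's hot listing (both assumptions; save_url is a hypothetical persistence helper):

import time

import praw  # assumption: PRAW client; the actual script may differ

reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="popular-scraper")

def save_url(url):
    # Hypothetical helper: persist the post URL to a database.
    ...

while True:
    # Fetch the current 200 hot posts on /r/popular and record their URLs.
    for post in reddit.subreddit("popular").hot(limit=200):
        save_url(post.url)
    time.sleep(2 * 60 * 60)  # wait 2 hours before the next pass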
Usage Guide
This guide uses pandas and PIL (Pillow) to load the data:
import pandas as pd
import csv  # needed for csv.QUOTE_NONE below
from PIL import Image  # Pillow, for opening the media files
Load the main and media data using:
# Both files are tab-separated despite the .csv extension; QUOTE_NONE keeps
# quote characters inside titles and comments from confusing the parser.
df_main = pd.read_csv("main.csv", sep="\t", quoting=csv.QUOTE_NONE)
df_media = pd.read_csv("media.csv", sep="\t", quoting=csv.QUOTE_NONE)
To create a combined language-image dataset, use an SQL-like left join:
df_lang_img = pd.merge(df_main, df_media, how="left", on="post_id")
This creates a new dataframe with all the columns from main.csv and media.csv. Each post is repeated once per associated image; if a post has no image, its media_path is NaN.
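As a quick check on the join, a small sketch that counts rows with and without images:

# Rows whose media_path is NaN correspond to posts without images.
has_image = df_lang_img["media_path"].notna()
print(f"{has_image.sum()} rows with images, {(~has_image).sum()} without")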
Let's consider one row:
row = df_lang_img.iloc[0]
Then, provided the row actually has an image (its media_path is not NaN), the image can be loaded with
if pd.notna(row["media_path"]):
    with Image.open(row["media_path"]) as im:
        im.show()
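To process every image in the dataset, a minimal sketch along the same lines (this assumes the media paths resolve from the current working directory; see media.csv Fields):

# Iterate over only the rows that have an associated image.
for _, r in df_lang_img.dropna(subset=["media_path"]).iterrows():
    with Image.open(r["media_path"]) as im:
        print(r["post_id"], im.size)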