Dataset Card for LectureGratuits

Waifu to catch your attention.

Dataset Details

Dataset Description

LectureGratuits is a cleaned dataset of books from Ebooks Gratuits. We downloaded all the publicly available ebooks at the time and processed them, filtering down to a total of ~265.46M tokens (llama-2-7b-chat tokenizer) / ~253.51M tokens (RWKV tokenizer), primarily in English.

  • Curated by: Darok
  • Funded by: Recursal.ai
  • Shared by: KaraKaraWitch
  • Language(s) (NLP): English
  • License: Public domain

Dataset Sources

Processing

KaraKaraWitch doesn't have specifics on how the dataset was processed. We have postulated the following workflow:

  1. Get the highest book ID.
  2. Enumerate and download all the epub files: https://www.ebooksgratuits.com/newsendbook.php?id=<ID>&format=epub
  3. Put them in a folder called books.
  4. Extract the content of each book to a JSON file in the output folder. (See the filtering steps in extract-text.py.)
  5. Combine the JSON files into a single file.
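The steps above can be sketched roughly as follows. This is a hypothetical reconstruction, not the curators' actual extract-text.py: the function names, the per-book JSON layout, and the JSON Lines output format are all assumptions.

```python
# Hypothetical sketch of the postulated workflow; only the URL template
# comes from the dataset card, everything else is an assumption.
import json
import pathlib

BASE_URL = "https://www.ebooksgratuits.com/newsendbook.php?id={id}&format=epub"


def epub_url(book_id: int) -> str:
    """Build the download URL for a given book ID (step 2)."""
    return BASE_URL.format(id=book_id)


def combine(output_dir: str, combined_path: str) -> int:
    """Merge the per-book JSON files in `output_dir` into one
    JSON Lines file (step 5). Returns the number of records written."""
    records = 0
    with pathlib.Path(combined_path).open("w", encoding="utf-8") as out:
        for path in sorted(pathlib.Path(output_dir).glob("*.json")):
            record = json.loads(path.read_text(encoding="utf-8"))
            out.write(json.dumps(record, ensure_ascii=False) + "\n")
            records += 1
    return records
```

Downloading (step 2) would simply iterate `epub_url(i)` for `i` from 1 up to the highest ID and save each response into the books folder.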

Data Keys

text (str): The book's text, converted to Markdown.
meta (dict): A dictionary of metadata with the following keys:
  - title
  - author
  - publisher
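A minimal example of reading one record with the keys above. This assumes the combined file is JSON Lines (one JSON object per line); the field values shown are invented placeholders, not real entries from the dataset.

```python
# Illustrative record access; the string below is a made-up placeholder
# record following the Data Keys section, not actual dataset content.
import json

line = ('{"text": "# Title\\n\\nBody...", '
        '"meta": {"title": "Example", "author": "Anon", "publisher": "Anon"}}')
record = json.loads(line)

print(record["meta"]["title"])   # -> Example
print(record["text"][:7])        # -> # Title
```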

Dataset Curators

This dataset was mainly Darok's work. I (KaraKaraWitch) only assisted them with questions and the writing of the dataset card.

Licensing Information

The books themselves are in the public domain. The post-processed data produced as part of Recursal's work is licensed under CC-BY-SA.

Recursal Waifus (the banner image) are licensed under CC-BY-SA. They do not represent the related websites in any official capacity unless otherwise stated or announced by the website. You may use them as a banner image; however, you must always link back to the dataset.

Citation Information

@ONLINE{lecturegratuits,
  title         = {LectureGratuits},
  author        = {Darok and KaraKaraWitch and recursal.ai},
  year          = {2024},
  howpublished  = {\url{https://huggingface.co/datasets/recursal/Recursalberg}},
}