---
language:
  - fr
license:
  - cc-by-sa-4.0
task_categories:
  - text-generation
  - fill-mask
task_ids:
  - language-modeling
  - masked-language-modeling
configs:
  - config_name: default
    data_files:
      - split: train
        path: '*/20231201/*.parquet'
  - config_name: fr
    data_files:
      - split: train
        path: fr/20231201/*.parquet
  - config_name: sample
    data_files:
      - split: train
        path: fr/20231201/train-000000-of-000032.parquet
dataset_info:
  - config_name: fr
    features:
      - name: id
        dtype: int32
      - name: url
        dtype: string
      - name: title
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 3274591809
        num_examples: 185700
    download_size: 1934221408
    dataset_size: 3274591809
  - config_name: sample
    features:
      - name: id
        dtype: int32
      - name: url
        dtype: string
      - name: title
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 123744195
        num_examples: 5803
    download_size: 72062489
    dataset_size: 123744195
---

# Plain text of Wikisource

## Dataset Description

This dataset is a plain-text version of pages from wikisource.org in French. The text contains no HTML tags or wiki templates; it keeps only Markdown syntax for headers, lists and tables. See "Notes on data formatting" below for more details.

It was created by LINAGORA and OpenLLM France from the Wikimedia dumps, using code in https://github.com/OpenLLM-France/wikiplaintext.

## Size

The amount of data for the latest dump (20231201) is:

|              | French        |
|--------------|---------------|
| # documents  | 185 700       |
| # paragraphs | 585 700       |
| # words      | 523 310 649   |
| # characters | 3 079 850 209 |
| size on disk | 1.9G          |
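From these totals one can derive rough per-document statistics (a small sketch; the figures come straight from the table above):

```python
# Totals for the 20231201 French dump, taken from the size table above
n_documents = 185_700
n_words = 523_310_649
n_characters = 3_079_850_209

# Average document length, in words
avg_words_per_doc = n_words // n_documents
print(avg_words_per_doc)  # 2818

# Average word length (including surrounding punctuation/whitespace conventions)
avg_chars_per_word = round(n_characters / n_words, 2)
print(avg_chars_per_word)  # 5.89
```

So documents are long on average (close to 3 000 words), which is worth keeping in mind when chunking for language-model training.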

## Example use (Python)

Load the full dataset:

```python
import datasets

ds = datasets.load_dataset("OpenLLM-France/wikisource", streaming=True, split="train")
```

Load only a small subset:

```python
ds = datasets.load_dataset("OpenLLM-France/wikisource", "sample", split="train")
```

A "repeated_headers" version of the dataset is also available, in which headers are repeated before each section (see https://huggingface.co/datasets/OpenLLM-France/wikipedia#alternative-markdown-syntax). It can be loaded with:

```python
ds = datasets.load_dataset("OpenLLM-France/wikisource", revision="repeated_headers", split="train")
```

## Data fields

The data fields are the same across all configurations:

- `id` (int): ID of the page.
- `url` (str): URL of the page.
- `title` (str): Title of the page.
- `text` (str): Text content of the page.
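To show how the fields fit together, here is a minimal sketch working on a hypothetical record with this schema (the values are illustrative, not taken from the actual dataset), including how the Markdown headers mentioned above can be pulled out of `text`:

```python
# A hypothetical record following the dataset schema (id, url, title, text).
# The values are made up for illustration; they are not real dataset entries.
record = {
    "id": 12345,
    "url": "https://fr.wikisource.org/wiki/Exemple",
    "title": "Exemple",
    "text": "# Exemple\n\nPremier paragraphe.\n\n## Chapitre 1\n\nTexte du chapitre.",
}

def extract_headers(text: str) -> list[str]:
    """Collect the Markdown-style header lines from a page's text field."""
    return [line for line in text.splitlines() if line.startswith("#")]

print(extract_headers(record["text"]))  # ['# Exemple', '## Chapitre 1']
```

The same function can be applied to the `text` field of each record yielded by `datasets.load_dataset(...)` above.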

## Notes on data formatting

See OpenLLM-France/wikipedia.fr.

## License

This dataset is distributed under the Creative Commons Attribution-ShareAlike 4.0 International License.

## Acknowledgements

This dataset was created by Jérôme Louradour on behalf of LINAGORA and OpenLLM France.

Many thanks to the Wikimedia Foundation for providing the data and useful advice, in particular to Isaac Johnson, Albert Villanova and Rémy Gerbet.

## Citation

```bibtex
@online{wikisource_fr_dump,
    author = "Jérôme Louradour, OpenLLM-France, LINAGORA Labs",
    title  = "Plain text of Wikisource",
    url    = "https://huggingface.co/datasets/OpenLLM-France/wikisource"
}
```