---
license: odc-by
task_categories:
  - text-generation
  - text2text-generation
language:
  - en
---

# C4 English Tokenized Samples

This dataset contains tokenized English samples from the C4 (Colossal Clean Crawled Corpus) dataset for natural language processing (NLP) tasks.

The first 125,000 entries from the `en` split of [allenai/c4](https://huggingface.co/datasets/allenai/c4) were tokenized using spaCy's `en_core_web_sm` model, and the resulting tokens were joined with single spaces.

## Features

- `text`: the original text from C4
- `tokenized`: the tokenized, space-joined text
- `num_tokens`: the number of tokens after tokenization
- `num_punct_tokens`: the number of punctuation tokens after tokenization

## Example

```json
{
  "text": "ALDUS MANUTIUS AND HIS THESAURUS CORNUCOPIAE OF 1496.\nSyracuse (1958) . 7.5 x 4.25, cloth, 32 pp, a v.g. copy [...]",
  "tokenized": "ALDUS MANUTIUS AND HIS THESAURUS CORNUCOPIAE OF 1496 . \n Syracuse ( 1958 ) . 7.5 x 4.25 , cloth , 32 pp , a v.g . copy [...]",
  "num_tokens": 84,
  "num_punct_tokens": 19
}
```

## Usage

This dataset can be useful for:

- Text classification
- Language modeling
- Sentiment analysis
- Other NLP applications that require tokenized English text

Researchers and developers can use this dataset to jumpstart their projects without having to run the initial tokenization step themselves.

## Licence

This dataset is licensed under the ODC-BY (Open Data Commons Attribution) licence.