language:
  - ja
license: other
size_categories:
  - 1M<n<10M
task_categories:
  - text-generation
pretty_name: Washi
dataset_info:
  - config_name: 200k
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 5315275997
        num_examples: 200000
    download_size: 2841685460
    dataset_size: 5315275997
  - config_name: 20m
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 105176099351
        num_examples: 20000000
    download_size: 60214844912
    dataset_size: 105176099351
  - config_name: 400m
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 24693584215
        num_examples: 4000000
    download_size: 14134783813
    dataset_size: 24693584215
  - config_name: 4m
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 24693584215
        num_examples: 4000000
    download_size: 14134783813
    dataset_size: 24693584215
configs:
  - config_name: 200k
    data_files:
      - split: train
        path: 200k/train-*
  - config_name: 20m
    data_files:
      - split: train
        path: 20m/train-*
  - config_name: 400m
    data_files:
      - split: train
        path: 400m/train-*
  - config_name: 4m
    data_files:
      - split: train
        path: 4m/train-*
tags:
  - nlp
  - pretrain
  - llm

Washi (a kind of traditional Japanese paper)

This dataset is sampled from the Japanese (ja) subset of uonlp/CulturaX. Using DSIR (Data Selection for Language Models via Importance Resampling), we selected the documents closest to the Japanese subset of csebuetnlp/xlsum and to systemk/aozorabunko_chunked (cleaned text from the Aozora Bunko collection of public-domain modern Japanese literature), amounting to approximately 5% of the corpus.
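
The selection procedure can be illustrated with a minimal, self-contained sketch of the DSIR idea (hashed n-gram importance weights followed by Gumbel top-k resampling). This is not the pipeline actually used to build Washi; the character-bigram featurization, bucket count, and toy documents below are illustrative assumptions only.

```python
import hashlib
import math
import random

NUM_BUCKETS = 8192  # hashed feature dimension (illustrative choice)


def hashed_ngram_counts(text, n=2):
    """Count character n-grams of a document into a fixed number of hash buckets."""
    counts = [0] * NUM_BUCKETS
    grams = [text[i:i + n] for i in range(len(text) - n + 1)] or [text]
    for g in grams:
        bucket = int(hashlib.md5(g.encode("utf-8")).hexdigest(), 16) % NUM_BUCKETS
        counts[bucket] += 1
    return counts


def fit_bucket_distribution(docs):
    """Fit a smoothed categorical distribution over hash buckets for a corpus."""
    totals = [1.0] * NUM_BUCKETS  # add-one smoothing
    for doc in docs:
        for bucket, count in enumerate(hashed_ngram_counts(doc)):
            totals[bucket] += count
    z = sum(totals)
    return [t / z for t in totals]


def log_importance_weight(doc, p_target, p_raw):
    """log p_target(x) - log p_raw(x) under the bag-of-hashed-ngrams model."""
    weight = 0.0
    for bucket, count in enumerate(hashed_ngram_counts(doc)):
        if count:
            weight += count * (math.log(p_target[bucket]) - math.log(p_raw[bucket]))
    return weight


def dsir_select(raw_docs, target_docs, k, seed=0):
    """Resample k documents from raw_docs, biased toward the target corpus (Gumbel top-k)."""
    rng = random.Random(seed)
    p_raw = fit_bucket_distribution(raw_docs)
    p_target = fit_bucket_distribution(target_docs)
    scored = []
    for doc in raw_docs:
        u = max(rng.random(), 1e-12)       # avoid log(0)
        gumbel = -math.log(-math.log(u))   # Gumbel noise turns top-k into sampling
        scored.append((log_importance_weight(doc, p_target, p_raw) + gumbel, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:k]]


if __name__ == "__main__":
    # Toy data: a raw pool mixing literary text and spam, and a literary target corpus.
    raw_pool = [
        "吾輩は猫である。名前はまだ無い。",
        "送料無料！今すぐクリックして格安通販をチェック！",
        "国境の長いトンネルを抜けると雪国であった。",
    ]
    target = ["吾輩は猫である。", "メロスは激怒した。"]
    print(dsir_select(raw_pool, target, k=2))
```

In the actual construction, the importance weights would be computed over the full CulturaX-ja pool and roughly 5% of it resampled, as described above.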

With the release of several multilingual datasets, the amount of low-quality Japanese data has grown sharply. However, traditional data-cleaning methods for Japanese, based on blacklists and rules, still let significant noise through.

We speculate that, particularly when fine-tuning a predominantly English-focused Large Language Model (LLM), the quality of the data matters more than its quantity. We created this dataset to validate this hypothesis.
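
For reference, the configurations listed in the metadata above can be loaded by name with the Hugging Face datasets library. The repository id below is a placeholder for wherever this dataset is hosted.

```python
from datasets import load_dataset

# "your-namespace/washi" is a placeholder repository id; substitute the actual one.
ds = load_dataset("your-namespace/washi", "200k", split="train")
print(ds)                   # a single "text" column, 200,000 rows
print(ds[0]["text"][:200])  # preview the first document

# The larger configs ("4m", "20m") can be streamed to avoid a full download.
stream = load_dataset("your-namespace/washi", "20m", split="train", streaming=True)
for example in stream.take(3):
    print(len(example["text"]))
```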

Dataset Details

Dataset Description

  • Language(s) (NLP): Japanese
  • License: odc-by / cc0-1.0

License Information

The license terms for Washi strictly follow those of uonlp/CulturaX. Please refer to both of the licenses listed above when using this dataset.

Uses

Direct Use

[More Information Needed]

Out-of-Scope Use

[More Information Needed]

Dataset Structure

[More Information Needed]

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Data Collection and Processing

[More Information Needed]

Who are the source data producers?

[More Information Needed]

Annotations [optional]

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Bias, Risks, and Limitations

[More Information Needed]

Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Dataset Card Authors [optional]

[More Information Needed]

Dataset Card Contact

[More Information Needed]