---
language:
- en
- multilingual
size_categories:
- 10M<n<100M
task_categories:
- feature-extraction
- sentence-similarity
pretty_name: WikiTitles
tags:
- sentence-transformers
dataset_info:
  features:
  - name: english
    dtype: string
  - name: non_english
    dtype: string
  splits:
  - name: train
    num_bytes: 755332378
    num_examples: 14700458
  download_size: 685053033
  dataset_size: 755332378
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Dataset Card for Parallel Sentences - WikiTitles

This dataset contains parallel sentences (i.e. an English sentence paired with the same sentence in another language) for numerous languages. Most of the sentences originate from the OPUS website. In particular, this dataset contains the WikiTitles dataset.
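As a minimal sketch of how to inspect the data, assuming the dataset is published on the Hugging Face Hub under `sentence-transformers/parallel-sentences-wikititles` (the repository id is inferred from the related-dataset names below):

```python
from datasets import load_dataset

# Assumed Hub repository id; adjust if the dataset lives elsewhere.
dataset = load_dataset("sentence-transformers/parallel-sentences-wikititles", split="train")

# Each row pairs an English title with its counterpart in another language.
print(dataset[0])  # e.g. {"english": "...", "non_english": "..."}
```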
## Related Datasets
The following datasets are also a part of the Parallel Sentences collection:
- parallel-sentences-europarl
- parallel-sentences-global-voices
- parallel-sentences-muse
- parallel-sentences-jw300
- parallel-sentences-news-commentary
- parallel-sentences-opensubtitles
- parallel-sentences-talks
- parallel-sentences-tatoeba
- parallel-sentences-wikimatrix
- parallel-sentences-wikititles
These datasets can be used to train multilingual sentence embedding models. For more information, see sbert.net - Multilingual Models.
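As a rough illustration, parallel pairs like these can be fed to a contrastive loss in which each non-English sentence acts as the positive for its English counterpart. Below is a minimal sketch using the sentence-transformers v3 training API with in-batch negatives; the starting checkpoint and the choice of loss are assumptions for illustration, not a prescribed recipe:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Assumed starting checkpoint; any multilingual encoder could stand in here.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# The two columns map to (anchor, positive) pairs for the in-batch-negatives loss.
train_dataset = load_dataset("sentence-transformers/parallel-sentences-wikititles", split="train")
loss = MultipleNegativesRankingLoss(model)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```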
## Dataset Stats

- Columns: "english", "non_english"
- Column types: str, str
- Examples:
- Collection strategy: Processing the raw data from parallel-sentences and formatting it in Parquet.
- Deduplicated: No
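Since the pairs are not deduplicated, an exact-match pass may be useful before training. A minimal sketch with the datasets library, keeping the first occurrence of each pair (the helper name is hypothetical):

```python
from datasets import load_dataset

dataset = load_dataset("sentence-transformers/parallel-sentences-wikititles", split="train")

seen = set()

def first_occurrence(example):
    """Keep a row only the first time its exact (english, non_english) pair appears."""
    key = (example["english"], example["non_english"])
    if key in seen:
        return False
    seen.add(key)
    return True

# Note: the stateful closure requires single-process filtering (num_proc=1, the default).
deduplicated = dataset.filter(first_occurrence)
```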