---
annotations_creators:
- expert-generated
language:
- ja
- en
language_creators:
- expert-generated
license:
- cc-by-sa-4.0
multilinguality:
- translation
pretty_name: JSICK
size_categories:
- 1K<n<10K
source_datasets:
- extended|sick
tags:
- semantic-textual-similarity
- sts
task_categories:
- sentence-similarity
- text-classification
task_ids:
- natural-language-inference
- semantic-similarity-scoring
---
Dataset Card for JaNLI
Dataset Description
- Homepage: https://github.com/verypluming/JSICK
- Repository: https://github.com/verypluming/JSICK
- Paper: https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00518/113850/Compositional-Evaluation-on-Japanese-Textual
- Paper: https://www.jstage.jst.go.jp/article/pjsai/JSAI2021/0/JSAI2021_4J3GS6f02/_pdf/-char/ja
Dataset Summary
From the official GitHub repository:
Japanese Sentences Involving Compositional Knowledge (JSICK) Dataset. JSICK is a Japanese NLI and STS dataset constructed by manually translating the English SICK dataset (Marelli et al., 2014) into Japanese. We hope that our dataset will be useful in research for realizing more advanced models that are capable of appropriately performing multilingual compositional inference.
Languages
The language data in JSICK is in Japanese and English.
Dataset Structure
Data Instances
When loading a specific configuration, pass the configuration name to `load_dataset`:
```python
import datasets as ds

# Load the default ("base") configuration.
dataset: ds.DatasetDict = ds.load_dataset("hpprc/janli")
print(dataset)
# DatasetDict({
#     train: Dataset({
#         features: ['id', 'premise', 'hypothesis', 'label', 'heuristics', 'number_of_NPs', 'semtag'],
#         num_rows: 13680
#     })
#     test: Dataset({
#         features: ['id', 'premise', 'hypothesis', 'label', 'heuristics', 'number_of_NPs', 'semtag'],
#         num_rows: 720
#     })
# })

# Load the "original" configuration, which keeps the original column names.
dataset: ds.DatasetDict = ds.load_dataset("hpprc/janli", name="original")
print(dataset)
# DatasetDict({
#     train: Dataset({
#         features: ['id', 'sentence_A_Ja', 'sentence_B_Ja', 'entailment_label_Ja', 'heuristics', 'number_of_NPs', 'semtag'],
#         num_rows: 13680
#     })
#     test: Dataset({
#         features: ['id', 'sentence_A_Ja', 'sentence_B_Ja', 'entailment_label_Ja', 'heuristics', 'number_of_NPs', 'semtag'],
#         num_rows: 720
#     })
# })
```
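The `label` column in the `base` configuration (and `entailment_label_Ja` in `original`) is stored as an integer, as the examples in the next subsections show. Whether it is exposed as a `ClassLabel` feature is not documented here, so the sketch below (an assumption-checking example, not part of the official loading code) inspects the feature type at run time before mapping IDs back to label names:

```python
import datasets as ds

# Load only the test split of the base configuration.
test = ds.load_dataset("hpprc/janli", split="test")

# Inspect how the label column is encoded.
label_feature = test.features["label"]
print(label_feature)

# If it is a ClassLabel, integer IDs can be mapped back to label names;
# otherwise only the raw integer is available.
if isinstance(label_feature, ds.ClassLabel):
    print(label_feature.int2str(test[0]["label"]))
else:
    print(test[0]["label"])
```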
base
An example of the `base` configuration looks as follows:
```python
{
    'id': 12,
    'premise': 'θ₯θγγγγγγΌγ«ιΈζγθ¦γ¦γγ',
    'hypothesis': 'γγγγγΌγ«ιΈζγθ₯θγθ¦γ¦γγ',
    'label': 0,
    'heuristics': 'overlap-full',
    'number_of_NPs': 2,
    'semtag': 'scrambling'
}
```
original
An example of the `original` configuration looks as follows:
```python
{
    'id': 12,
    'sentence_A_Ja': 'θ₯θγγγγγγΌγ«ιΈζγθ¦γ¦γγ',
    'sentence_B_Ja': 'γγγγγΌγ«ιΈζγθ₯θγθ¦γ¦γγ',
    'entailment_label_Ja': 0,
    'heuristics': 'overlap-full',
    'number_of_NPs': 2,
    'semtag': 'scrambling'
}
```
Data Fields
base
A configuration that adopts the column names of a typical NLI dataset.
- `id`: The number of the sentence pair.
- `premise`: The premise (`sentence_A_Ja`).
- `hypothesis`: The hypothesis (`sentence_B_Ja`).
- `label`: The correct label for this sentence pair, either `entailment` or `non-entailment` (`entailment_label_Ja`); in the setting described in the paper, non-entailment = neutral + contradiction.
- `heuristics`: The heuristics (structural pattern) tag. The tags are: subsequence, constituent, full-overlap, order-subset, and mixed-subset.
- `number_of_NPs`: The number of noun phrases in a sentence.
- `semtag`: The linguistic phenomena tag.
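The tag columns above make it straightforward to slice the dataset by phenomenon or structural pattern. A minimal sketch, reusing the `scrambling` value shown in the example instance (any other `semtag` or `heuristics` value can be substituted):

```python
from collections import Counter

import datasets as ds

dataset = ds.load_dataset("hpprc/janli")

# Keep only test pairs tagged with the "scrambling" phenomenon
# (the semtag value shown in the example instance above).
scrambling = dataset["test"].filter(lambda example: example["semtag"] == "scrambling")
print(scrambling.num_rows)

# Count how many training examples carry each heuristics tag.
print(Counter(dataset["train"]["heuristics"]))
```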
original
A configuration that retains the original column names.
- `id`: The number of the sentence pair.
- `sentence_A_Ja`: The premise.
- `sentence_B_Ja`: The hypothesis.
- `entailment_label_Ja`: The correct label for this sentence pair, either `entailment` or `non-entailment`; in the setting described in the paper, non-entailment = neutral + contradiction.
- `heuristics`: The heuristics (structural pattern) tag. The tags are: subsequence, constituent, full-overlap, order-subset, and mixed-subset.
- `number_of_NPs`: The number of noun phrases in a sentence.
- `semtag`: The linguistic phenomena tag.
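Since, per the correspondences noted above, the `original` configuration is assumed to contain the same pairs as `base` under different column names, it can be renamed to the `base` scheme with `rename_columns`. This is a convenience sketch, not an official mapping shipped with the dataset:

```python
import datasets as ds

original = ds.load_dataset("hpprc/janli", name="original")

# Rename the original columns to the base-style names described above.
renamed = original.rename_columns(
    {
        "sentence_A_Ja": "premise",
        "sentence_B_Ja": "hypothesis",
        "entailment_label_Ja": "label",
    }
)
# The remaining columns (id, heuristics, number_of_NPs, semtag) are unchanged.
print(renamed["test"].column_names)
```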
Data Splits
| name     | train  | validation | test |
|----------|-------:|-----------:|-----:|
| base     | 13,680 |            |  720 |
| original | 13,680 |            |  720 |
Annotations
The annotation process for this Japanese NLI dataset involves tagging each pair (P, H) of a premise and a hypothesis with a label for its structural pattern and linguistic phenomenon. The structural relationship between premise and hypothesis sentences is classified into five patterns, each associated with a type of heuristic that can lead to incorrect predictions of the entailment relation. Additionally, 11 categories of Japanese linguistic phenomena and constructions are used to generate the five patterns of adversarial inferences.
For each linguistic phenomenon, a template for the premise sentence P is fixed, and multiple templates for hypothesis sentences H are created. In total, 144 templates for (P, H) pairs are produced. Each pair of premise and hypothesis sentences is tagged with an entailment label (entailment or non-entailment), a structural pattern, and a linguistic phenomenon label.
The JaNLI dataset is generated by instantiating each template 100 times, resulting in a total of 14,400 examples. The same number of entailment and non-entailment examples are generated for each phenomenon. The structural patterns are annotated with the templates for each linguistic phenomenon, and the ratio of entailment and non-entailment examples is not necessarily 1:1 for each pattern. The dataset uses a total of 158 words (nouns and verbs), which occur more than 20 times in the JSICK and JSNLI datasets.
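For illustration, the template-based generation described above can be sketched as follows. This is a toy example with hypothetical templates and vocabulary (the real dataset uses 144 (P, H) templates and a 158-word vocabulary), not the authors' generation code; it mirrors the `scrambling` example instance shown earlier:

```python
import random

# Hypothetical mini vocabulary; the real dataset draws on 158 nouns and verbs
# that occur frequently in the JSICK and JSNLI datasets.
subjects = ["θ₯θ", "ε₯³ζ§"]
objects = ["γγγγΌγ«ιΈζ", "γγγγΌιΈζ"]
verbs = ["θ¦γ¦γγ", "θΏ½γγγγ¦γγ"]

# One fixed premise template and one scrambled hypothesis template.
# Scrambling preserves the case particles (γ/γ), so the entailment label holds.
premise_template = "{subj}γ{obj}γ{verb}"
hypothesis_template = "{obj}γ{subj}γ{verb}"

def instantiate(rng: random.Random) -> dict:
    subj, obj, verb = rng.choice(subjects), rng.choice(objects), rng.choice(verbs)
    return {
        "premise": premise_template.format(subj=subj, obj=obj, verb=verb),
        "hypothesis": hypothesis_template.format(subj=subj, obj=obj, verb=verb),
        "label": "entailment",
        "heuristics": "overlap-full",
        "semtag": "scrambling",
    }

rng = random.Random(0)
print(instantiate(rng))
```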
Additional Information
- verypluming/JaNLI
- Hitomi Yanaka, Koji Mineshima, Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference, Proceedings of the 2021 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP2021), 2021.
Licensing Information
CC BY-SA 4.0
Citation Information
```bibtex
@InProceedings{yanaka-EtAl:2021:blackbox,
    author    = {Yanaka, Hitomi and Mineshima, Koji},
    title     = {Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference},
    booktitle = {Proceedings of the 2021 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP2021)},
    url       = {https://aclanthology.org/2021.blackboxnlp-1.26/},
    year      = {2021},
}
```
Contributions
Thanks to Hitomi Yanaka and Koji Mineshima for creating this dataset.