---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: 'Probability Words NLI'
paperswithcode_id: probability-words-nli
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
- multiple-choice
- question-answering
task_ids:
- open-domain-qa
- multiple-choice-qa
- natural-language-inference
- multi-input-text-classification
tags:
- wep
- words of estimative probability
- probability
- logical reasoning
- soft logic
- nli
- verbal probabilities
- natural-language-inference
- reasoning
- logic
train-eval-index:
- config: usnli
  task: text-classification
  task_id: multi-class-classification
  splits:
    train_split: train
    eval_split: validation
  col_mapping:
    sentence1: context
    sentence2: hypothesis
    label: label
  metrics:
  - type: accuracy
    name: Accuracy
  - type: f1
    name: F1 binary
- config: reasoning-1hop
  task: text-classification
  task_id: multi-class-classification
  splits:
    train_split: train
    eval_split: validation
  col_mapping:
    sentence1: context
    sentence2: hypothesis
    label: label
  metrics:
  - type: accuracy
    name: Accuracy
  - type: f1
    name: F1 binary
- config: reasoning-2hop
  task: text-classification
  task_id: multi-class-classification
  splits:
    train_split: train
    eval_split: validation
  col_mapping:
    sentence1: context
    sentence2: hypothesis
    label: label
  metrics:
  - type: accuracy
    name: Accuracy
  - type: f1
    name: F1 binary
---
# Dataset accompanying the article "Probing neural language models for understanding of words of estimative probability"
This dataset tests the ability of language models to correctly capture the meaning of words of estimative probability (WEP, also called verbal probabilities), i.e. words denoting probabilities such as "probably", "maybe", "surely", or "impossible".
We used probabilistic soft logic to combine probabilistic statements expressed with WEP (WEP-Reasoning), and we also used the [UNLI](https://nlp.jhu.edu/unli/) dataset to directly check whether models can detect the WEP matching human-annotated probabilities, following the word-probability mapping of [Fagen-Ulmschneider, 2018](https://github.com/wadefagen/datasets/tree/master/Perception-of-Probability-Words).
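For intuition, probabilistic soft logic relaxes logical conjunction with the Łukasiewicz t-norm; the minimal sketch below shows how two verbally-expressed probabilities could be composed. The word-probability values are invented for illustration and are not the human perception annotations used by the dataset.
```python
def lukasiewicz_and(p_a: float, p_b: float) -> float:
    """Łukasiewicz t-norm: the tight lower bound on P(A and B)."""
    return max(0.0, p_a + p_b - 1.0)

# Hypothetical WEP-to-probability mapping (made-up values for illustration).
wep = {"almost certain": 0.95, "probable": 0.70, "unlikely": 0.20}

# Composing two WEP-qualified premises: if "A is almost certain" and
# "B is probable", then P(A and B) is at least 0.65.
print(f"{lukasiewicz_and(wep['almost certain'], wep['probable']):.2f}")  # 0.65
```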
The dataset can be used as natural language inference data (context, hypothesis, label) or as multiple-choice question answering data (context, valid_hypothesis, invalid_hypothesis).
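A minimal loading sketch (the Hub dataset ID below is an assumption; the config and column names follow the metadata above):
```python
from datasets import load_dataset

# Dataset ID is assumed here; the configs declared in the card metadata
# are usnli, reasoning-1hop, and reasoning-2hop.
data = load_dataset("sileod/probability_words_nli", "reasoning-1hop")
ex = data["train"][0]

# NLI view: does the hypothesis follow from the context?
print(ex["context"], ex["hypothesis"], ex["label"])

# Multiple-choice view (column names as described above).
print(ex["context"], ex["valid_hypothesis"], ex["invalid_hypothesis"])
```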
Code: [colab](https://colab.research.google.com/drive/10ILEWY2-J6Q1hT97cCB3eoHJwGSflKHp?usp=sharing)
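The `train-eval-index` above evaluates with accuracy and binary F1 on the validation split; a sketch of computing these metrics with scikit-learn (the labels and predictions are placeholders):
```python
from sklearn.metrics import accuracy_score, f1_score

# Placeholder labels/predictions; in practice the predictions come from a
# classifier fine-tuned on (context, hypothesis) pairs as mapped above.
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0]

print("accuracy :", accuracy_score(y_true, y_pred))
print("F1 binary:", f1_score(y_true, y_pred))
```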
# Citation
https://arxiv.org/abs/2211.03358
```bibtex
@inproceedings{sileo-moens-2023-probing,
title = "Probing neural language models for understanding of words of estimative probability",
author = "Sileo, Damien and
    Moens, Marie-Francine",
booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.starsem-1.41",
doi = "10.18653/v1/2023.starsem-1.41",
pages = "469--476",
}
```