sileod committed
Commit 3cad699
1 Parent(s): 27d9d5d

Update README.md

Files changed (1):
  1. README.md +3 -2

README.md CHANGED
@@ -87,10 +87,11 @@ train-eval-index:
 This dataset tests the capabilities of language models to correctly capture the meaning of words denoting probabilities (WEP), e.g. words like "probably", "maybe", "surely", "impossible".
 
 We used probabilistic soft logic to combine probabilistic statements expressed with WEP (WEP-Reasoning) and we also used the UNLI dataset (https://nlp.jhu.edu/unli/) to directly check whether models can detect the WEP matching human-annotated probabilities.
-The dataset can be used as natural langauge inference data (context, premise, label) or multiple choice question answering (context,valid_hypothesis, invalid_hypothesis).
+The dataset can be used as natural language inference data (context, premise, label) or multiple choice question answering (context, valid_hypothesis, invalid_hypothesis).
+Code: https://colab.research.google.com/drive/10ILEWY2-J6Q1hT97cCB3eoHJwGSflKHp?usp=sharing
 
-Accepted at Starsem2023 (The 12th Joint Conference on Lexical and Computational Semantics). Temporary citation:
+*Accepted at Starsem2023* (The 12th Joint Conference on Lexical and Computational Semantics). Temporary citation:
 
 ```bib
 @article{sileo2022probing,
 title={Probing neural language models for understanding of words of estimative probability},
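The two views mentioned in the README (NLI triples vs. multiple-choice pairs) are mechanically related: each multiple-choice row yields two NLI rows. Below is a minimal sketch of that expansion; the field names follow the README, but the example sentences and the 1/0 label encoding are assumptions, not taken from the actual dataset.

```python
def mcq_to_nli(row):
    """Expand one multiple-choice row (context, valid_hypothesis,
    invalid_hypothesis) into two NLI rows (context, premise, label).
    Label encoding is an assumption: 1 = entailed, 0 = contradicted."""
    return [
        {"context": row["context"], "premise": row["valid_hypothesis"], "label": 1},
        {"context": row["context"], "premise": row["invalid_hypothesis"], "label": 0},
    ]

# Hypothetical example row; the real dataset's text will differ.
example = {
    "context": "It is probably going to rain.",
    "valid_hypothesis": "There is a good chance of rain.",
    "invalid_hypothesis": "Rain is impossible.",
}

for nli_row in mcq_to_nli(example):
    print(nli_row["label"], nli_row["premise"])
```

The reverse direction (NLI to multiple-choice) requires grouping rows that share a context, which is why the dataset ships both formats rather than only one.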