Commit 216f16b by lewtun (HF staff)
Parents (2): 67d5b13 401a4da

Merge branch 'main' of https://huggingface.co/datasets/SetFit/amazon_counterfactual

Files changed (1):
  1. README.md (added, +30 -0)

# Amazon Multilingual Counterfactual Dataset

The dataset contains sentences from Amazon customer reviews (sampled from the Amazon product review dataset) annotated for counterfactual detection (CFD) binary classification. Counterfactual statements describe events that did not or cannot take place. They can be identified as statements of the form "If p were true, then q would be true", i.e. assertions whose antecedent (p) and consequent (q) are known or assumed to be false.

The key features of this dataset are:

* The dataset is multilingual and contains sentences in English, German, and Japanese.
* The labeling was done by professional linguists, and high annotation quality was ensured.
* The dataset is supplemented with annotation guidelines and definitions developed by professional linguists. We also provide clue word lists that are typical of counterfactual sentences and were used for initial data filtering; these lists were likewise compiled by professional linguists.

Please see the [paper](https://arxiv.org/abs/2104.06893) for data statistics and a detailed description of the data collection and annotation process.

GitHub repo URL: https://github.com/amazon-research/amazon-multilingual-counterfactual-dataset

## Usage

You can load each of the languages as follows:

```python
from datasets import get_dataset_config_names, load_dataset

dataset_id = "SetFit/amazon_counterfactual"
# Returns ['all_languages', 'de', 'en', 'jp']
configs = get_dataset_config_names(dataset_id)
# Load the English subset
dset = load_dataset(dataset_id, name="en")
# Load all languages
dset = load_dataset(dataset_id, name="all_languages")
```
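
Once loaded, you can inspect the splits and records before using the data. The snippet below is a minimal sketch, assuming the returned `DatasetDict` contains a `train` split; column names such as `text` and `label` are assumptions here, so read them from the reported features rather than relying on them.

```python
from datasets import load_dataset

# Minimal inspection sketch.
# Assumes a "train" split exists; column names (e.g. "text", "label")
# are assumptions, so read them from the features instead of hard-coding.
dset = load_dataset("SetFit/amazon_counterfactual", name="en")

print(dset)                    # available splits and row counts
print(dset["train"].features)  # actual column names and types

sample = dset["train"][0]      # first training example as a dict
print(sample)
```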