## Overview
The original dataset is available on the HuggingFace Hub [here](https://huggingface.co/datasets/scitail).
## Dataset curation
This is the same as the `snli_format` config of the SciTail dataset available on the HuggingFace Hub (i.e., same data and same splits).
The only differences are the following:
- selecting only the columns `["sentence1", "sentence2", "gold_label"]`
- renaming columns with the following mapping `{"sentence1": "premise", "sentence2": "hypothesis", "gold_label": "label"}`
- encoding labels with the following mapping `{"entailment": 0, "neutral": 1, "contradiction": 2}`
Note that there are 10 overlapping instances (as found by merging on the columns "label", "premise", and "hypothesis") between
the `train` and `test` splits.
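If one wants to drop those overlapping instances from `train`, a left-merge with pandas' `indicator` flag does the job. Below is a minimal sketch on toy frames (the values are illustrative, not taken from SciTail):

```python
import pandas as pd

# Toy stand-ins for the train/test splits (hypothetical data)
train = pd.DataFrame({
    "premise": ["a", "b", "c"],
    "hypothesis": ["x", "y", "z"],
    "label": [0, 1, 0],
})
test = pd.DataFrame({
    "premise": ["b", "d"],
    "hypothesis": ["y", "w"],
    "label": [1, 2],
})

# Left-merge with an indicator column, then keep only rows unique to train
merged = train.merge(test, on=["label", "premise", "hypothesis"], how="left", indicator=True)
train_dedup = merged[merged["_merge"] == "left_only"].drop(columns="_merge")
print(len(train_dedup))  # 2: the row shared with test is gone
```

The same pattern applied to the real splits would remove the 10 shared rows mentioned above.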
## Code to create the dataset
```python
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset, DatasetDict, load_dataset

# load dataset from the Hub
dd = load_dataset("scitail", "snli_format")
ds = {}
for name, df_ in dd.items():
    df = df_.to_pandas()

    # select important columns
    df = df[["sentence1", "sentence2", "gold_label"]]

    # rename columns
    df = df.rename(columns={"sentence1": "premise", "sentence2": "hypothesis", "gold_label": "label"})

    # encode labels
    df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})

    # cast to dataset
    features = Features({
        "premise": Value(dtype="string", id=None),
        "hypothesis": Value(dtype="string", id=None),
        "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
    })
    ds[name] = Dataset.from_pandas(df, features=features)

dataset = DatasetDict(ds)
dataset.push_to_hub("scitail", token="<token>")

# check overlap between splits
from itertools import combinations

for i, j in combinations(dataset.keys(), 2):
    print(
        f"{i} - {j}: ",
        pd.merge(
            dataset[i].to_pandas(),
            dataset[j].to_pandas(),
            on=["label", "premise", "hypothesis"],
            how="inner",
        ).shape[0],
    )
#> train - test: 10
#> train - validation: 0
#> test - validation: 0
```