---
language:
- en
license: mit
size_categories:
- 10K<n<100K
pretty_name: siqa
tags:
- multiple-choice
- benchmark
- evaluation
configs:
- config_name: default
  data_files:
  - split: eval
    path: data/eval-*
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: id
    dtype: int32
  - name: question
    dtype: string
  - name: choices
    sequence: string
  - name: answerID
    dtype: int32
  splits:
  - name: eval
    num_bytes: 380631
    num_examples: 1954
  - name: train
    num_bytes: 6460849
    num_examples: 33410
  download_size: 3900341
  dataset_size: 6841480
---
# siqa Dataset

## Dataset Information

- **Original Hugging Face Dataset**: `lighteval/siqa`
- **Subset**: `default`
- **Evaluation Split**: `validation`
- **Training Split**: `train`
- **Task Type**: `multiple_choice`
- **Processing Function**: `process_siqa`
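
For reference, the upstream source can be inspected directly and compared against the processed records. A minimal sketch, assuming `lighteval/siqa` is still available on the Hub and exposes the raw `context`, `question`, `answerA`/`answerB`/`answerC`, and `label` fields consumed by the processing function below:

```python
from datasets import load_dataset

# Load the upstream dataset that this card was processed from.
# Its "validation" split corresponds to the "eval" split of DatologyAI/siqa.
source = load_dataset("lighteval/siqa", split="validation")

# Inspect one raw, unprocessed example
print(source[0])
```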
## Processing Function

The following function was used to process the dataset from its original source:

```python
from typing import Dict, List, Tuple


def process_siqa(example: Dict) -> Tuple[str, List[str], int]:
    """Process a SocialIQA example into (query, choices, answer index)."""
    # Combine the social context and the question into a single query string
    query = f"{example['context']} {example['question']}"

    # Collect the three candidate answers in their original order
    original_choices = [example['answerA'], example['answerB'], example['answerC']]

    # `label` is a 1-based index, so subtract 1 to select the gold answer
    correct_answer = original_choices[int(example["label"]) - 1]

    # Index of the correct answer within the returned choices
    # (the choices keep their original order here)
    answer_index = original_choices.index(correct_answer)

    return query, original_choices, answer_index
```
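
To make the label mapping concrete, here is a small worked invocation. The record below mirrors the example shown under Dataset Structure; the exact split of its text into `context` and `question` is illustrative and follows the field layout `process_siqa` expects, rather than anything stored in this repository:

```python
raw_example = {
    "context": "Tracy didn't go home that evening and resisted Riley's attacks.",
    "question": "What does Tracy need to do before this?",
    "answerA": "make a new plan",
    "answerB": "Go home and see Riley",
    "answerC": "Find somewhere to go",
    "label": "3",  # 1-based: answerC is the gold answer
}

query, choices, answer_index = process_siqa(raw_example)
print(query)         # context and question joined by a single space
print(choices)       # the three candidates, in their original order
print(answer_index)  # 2 -> stored as `answerID` in this dataset
```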
## Overview

This repository contains a processed version of the SIQA (SocialIQA) dataset. Each example is formatted as a multiple-choice question with three candidate answers and a single correct choice.
## Dataset Structure

Each example in the dataset contains the following fields:

```json
{
  "id": 0,
  "question": "Tracy didn't go home that evening and resisted Riley's attacks. What does Tracy need to do before this?",
  "choices": [
    "make a new plan",
    "Go home and see Riley",
    "Find somewhere to go"
  ],
  "answerID": 2
}
```
## Fields Description

- `id`: Unique identifier for each example
- `question`: The question or prompt text
- `choices`: List of possible answers
- `answerID`: Index of the correct answer in the `choices` list (0-based); see the formatting sketch below
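
These fields map directly onto a conventional multiple-choice prompt. A minimal sketch of one way to render an example; the lettering scheme is an illustrative choice, not something prescribed by the dataset:

```python
def format_example(example: dict) -> str:
    """Render one processed example as a lettered multiple-choice prompt."""
    letters = ["A", "B", "C"]
    lines = [example["question"]]
    for letter, choice in zip(letters, example["choices"]):
        lines.append(f"{letter}. {choice}")
    # `answerID` is a 0-based index into `choices`
    lines.append(f"Answer: {letters[example['answerID']]}")
    return "\n".join(lines)
```

Applied to the example above, this prints the question, the three lettered choices, and `Answer: C`.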
## Loading the Dataset

You can load this dataset using the Hugging Face `datasets` library. Two splits are provided: `eval` (1,954 examples) and `train` (33,410 examples).

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("DatologyAI/siqa")

# Access the data
for example in dataset['train']:
    print(example)
```
## Example Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("DatologyAI/siqa")

# Get a sample question
sample = dataset['train'][0]

# Print the question, its choices, and the correct answer
print("Question:", sample['question'])
print("Choices:")
for idx, choice in enumerate(sample['choices']):
    print(f"{idx}. {choice}")
print("Correct Answer:", sample['choices'][sample['answerID']])
```
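
Because `answerID` stores the gold index, scoring any predictor against the `eval` split is straightforward. A minimal sketch, using a placeholder predictor that always picks the first choice; `predict` is a hypothetical stand-in, not part of this repository, and should be replaced with a real model to get meaningful numbers:

```python
from datasets import load_dataset

dataset = load_dataset("DatologyAI/siqa")
eval_split = dataset["eval"]


def predict(question: str, choices: list) -> int:
    # Placeholder: always choose the first option
    return 0


correct = 0
for example in eval_split:
    pred = predict(example["question"], example["choices"])
    correct += int(pred == example["answerID"])

print(f"Accuracy: {correct / len(eval_split):.3f}")
```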