---
license: cc-by-4.0
task_categories:
  - question-answering
language:
  - en
tags:
  - multi-document NFQA
  - non-factoid QA
pretty_name: wikihowqa
size_categories:
  - 10K<n<100K
---

Dataset Card for WikiHowQA

Dataset Description

WikiHowQA is a collection of 'how-to' content from WikiHow, transformed into a rich dataset of 11,746 human-authored answers and 74,527 supporting documents. Designed for researchers, it presents a unique opportunity to tackle the challenges of creating comprehensive answers from multiple documents and of grounding those answers in the real-world context provided by the supporting documents.

Dataset Structure

Data Fields

  • article_id: An integer identifier for the article, corresponding to the article_id from the WikiHow API.
  • question: The non-factoid instructional question.
  • answer: The human-written answer to the question, corresponding to the article summary on the WikiHow website.
  • related_document_urls_wayback_snapshots: A list of URLs to web archive snapshots of related documents, corresponding to the references in the WikiHow article.
  • split: The split of the dataset that the instance belongs to ('train', 'validation', or 'test').
  • cluster: An integer identifier for the cluster that the instance belongs to.
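
Taken together, the fields map naturally onto a typed record. The following is a minimal sketch, not an official schema; the record name is hypothetical, but the field names and types are exactly as documented above:

from typing import List, TypedDict

# Hypothetical record type for one dataset instance.
class WikiHowQAInstance(TypedDict):
    article_id: int    # WikiHow API article identifier
    question: str      # the non-factoid instructional question
    answer: str        # human-written answer (the article summary)
    related_document_urls_wayback_snapshots: List[str]  # snapshot URLs
    split: str         # 'train', 'validation', or 'test'
    cluster: int       # identifier of the cluster the instance belongs to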

Data Instances

An example instance from the WikiHowQA dataset:

{
  'article_id': 1353800,
  'question': 'How To Cook Pork Tenderloin',
  'answer': 'To cook pork tenderloin, put it in a roasting pan and cook it in the oven for 55 minutes at 400 degrees Fahrenheit, turning it over halfway through. You can also sear the pork tenderloin on both sides in a skillet before putting it in the oven, which will reduce the cooking time to 15 minutes. If you want to grill pork tenderloin, start by preheating the grill to medium-high heat. Then, cook the tenderloin on the grill for 30-40 minutes over indirect heat, flipping it occasionally.',
  'related_document_urls_wayback_snapshots': ['http://web.archive.org/web/20210605161310/https://www.allrecipes.com/recipe/236114/pork-roast-with-the-worlds-best-rub/', 'http://web.archive.org/web/20210423074902/https://www.bhg.com/recipes/how-to/food-storage-safety/using-a-meat-thermometer/', ...],
  'split': 'train',
  'cluster': 2635
}
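
Since the supporting documents are distributed as web archive links rather than raw text, their HTML must be fetched from the Wayback Machine. Below is a minimal sketch, assuming the third-party requests library; the timeout and preview length are illustrative choices, not part of the dataset tooling:

import requests

# Fetch the archived HTML of one supporting document snapshot.
url = ('http://web.archive.org/web/20210605161310/'
       'https://www.allrecipes.com/recipe/236114/'
       'pork-roast-with-the-worlds-best-rub/')
response = requests.get(url, timeout=30)
response.raise_for_status()   # raise on HTTP errors
html = response.text          # raw HTML of the archived page
print(html[:200])             # preview the first 200 characters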

Dataset Statistics

  • Number of human-authored answers: 11,746
  • Number of supporting documents: 74,527
  • Average number of documents per question: 6.3
  • Average number of sentences per answer: 3.9
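
These averages can be re-derived from the QA file itself. A rough sketch, assuming the dataset has been loaded into a list of dictionaries as shown in the Data Loading section below; the naive sentence split only approximates the reported 3.9:

# Recompute summary statistics from a loaded list of instances.
num_questions = len(dataset)
num_documents = sum(len(d['related_document_urls_wayback_snapshots'])
                    for d in dataset)

# Naive sentence count: split answers on '.' (approximation only).
total_sentences = sum(
    len([s for s in d['answer'].split('.') if s.strip()])
    for d in dataset)

print(f'questions: {num_questions}, documents: {num_documents}')
print(f'avg documents/question: {num_documents / num_questions:.1f}')
print(f'avg sentences/answer: {total_sentences / num_questions:.1f}')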

Dataset Information

The WikiHowQA dataset is divided into two parts: the QA part and the Document Content part. The QA part contains the questions, answers, and links to web archive snapshots of the related HTML pages, and can be downloaded from this repository. The Document Content part contains the parsed HTML content and is accessible on request, after signing a Data Transfer Agreement with RMIT University.

Each dataset instance includes a question, a set of related documents, and a human-authored answer. The questions are non-factoid, requiring comprehensive, multi-sentence answers. The related documents provide the necessary information to generate an answer.

Dataset Usage

The dataset is designed for researchers working on multi-document non-factoid question answering: generating comprehensive answers from multiple documents and grounding those answers in the real-world context provided by the supporting documents.

Additional Information

Dataset Curators

The WikiHowQA dataset was curated by researchers at RMIT University.

Licensing Information

The QA part is distributed under the Creative Commons Attribution 4.0 (CC BY 4.0) license. The Document Content part, which contains the parsed HTML content, is accessible on request after signing a Data Transfer Agreement with RMIT University; the agreement permits free use of the dataset for research purposes. The form to download and sign is available on the dataset website.

Citation Information

Please cite the following paper if you use this dataset:

@inproceedings{bolotova2023wikihowqa,
  title     = {WikiHowQA: A Comprehensive Benchmark for Multi-Document Non-Factoid Question Answering},
  author    = {Bolotova, Valeriia and Blinov, Vladislav and Filippova, Sofya and Scholer, Falk and Sanderson, Mark},
  booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics},
  year      = {2023}
}

Considerations for Using the Data

Social Impact of the Dataset

The WikiHowQA dataset is a rich resource for researchers interested in question answering, information retrieval, and natural language understanding tasks. It can help in developing models that provide comprehensive answers to how-to questions, which can be beneficial in various applications such as customer support, tutoring systems, and personal assistants. However, as with any dataset, the potential for misuse or unintended consequences exists. For example, a model trained on this dataset might be used to generate misleading or incorrect answers if not properly validated.

Discussion of Biases

The WikiHowQA dataset is derived from WikiHow, a community-driven platform. While WikiHow has guidelines to ensure the quality and neutrality of its content, biases could still be present due to the demographic and ideological characteristics of its contributors. Users of the dataset should be aware of this potential bias.

Other Known Limitations

The dataset only contains 'how-to' questions and their answers. Therefore, it may not be suitable for tasks that require understanding of other types of questions (e.g., why, what, when, who, etc.). Additionally, while the dataset contains a large number of instances, there may still be topics or types of questions that are underrepresented.

Data Loading

There are two primary ways to load the QA dataset part:

  1. Directly from the file. If you have the .jsonl file locally, you can load the dataset with the following Python code:

import json

# Read the JSON Lines file: one JSON object (one instance) per line.
dataset = []
with open('wikiHowNFQA.jsonl') as f:
    for line in f:
        dataset.append(json.loads(line))

This will result in a list of dictionaries, each representing a single instance in the dataset.
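
Because every instance carries its own split field, a flat list loaded this way can be partitioned into the train/validation/test splits, for example:

from collections import defaultdict

# Group instances by their 'split' field.
splits = defaultdict(list)
for instance in dataset:
    splits[instance['split']].append(instance)

print({name: len(rows) for name, rows in splits.items()})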

  2. From the Hugging Face Datasets Hub:

The dataset is hosted on the Hugging Face Hub, so you can load it directly using the datasets library:

from datasets import load_dataset
dataset = load_dataset('Lurunchik/WikiHowNFQA')

This will return a DatasetDict object, which is a dictionary-like object that maps split names (e.g., 'train', 'validation', 'test') to Dataset objects. You can access a specific split like so: dataset['train'].
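
For example, building on the DatasetDict loaded above (field names as documented in Data Fields):

train = dataset['train']    # a datasets.Dataset object
example = train[0]          # first training instance
print(example['question'])
print(len(example['related_document_urls_wayback_snapshots']))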