---
language:
- en
license: mit
size_categories:
- 100K<n<1M
task_categories:
- question-answering
pretty_name: WebSRC
dataset_info:
  features:
  - name: domain
    dtype: string
  - name: page_id
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: image
    dtype: string
  splits:
  - name: train
    num_bytes: 18678357896
    num_examples: 307315
  - name: dev
    num_bytes: 3401184457
    num_examples: 52826
  download_size: 536103491
  dataset_size: 22079542353
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: dev
    path: data/dev-*
tags:
- webdataset
---
# Dataset Card for WebSRC v1.0
WebSRC v1.0 is a dataset for reading comprehension on structural web pages.
The task is to answer questions about web pages, which requires a system to have a comprehensive understanding of both the spatial and logical structure of the page.
WebSRC consists of 6.4K web pages and 400K question-answer pairs about web pages.
This cached copy of the dataset is focused on Q&A using the web screenshots (HTML and other metadata are omitted).
Questions in WebSRC were created for each web page segment; answers are either text spans from the web pages or yes/no.
For more details, please refer to the paper [WebSRC: A Dataset for Web-Based Structural Reading Comprehension](https://arxiv.org/abs/2101.09465). The Leaderboard of WebSRC v1.0 can be found [here](https://x-lance.github.io/WebSRC/#).
The original [GitHub Repo](https://github.com/X-LANCE/WebSRC-Baseline/tree/master?tab=readme-ov-file) is also available.
This flat version of the dataset was specifically compiled to aid Large Multimodal Model (LMM) development, especially in digital domains that need to reason about screens.
## Structure
- `domain`: str, broad category of the website
- `page_id`: str, unique ID for the particular page
- `question`: str, the question to answer
- `answer`: str, the actual answer
- `image`: str, a base64-encoded string of the image bytes

The `image` field can be decoded back to a `PIL.Image` with:
```python
import base64
import io

from PIL import Image


def decode_base64_to_image(base64_string):
    """Decode a base64-encoded string back into a PIL image."""
    img_data = base64.b64decode(base64_string)
    img = Image.open(io.BytesIO(img_data))
    return img
```
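For example, a minimal loading sketch using the `datasets` library (the Hub repo id below is a placeholder assumption; substitute this dataset's actual path):

```python
from datasets import load_dataset

# Placeholder repo id -- replace with this dataset's actual Hub path.
ds = load_dataset("rootsautomation/websrc", split="dev", streaming=True)

example = next(iter(ds))
img = decode_base64_to_image(example["image"])  # helper defined above
print(example["domain"], example["page_id"])
print(example["question"], "->", example["answer"])
print(img.size)
```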
## Data Statistics
Questions are roughly divided into three categories: KV, Comparison, and Table.
The detailed definitions can be found in the original
[paper](https://arxiv.org/abs/2101.09465). The numbers of websites, webpages,
and QAs corresponding to the three categories are as follows:

Type | # Websites | # Webpages | # QAs
---- | ---------- | ---------- | -----
KV | 34 | 3,207 | 168,606
Comparison | 15 | 1,339 | 68,578
Table | 21 | 1,901 | 163,314
The statistics of the dataset splits are as follows:

Split | # Websites | # Webpages | # QAs
----- | ---------- | ---------- | -----
Train | 50 | 4,549 | 307,315
Dev | 10 | 913 | 52,826
Test | 10 | 985 | 40,357
Note: The test split is _not_ included in this upload. See the original repo for instructions on compiling the test set and on obtaining test-split scores via submission.
## Reference
If you use any source code or datasets included in this repository in your work,
please cite the corresponding paper. The BibTeX entry is listed below:
```bibtex
@inproceedings{chen-etal-2021-websrc,
    title = "{W}eb{SRC}: A Dataset for Web-Based Structural Reading Comprehension",
    author = "Chen, Xingyu and
      Zhao, Zihan and
      Chen, Lu and
      Ji, JiaBao and
      Zhang, Danyang and
      Luo, Ao and
      Xiong, Yuxuan and
      Yu, Kai",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.343",
    pages = "4173--4185",
    abstract = "Web search is an essential way for humans to obtain information, but it{'}s still a great challenge for machines to understand the contents of web pages. In this paper, we introduce the task of web-based structural reading comprehension. Given a web page and a question about it, the task is to find an answer from the web page. This task requires a system not only to understand the semantics of texts but also the structure of the web page. Moreover, we proposed WebSRC, a novel Web-based Structural Reading Comprehension dataset. WebSRC consists of 400K question-answer pairs, which are collected from 6.4K web pages with corresponding HTML source code, screenshots, and metadata. Each question in WebSRC requires a certain structural understanding of a web page to answer, and the answer is either a text span on the web page or yes/no. We evaluate various strong baselines on our dataset to show the difficulty of our task. We also investigate the usefulness of structural information and visual features. Our dataset and baselines have been publicly available.",
}
```
## Dataset Compilation
Hunter Heidenreich; hunter (DOT) heidenreich _at_ rootsautomation *dot* com