---
dataset_info:
  features:
    - name: question
      dtype: string
    - name: context
      dtype: string
    - name: score
      dtype: float64
    - name: id
      dtype: string
    - name: title
      dtype: string
    - name: answers
      struct:
        - name: answer_start
          sequence: int64
        - name: text
          sequence: string
  splits:
    - name: train
      num_bytes: 127996360
      num_examples: 130319
    - name: dev
      num_bytes: 10772220
      num_examples: 10174
    - name: test
      num_bytes: 1792665
      num_examples: 1699
  download_size: 18702176
  dataset_size: 140561245
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: dev
        path: data/dev-*
      - split: test
        path: data/test-*
license: cc-by-sa-4.0
language:
  - nl
task_categories:
  - sentence-similarity
  - question-answering
tags:
  - sentence-transformers
---

# SQuAD-NL v2.0 for Sentence Transformers

The SQuAD-NL v2.0 dataset, modified for use in Sentence Transformers as a dataset of type "Pair with Similarity Score".

## Score

We added an extra column `score` to the original dataset. The value of `score` is 1.0 if the question has an answer somewhere in the context, and 0.0 if the context contains no answer. This allows the evaluation of embedding models that aim to pair queries with document fragments.
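
For illustration, a minimal sketch (not part of the original card) that loads the `test` split and checks that `score` mirrors whether the `answers.text` list is non-empty:

```python
from datasets import load_dataset

# Load the (human-corrected) test split
dataset = load_dataset('NetherlandsForensicInstitute/squad-nl-v2.0', split='test')

# `score` is 1.0 exactly when the question has at least one answer in the context
assert all(
    (example['score'] == 1.0) == bool(example['answers']['text'])
    for example in dataset
)
```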

## Translations

SQuAD-NL is translated from the original English-language SQuAD and XQuAD datasets. From the SQuAD-NL v2.0 README:

| Split | Source                  | Procedure                | English | Dutch   |
|-------|-------------------------|--------------------------|---------|---------|
| train | SQuAD-train-v2.0        | Google Translate         | 130,319 | 130,319 |
| dev   | SQuAD-dev-v2.0 \ XQuAD  | Google Translate         | 10,174  | 10,174  |
| test  | SQuAD-dev-v2.0 & XQuAD  | Google Translate + Human | 1,699   | 1,699   |

For testing Dutch sentence embedding models, it is therefore recommended to use only the `test` split. It is also advisable not to train your model on the other splits, because the model would then learn to answer this specific style of question.

## Example code using Sentence Transformers

```python
import pprint

from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator, SimilarityFunction


# Load the human-corrected test split
eval_dataset = load_dataset('NetherlandsForensicInstitute/squad-nl-v2.0', split='test')

# Compare cosine similarities of (question, context) pairs against the gold scores
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=eval_dataset['question'],
    sentences2=eval_dataset['context'],
    scores=eval_dataset['score'],
    main_similarity=SimilarityFunction.COSINE,
    name="squad_nl_v2.0_test",
)

model = SentenceTransformer('NetherlandsForensicInstitute/robbert-2022-dutch-sentence-transformers')

results = evaluator(model)
pprint.pprint(results)
```
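
The returned `results` dictionary contains, among other metrics, the Pearson and Spearman correlations between the model's cosine similarities and the `score` column; the exact key names (e.g. something like `squad_nl_v2.0_test_spearman_cosine`) depend on the installed sentence-transformers version.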

## Original dataset

SQuAD-NL is a derivative of the SQuAD and XQuAD datasets, and their original CC BY-SA 4.0 licenses apply.

## Code used to generate this dataset

```python
import json

import requests
from datasets import Dataset, DatasetDict


def squad(url):
    # Download one SQuAD-NL split and yield flattened rows
    response = requests.get(url)
    rows = json.loads(response.text)['data']

    for row in rows:
        yield {'question': row['question'],
               'context': row['context'],
               # 1.0 if the question is answerable from the context, 0.0 otherwise
               'score': 1.0 if row['answers']['text'] else 0.0,
               'id': row['id'],
               'title': row['title'],
               'answers': row['answers']}


if __name__ == '__main__':
    url = 'https://github.com/wietsedv/NLP-NL/raw/refs/tags/squad-nl-v1.0/SQuAD-NL/nl/{split}-v2.0.json'

    dataset = DatasetDict({
        split: Dataset.from_generator(squad, gen_kwargs={'url': url.format(split=split)})
        for split in ('train', 'dev', 'test')
    })

    dataset.push_to_hub('NetherlandsForensicInstitute/squad-nl-v2.0')
```