---
dataset_info:
  features:
    - name: query_id
      dtype: string
    - name: query
      dtype: string
    - name: positive_passages
      list:
        - name: docid
          dtype: string
        - name: text
          dtype: string
        - name: title
          dtype: string
    - name: negative_passages
      list:
        - name: docid
          dtype: string
        - name: text
          dtype: string
        - name: title
          dtype: string
    - name: subset
      dtype: string
  splits:
    - name: train
      num_bytes: 101651201606
      num_examples: 1602667
  download_size: 57281610524
  dataset_size: 101651201606
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - text-retrieval
size_categories:
  - 1M<n<10M
---

# BGE Training Dataset (Only Retrieval Datasets)

This is a port of the original cfli/bge-full-data dataset into the Tevatron format, containing the following 15 retrieval splits on HF:

```python
# Splits used for training
'sts',
'msmarco_passage',
'hotpotqa',
'msmarco_document',
'nli',
'eli5',
'squad',
'fiqa',
'nq',
'arguana',
'trivial',
'fever',
'quora',
'stack_overflow_dup_questions',
'scidocsrr'
```
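Each row in the Tevatron format pairs a query with lists of positive and negative passages and carries a `subset` field naming its source split, so an individual split can be recovered by filtering on that field. A minimal sketch under stated assumptions: the sample record and its hash-style IDs are made up for illustration, and in practice rows would come from `datasets.load_dataset` on this repository rather than a hand-built list.

```python
# Sketch of the Tevatron-format schema declared in the metadata above,
# plus per-split filtering on the `subset` field.
# The record is a fabricated illustration, not real dataset content.

records = [
    {
        "query_id": "0cc175b9c0f1b6a831c399e269772661",  # md5-style id (illustrative)
        "query": "what is dense retrieval?",
        "positive_passages": [
            {"docid": "900150983cd24fb0d6963f7d28e17f72",
             "title": "",  # title is empty in this port (see note below)
             "text": "Dense retrieval encodes queries and documents as vectors ..."},
        ],
        "negative_passages": [
            {"docid": "f96b697d7cb7938d525a2f31aaf161d0",
             "title": "",
             "text": "An unrelated passage ..."},
        ],
        "subset": "msmarco_passage",
    },
]

def select_subset(rows, name):
    """Keep only rows belonging to the given retrieval split."""
    return [r for r in rows if r["subset"] == name]

print(len(select_subset(records, "msmarco_passage")))  # -> 1
print(len(select_subset(records, "quora")))            # -> 0
```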

## Note (Postprocessing Updates)

- We pushed the whole document from the original dataset into the `text` field, so the `title` field is empty.
- The original document and query IDs were unavailable, so we created unique query and document IDs by computing the MD5 hash of the text:
```python
import hashlib

def get_md5_hash(text):
    """Calculates the MD5 hash of a given string.

    Args:
        text: The string to hash.

    Returns:
        The MD5 hash of the string as a hexadecimal string.
    """
    text_bytes = text.encode('utf-8')  # Encode the string to bytes
    return hashlib.md5(text_bytes).hexdigest()
```
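Hashing is deterministic, so the same text always maps to the same 32-character hexadecimal ID; identical passages therefore receive identical `docid` values. A small self-contained check (not from the original card):

```python
import hashlib

def get_md5_hash(text):
    """MD5 hex digest of a string (same helper as above)."""
    return hashlib.md5(text.encode('utf-8')).hexdigest()

# Identical text -> identical, stable identifier of fixed length.
print(get_md5_hash("hello"))  # -> 5d41402abc4b2a76b9719d911017c592
assert get_md5_hash("hello") == get_md5_hash("hello")
assert len(get_md5_hash("any passage text")) == 32
```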

Please refer to the original cfli/bge-full-data dataset for further details and licensing information.