---
annotations_creators:
  - no-annotation
language:
  - en
language_creators:
  - found
  - other
license:
  - mit
multilinguality:
  - monolingual
pretty_name: MedQA Textbook (English) Corpus
size_categories:
  - 10K<n<100K
source_datasets:
  - extended|medmcqa
tags:
  - medical
  - clinical medicine
  - biology
task_categories:
  - text-generation
task_ids:
  - language-modeling
---

# Dataset Card for MedQA English Textbooks


## Dataset Description

### Dataset Summary

MedQA includes "prepared text materials from a total of 18 English medical textbooks that have been widely used by medical students and USMLE takers" [Jin, Di, et al. 2020].

This dataset is derived from those medical textbooks (the English ones), providing subsets that coincide with medical subspecialties for use in pre-training medical LLMs on gold-standard domain text.

### Languages

English

## Dataset Structure

### Data Instances

Records have the following structure:

```json
{"text": "The manifestations of acute intestinal obstruction depend on the nature of the underlying [..]",
 "source": "textbooks/en/InternalMed_Harrison.txt"}
```

## Dataset Creation

### Curation Rationale

The MedQA dataset includes a raw text corpus that is excluded from most of its derivations; this raw text is valuable for pre-training medical LLMs.

### Source Data

#### Initial Data Collection and Normalization

LangChain's RecursiveCharacterTextSplitter is used for chunking, and the most commonly appearing non-ASCII characters are replaced with readable equivalents. The textbooks are then grouped into separate subsets, indicated below along with the textbooks each comprises:

- **Core Clinical Medicine** (`core_clinical`)
  - Anatomy_Gray.txt, First_Aid_Step1.txt, First_Aid_Step2.txt, Immunology_Janeway.txt, InternalMed_Harrison.txt, Neurology_Adams.txt, Obstentrics_Williams.txt, Pathoma_Husain.txt, Pediatrics_Nelson.txt, and Surgery_Schwartz.txt
- **Basic Biology** (`basic_biology`)
  - Biochemistry_Lippincott.txt, Cell_Biology_Alberts.txt, Histology_Ross.txt, Pathology_Robbins.txt, and Physiology_Levy.txt
- **Pharmacology** (`pharmacology`)
  - Pharmacology_Katzung.txt
- **Psychiatry** (`psychiatry`)
  - Psichiatry_DSM-5.txt
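The exact character map used during normalization is not published; the sketch below illustrates the kind of replacement described above, with a hypothetical mapping of common non-ASCII characters (curly quotes, dashes, ligatures) to readable ASCII equivalents.

```python
# Hypothetical normalization sketch: the actual substitutions used for this
# dataset are not documented, so this mapping is only illustrative.
NON_ASCII_MAP = {
    "\u201c": '"',   # left double quotation mark
    "\u201d": '"',   # right double quotation mark
    "\u2018": "'",   # left single quotation mark
    "\u2019": "'",   # right single quotation mark
    "\u2013": "-",   # en dash
    "\u2014": "-",   # em dash
    "\u00a0": " ",   # non-breaking space
    "\ufb01": "fi",  # "fi" ligature
}

def normalize(text: str) -> str:
    """Replace common non-ASCII characters with readable ASCII equivalents."""
    for src, dst in NON_ASCII_MAP.items():
        text = text.replace(src, dst)
    return text
```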

The basic biology subset of the corpus, for example, can then be loaded via:

```python
In [1]: import datasets
In [2]: ds = datasets.load_dataset('cogbuji/medqa_corpus_en', 'basic_biology')
Generating train split: 50386 examples [00:00, 92862.56 examples/s]
In [3]: ds
Out[3]:
DatasetDict({
    train: Dataset({
        features: ['text', 'source'],
        num_rows: 50386
    })
})
```