---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
- other
license:
- mit
multilinguality:
- monolingual
pretty_name: MedQA Textbook (English) Corpus
size_categories:
- 10K<n<100K
source_datasets:
- extended|medmcqa
tags:
- medical
- clinical medicine
- biology
task_categories:
- text-generation
task_ids:
- language-modeling
---
# Dataset Card for MedQA English Textbooks
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
![image/png](https://huggingface.co/datasets/cogbuji/medqa_corpus_en/resolve/main/shelves.png?download=true)
## Dataset Description
### Dataset Summary
[MedQA](https://github.com/jind11/MedQA) includes
> "prepared text materials from a total of 18 English medical textbooks that have been widely used by medical students and USMLE takers" [Jin, Di, et al. 2020].

This dataset is derived from the English-language textbooks in that collection, providing subsets that coincide with medical
subspecialties for use in pre-training medical LLMs on gold-standard domain text.
### Languages
English
## Dataset Structure
### Data Instances
Records have the following structure:
```json
{"text": "The manifestations of acute intestinal obstruction depend on the nature of the underlying [..]",
"source": "textbooks/en/InternalMed_Harrison.txt"}
```
## Dataset Creation
### Curation Rationale
The MedQA dataset includes a raw text corpus that is excluded from most of its derivations, and this raw text is
valuable for pre-training medical LLMs.
### Source Data
#### Initial Data Collection and Normalization
LangChain's RecursiveCharacterTextSplitter is used for chunking, and the most commonly appearing non-ASCII characters
are replaced with readable equivalents (see the sketch after the list below). The textbooks are then broken into separate subsets, indicated below along with
the textbooks they comprise:
- Core Clinical Medicine (`core_clinical`)
  - Anatomy_Gray.txt, First_Aid_Step1.txt, First_Aid_Step2.txt, Immunology_Janeway.txt, InternalMed_Harrison.txt, Neurology_Adams.txt, Obstentrics_Williams.txt, Pathoma_Husain.txt, Pediatrics_Nelson.txt, and Surgery_Schwartz.txt
- Basic Biology (`basic_biology`)
  - Biochemistry_Lippincott.txt, Cell_Biology_Alberts.txt, Histology_Ross.txt, Pathology_Robbins.txt, and Physiology_Levy.txt
- Pharmacology (`pharmacology`)
  - Pharmacology_Katzung.txt
- Psychiatry (`psychiatry`)
  - Psichiatry_DSM-5.txt
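The chunking and normalization step can be sketched roughly as follows. This is a minimal sketch assuming LangChain's splitter; the `chunk_size`, `chunk_overlap`, and replacement table are illustrative assumptions, not the curators' published settings:
```python
# Sketch of the chunking/normalization step. The chunk parameters and the
# replacement table are illustrative assumptions, not the curators' settings.
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Hypothetical mapping of common non-ASCII characters to readable equivalents
REPLACEMENTS = {"\u201c": '"', "\u201d": '"', "\u2018": "'", "\u2019": "'", "\u2013": "-"}

def normalize(text: str) -> str:
    for char, replacement in REPLACEMENTS.items():
        text = text.replace(char, replacement)
    return text

splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)

source = "textbooks/en/InternalMed_Harrison.txt"
with open(source, encoding="utf-8") as f:
    chunks = splitter.split_text(normalize(f.read()))

# Each chunk becomes one record with the structure shown above
records = [{"text": chunk, "source": source} for chunk in chunks]
```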
Each subset is exposed as a dataset configuration, so you can load the basic biology subset of the corpus via:
```python
In [1]: import datasets
In [2]: ds = datasets.load_dataset('cogbuji/medqa_corpus_en', 'basic_biology')
Generating train split: 50386 examples [00:00, 92862.56 examples/s]
In [3]: ds
Out[3]:
DatasetDict({
train: Dataset({
features: ['text', 'source'],
num_rows: 50386
})
})
```
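Each record retains its `source` path, so you can narrow a subset down to a single textbook. A usage sketch, assuming the split and fields shown above:
```python
# Keep only the chunks sourced from Cell_Biology_Alberts.txt
alberts = ds["train"].filter(
    lambda record: record["source"].endswith("Cell_Biology_Alberts.txt")
)
print(alberts.num_rows, "chunks from Cell_Biology_Alberts.txt")
```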