---
license: cc-by-sa-4.0
tags:
- medical
---
Open-Patients is an aggregated dataset of patient notes drawn from four open-source collections of public patient notes.
There are a total of 180,142 patient descriptions across these four datasets, all provided in the `Open-Patients.jsonl` file. Each item in the dataset has two attributes (a loading sketch follows the list):
1. `_id` - indicates which dataset the item came from, along with the item's index number within that dataset.
2. `description` - the exact patient note extracted from the source dataset of patient notes.
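
As a quick orientation, here is a minimal sketch of loading the file with pandas; the local path `Open-Patients.jsonl` is assumed to be the downloaded file:

```python
import pandas as pd

# Each line of the JSONL file is one JSON object with `_id` and `description`.
notes = pd.read_json("Open-Patients.jsonl", lines=True)

print(len(notes))              # 180,142 patient descriptions
print(notes.columns.tolist())  # ['_id', 'description']
print(notes.iloc[0]["_id"])    # an id prefixed trec-cds-, trec-ct-, usmle-, or pmc-
```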
The patient notes and questions come from the following four datasets (a sketch for grouping notes by their `_id` prefix follows the list):
1. `Text REtrieval Conference (TREC) Clinical Decision Support (CDS) track`. This track consists of datasets of 30 patient notes each for three separate years, 2014 through 2016. The track challenged participants to retrieve articles relevant to answering potential questions about a particular patient note. The [2014](https://www.trec-cds.org/2014.html) and [2015](https://www.trec-cds.org/2015.html) notes are synthetic patient notes hand-written by individuals with medical training, while the [2016](https://www.trec-cds.org/2016.html) dataset consists of real patient summaries taken from electronic health records.
The `_id` for these notes follows the structure `trec-cds-{year}-{note number}`, where `year` is between 2014 and 2016 and `note number` is the index of the note within that year's dataset.
2. `Text REtrieval Conference (TREC) Clinical Trials (CT) track`. This track consists of 125 patient notes: [50 notes](https://www.trec-cds.org/2021.html) from 2021 and [75 notes](https://www.trec-cds.org/2022.html) from 2022. The track asked participants to retrieve clinical trials from ClinicalTrials.gov that best match the symptoms described in a patient note. The notes from both years are synthetic notes, written by individuals with medical training to simulate an admission statement from an electronic health record (EHR). The `_id` for these notes follows the structure `trec-ct-{year}-{note number}`, where `year` is either 2021 or 2022 and `note number` is the index of the note within that year's dataset.
3. `MedQA-USMLE (US track)`. This [dataset](https://paperswithcode.com/dataset/medqa-usmle) consists of 14,369 multiple-choice questions from the United States Medical Licensing Examination (USMLE), where a clinical summary of a patient is given and a question is asked based on the information provided. Because not all of the questions involve a patient case, we filter for those that do, leaving 12,893 questions from this dataset. These questions were curated as part of the MedQA dataset to examine retrieval methods for extracting relevant documents and augmenting language models with them to help answer a question. The `_id` for these notes follows the format `usmle-{question index number}`, where `question index number` is the index of the question in the `US_qbank.jsonl` file of the MedQA dataset, which contains all USMLE questions.
4. `PMC-Patients`. This [dataset](https://pmc-patients.github.io/) consists of 167,034 patient notes curated from PubMed Central (PMC). The purpose of this dataset is to benchmark the performance of different Retrieval-based Clinical Decision Support Systems (ReCDS): for a given patient note, it evaluates a model's ability to find similar patient notes and relevant articles from PMC. The `_id` for these notes follows the format `pmc-{patient id}`, where `patient id` is the `patient_uid` attribute of each patient note in the `pmc-patients.json` file of the PMC-Patients dataset.
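
Since the `_id` prefixes above identify each note's source, a small script can split the file back into its four constituent datasets. This is a minimal sketch built on those documented prefixes; the helper name `source_of` is hypothetical:

```python
import json
from collections import Counter

def source_of(note_id: str) -> str:
    """Map an `_id` to its source dataset via the documented prefixes.
    (Hypothetical helper; the prefixes are the ones listed above.)"""
    if note_id.startswith("trec-cds-"):
        return "TREC CDS"
    if note_id.startswith("trec-ct-"):
        return "TREC CT"
    if note_id.startswith("usmle-"):
        return "MedQA-USMLE"
    if note_id.startswith("pmc-"):
        return "PMC-Patients"
    return "unknown"

counts = Counter()
with open("Open-Patients.jsonl") as f:
    for line in f:
        record = json.loads(line)
        counts[source_of(record["_id"])] += 1

print(counts)  # the per-source totals should sum to 180,142
```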
We hope this dataset of patient summaries and medical examination questions can help researchers benchmark the performance of large language models (LLMs) on medical entity extraction, and also benchmark LLMs' ability to use these extracted entities to perform different medical calculations.