
Dataset Usage

Description

The Mimic4Dataset generates its data by executing the pipeline available at https://github.com/healthylaife/MIMIC-IV-Data-Pipeline on the raw Mimic-IV data.

Function Signature

load_dataset('thbndi/Mimic4Dataset', task, mimic_path=mimic_data, config_path=config_file, encoding=encod, generate_cohort=gen_cohort, val_size=size, cache_dir=cache)

Arguments

  1. task (string):

    • Description: Specifies the task you want to perform with the dataset.
    • Default: "Mortality"
    • Note: Possible values: 'Phenotype', 'Length of Stay', 'Readmission', 'Mortality'.
  2. mimic_path (string):

    • Description: Complete path to the Mimic-IV raw data on the user's machine.
    • Note: You need to provide the appropriate path where the Mimic-IV data is stored. The path should end with the Mimic version (e.g., mimiciv/2.2). Supported versions: 2.2 and 1.0, as provided by the authors of the pipeline.
  3. config_path (string, optional):

    • Description: Path to the configuration file for the cohort generation choices (more info in '/config/readme.md').
    • Default: Configuration file provided in the 'config' folder.
  4. encoding (string, optional):

    • Description: Data encoding option for the features.
    • Options: "concat", "aggreg", "tensor", "raw", "text"
    • Default: "concat"
    • Note: Choose one of the following options for data encoding:
      • "concat": Concatenates the one-hot encoded diagnoses, demographic data vector, and dynamic features at each measured time instant, resulting in a high-dimensional feature vector.
      • "aggreg": Concatenates the one-hot encoded diagnoses, demographic data vector, and dynamic features, where each item_id is replaced by the average of the measured time instants, resulting in a reduced-dimensional feature vector.
      • "tensor": Represents each feature as an 2D array. There are separate arrays for labels, demographic data ('DEMO'), diagnosis ('COND'), medications ('MEDS'), procedures ('PROC'), chart/lab events ('CHART/LAB'), and output events data ('OUT'). Dynamic features are represented as 2D arrays where each row contains values at a specific time instant.
      • "raw": Provide cohort from the pipeline without any encoding for custom data processing.
      • "text": Represents diagnoses as text suitable for BERT or other similar text-based models.
      • For 'concat' and 'aggreg' the composition of the vector is given in './data/dict/"task"/features_aggreg.csv' or './data/dict/"task"/features_concat.csv' file and in 'features_names' column of the dataset.
  5. generate_cohort (bool, optional):

    • Description: Determines whether to generate a new cohort from Mimic-IV data.
    • Default: True
    • Note: Set it to True to generate a cohort, or False to skip cohort generation.
  6. val_size, test_size (float, optional):

    • Description: Proportions of the dataset used for validation and testing.
    • Default: 0.1 for the validation size and 0.2 for the test size.
    • Note: Can be set to 0.
  7. cache_dir (string, optional):

    • Description: Directory where the processed dataset will be cached.
    • Note: Providing a separate cache directory for each encoding type can avoid errors when changing the encoding type (a short sketch follows this list).
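
The following is only a minimal sketch of the cache_dir note above; the directory layout is an assumption for illustration, not something required by the loader:

from datasets import load_dataset

# Hypothetical layout: one cache sub-directory per encoding, so switching
# encodings does not reuse a cache that was built for a different one.
encoding = "aggreg"
cache = f"/path/to/cache_dir/{encoding}"
dataset = load_dataset('thbndi/Mimic4Dataset', task="Mortality", mimic_path="/path/to/mimic_data", encoding=encoding, cache_dir=cache)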

Example Usage

import datasets
from datasets import load_dataset

# Example 1: Load dataset with default settings
dataset = load_dataset('thbndi/Mimic4Dataset', task="Mortality", mimic_path="/path/to/mimic_data")

# Example 2: Load dataset with custom settings
dataset = load_dataset('thbndi/Mimic4Dataset', task="Phenotype", mimic_path="/path/to/mimic_data", config_path="/path/to/config_file", encoding="aggreg", generate_cohort=False, val_size=0.2, cache_dir="/path/to/cache_dir")

Please note that the provided examples are for illustrative purposes only, and you should adjust the paths and settings based on your actual dataset and specific use case.
