HealthSage AI note-to-fhir
HealthSage AI's LLM is a fine-tuned version of Meta's Llama 2 13B that creates structured information (FHIR resources) from unstructured clinical notes (plain text).
The model is optimized to process English notes and populate 10 FHIR resource types. For a full description of the scope and limitations, see the Performance and limitations section below.
LoRA Adapter specs
- Base Model: "meta-llama/Llama-2-13b-chat-hf"
Usage:
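This card does not ship a runnable snippet; below is a minimal sketch of loading the LoRA adapter on top of the base checkpoint with PEFT and transformers. The adapter repository id, the prompt wording and the generation settings are assumptions for illustration only, not confirmed by this card.

```python
# Minimal usage sketch. Assumptions (not confirmed by this card): the adapter repo id,
# the prompt format, and the generation settings are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "meta-llama/Llama-2-13b-chat-hf"
ADAPTER = "healthsageai/note-to-fhir"  # hypothetical adapter repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, ADAPTER)

# A short English note about a single patient, in line with the model's scope.
note = "Patient John Doe, born 12 March 1974, received an influenza vaccination on 2 October 2023."
inputs = tokenizer(note, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=1024, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```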
Training procedure
The following bitsandbytes quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
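For reference, the listed values correspond to a transformers BitsAndBytesConfig along the lines of the sketch below; this reproduces the configuration above and is not a snippet taken from the training code.

```python
# Sketch of the listed 4-bit quantization config expressed as a transformers BitsAndBytesConfig.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```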
Framework versions
- PEFT 0.5.0
Performance and limitations
Scope of the model
This open-sourced beta model is trained within the following scope:
- FHIR R4
- 10 Resource types:
- Bundle
- Patient
- Encounter
- Practitioner
- Organization
- Immunization
- Observation
- Condition
- AllergyIntolerance
- Procedure
- English language
The following features are out of scope of the current release:
- Support for coding systems such as SNOMED CT and LOINC.
- FHIR extensions and profiles
- Any language, resource type or FHIR version not mentioned under "in scope".
We are continuously training our model and will regularly make updates available that address some of these items and more.
Furthermore, please note:
- No relative dates: HealthSage AI Note-to-FHIR will not produce accurate FHIR datetime fields from text containing relative time information such as "today" or "yesterday". Similarly, a statement like "Patient John Doe is 50 years old." will not yield an accurate birth date, since the precise day and month of birth are unknown and the LLM is not aware of the current date.
- Designed as Patient-centric: HealthSage AI Note-to-FHIR is trained on notes describing one patient each.
- <4k context window: The training data for this application contained at most 3686 tokens, which is 90% of the Llama-2 context window (4096 tokens).
- Explicit null: If a FHIR element is not present in the provided text, it is explicitly predicted as NULL. Explicitly modeling the absence of information reduces the chance of hallucinations.
- Uses Bundles: For consistency and simplicity, all predicted FHIR resources are returned in a Bundle (an illustrative sketch follows this list).
- Conservative estimates: Our model is designed to stick to the information explicitly provided in the text.
- IDs are local: ID fields and references are local enumerations (1, 2, 3, etc.). They have not yet been tested for referential correctness.
- Generation design: The model is designed to generate a separate resource if the text contains information about that resource beyond what can be expressed in the reference fields of related resources.
- Test results: Our preliminary results suggest that HealthSage AI Note-to-FHIR outperforms the GPT-4 foundation model within the scope of our application, both in FHIR syntax validity and in its ability to replicate the original FHIR resources in our test dataset. We are currently analyzing the model's performance on out-of-distribution and out-of-scope data.
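To make the Bundle, explicit-null and local-ID conventions above concrete, the output shape might look roughly like the sketch below. The exact fields and null placement are assumptions for illustration and are not taken from this card.

```python
# Illustrative sketch only: an output-shaped dict with explicit nulls for absent elements.
# The exact fields and null conventions are assumptions, not taken from this card.
example_output = {
    "resourceType": "Bundle",
    "type": "collection",
    "entry": [
        {
            "resource": {
                "resourceType": "Patient",
                "id": "1",                      # IDs are local enumerations
                "name": [{"family": "Doe", "given": ["John"]}],
                "birthDate": None,              # element not stated in the note -> explicit null
                "gender": None,
            }
        }
    ],
}
```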